Oral health related behaviors among adult Tanzanians: a national pathfinder survey
Background: The oral health education programs which have been organised and delivered in Tanzania were not based on a thorough understanding of the behaviours which influence oral health. Therefore, evaluation of these programs became difficult. This study aimed to investigate oral health related behaviours and their determinants among Tanzanian adults. Methods: A national pathfinder cross-sectional survey was conducted in 2006, involving 1759 respondents from the six geographic zones of mainland Tanzania. Frequency distributions, chi-square tests and multiple logistic regression analyses were performed using SPSS version 13.0. Results: The rate of abstinence from alcohol for the past 30 days was 61.6% and that of lifetime smoking was 16.7%, with males being more likely to smoke (OR 9.2, CI 6.3-12.9, p < 0.001) and drink alcohol (OR 1.5, CI 1.2-1.8, p < 0.001). Multiple regression analysis revealed that having dental pain (OR 0.7, CI 0.5-0.8; p < 0.001) and being minimally educated (OR 0.48, CI 0.4-0.6; p < 0.001) reduced the likelihood of having a high sugar score, whereas being male (OR 1.5, CI 1.2-1.8; p < 0.001), urban (OR 1.9, CI 1.5-2.3; p < 0.001), and young (OR 1.5, CI 1.2-1.8; p < 0.001) increased that likelihood. Urban residents were less likely to take alcohol (OR 0.7, CI 0.6-0.9; p < 0.01) or smoke cigarettes (OR 0.7, CI 0.6-0.9), and were less likely not to eat fruits (OR 0.3, CI 0.2-0.4; p < 0.001), not to attend a dental clinic (OR 0.5, CI 0.4-0.7; p < 0.001), not to use factory-made toothbrushes (OR 0.1, CI 0.08-0.17; p < 0.001) and not to use toothpaste (OR 0.1, CI 0.1-0.2; p < 0.001) than their rural counterparts. More rural (13.2%) than urban (4.6%) residents used charcoal. Conclusion: The findings of this study demonstrated socio-demographic disparities in oral health related behaviors, while dental pain was associated with lower consumption of sugar and a higher likelihood of taking alcohol.
Background
Since the early 1980s, the Ministry of Health in Tanzania has advocated the provision of oral health education to all Tanzanians. Most of the oral health education programs organised and delivered in Tanzania were not based on a thorough understanding of the behaviours conducive or detrimental to oral health in the Tanzanian population. Rather, they were based on studies of behaviours and disease conditions that were limited to particular segments of the population. Consequently, they have not been systematically evaluated for their effectiveness. To improve the organization and delivery of oral health education, a national pathfinder survey was organised to gather baseline information that would guide future planning of oral health services. This study explores oral health related behaviours as part of the national pathfinder survey.
Oral diseases are linked with the lifestyle of the individual person, and their prevention depends on adopting lifestyles that are conducive to oral health. The important oral health behaviours that have been shown to have a positive impact on oral health include tooth-brushing with fluoridated toothpaste, inter-dental cleaning with toothpicks or dental floss, and dental attendance [1][2][3][4]. Consumption of fresh fruit is important for the maintenance of good health; moreover, being less cariogenic, fruits may be good alternatives to sugared snacks for the prevention of dental caries [5]. Much is known about the damaging oral effects of alcohol and tobacco consumption [6]. Excessive consumption of alcohol, especially hard drinks, and of tobacco has been strongly linked with the occurrence of oral cancer. Other detrimental practices, such as the use of charcoal abrasives, are still prevalent in some societies, and their prolonged use may have a negative impact on oral health. A variety of factors account for individual differences in the propensity to undertake oral health behaviours, including demographic, social, emotional and personality related factors as well as cognitive factors [7]. Socio-demographic variables have been shown to be associated with the performance of oral health related behaviours in adolescent and young adult populations. In industrialized countries, females and those who report higher levels of education and income are more likely to engage in oral hygiene behaviours, smoke less and consume sugar less frequently than males and people of lower socio-economic status [8,9]. In developing countries, however, intake of sugared snacks is generally most common among females, the higher educated and those residing in urban areas [10,11]. Understanding the influence of demographic factors on people's behaviour is crucial in planning intervention programs. This information guides the targeting of health education messages to those who have a high likelihood of performing unhealthy behaviours across demographic factors.
Studies conducted in sub-Saharan African countries have shown that low to moderate proportions of the populations confirm daily intake of sugared foods and drinks [10,12]. Much remains to be done to counter the cariogenic effects of sugars on teeth by using fluorides. Moreover, it is a well-established fact that fluoride toothpaste has accounted for much of the decline in caries experience in many industrialised countries. In addition, fluoride toothpaste can be distributed to many retail shops together with other domestic consumables. However, the affordability of toothpaste and the availability of the fluoride ion in toothpaste in its bioactive form pose a challenge to many low-income countries [13]. In this era of trade liberalization [14], few will disagree that caries experience is likely to increase in low-income countries that have access to sugars and minimal exposure to fluorides. Authorities in Tanzania have to strike a balance between opening links with global markets and building healthy public policies that protect oral health. These policies should promote the use and availability of affordable fluoride toothpaste. In addition, they should support efforts aimed at raising the awareness of communities about the aetiology and prevention of dental caries in order to improve the rational use of sugar-containing foodstuffs and fluoride-containing toothpaste.
Studies conducted in Africa indicate that the majority of people, and more females than males [15], engage in daily tooth cleaning, with more rural than urban residents using chewing sticks [16]. This indicates that residents of the African continent could benefit from routine use of fluoridated toothpaste for dental caries prevention. In the early 1990s, the lifetime prevalence of dental attendance among the adult Tanzanian population was 51% among men and 43% among women [17]. The majority seek dental care for pain relief when the carious lesion has destroyed the tooth crown, often with abscess formation. In most cases, the treatment rendered is tooth extraction [18]. Therefore, health educators need to emphasize the importance of dental visits for routine check-ups to allow early detection and treatment of dental caries by preventive methods such as fissure sealants and restorations.
Regarding lifetime tobacco use, prevalence rates across studies in some African countries vary from below 10% to over 30%, with higher rates observed among males than females and among urban than rural residents [11,19,20]. A common assumption all over sub-Saharan Africa is that cigarette smoking is a habit of affluent people, whereas other forms of tobacco, such as snuff, are most common among the less educated and those residing in rural areas. However, these consumption patterns are likely to change with time due to shifting market targets. As correctly put by Legressley et al. [21], the marked decline in the sales of tobacco in the industrialised world seems to have been compensated by the rapid development of a new generation of smokers in sub-Saharan Africa.
The link between behaviour and oral health cannot be over-emphasized. The task ahead of us is to understand behaviour and its potential determinants that could be targeted by oral health interventions. The government of Tanzania, in its policy guidelines for oral health, indicated its intention to intensify oral health education activities in the country [22]. In line with this desire, understanding the behaviours affecting oral health status will be core to the structuring of evidence-based oral health education programmes. The aim of this study, therefore, was to investigate oral health related behaviours and their determinants among Tanzanian adults.
Selection of study sites and sampling procedure
The national pathfinder survey methodology described in the WHO Oral Health Surveys - Basic Methods [23] was used to select the sampling sites and sample size. However, the number of rural clusters was slightly increased to redress the urban-rural disproportion. Mainland Tanzania has 6 geographical zones. From two zones, 2 study sites, the cities of Dar es Salaam and Mbeya, were purposively selected. Six clusters, 4 from Dar es Salaam and 2 from Mbeya, were purposively selected to represent the urban population. From each of the remaining four zones, one region was randomly selected as a study site from a list of the regions constituting the given zone. From each of these study sites, 2 villages were purposively selected as clusters for the rural population. All adults in the selected clusters were eligible for inclusion in the study. A total of 14 clusters, 6 from urban and 8 from rural areas, were selected. From each cluster, 150 adults aged 18 years and above were to be interviewed. Therefore, a total of 2,100 subjects (900 urban, 1,200 rural) were targeted.
To facilitate stratification of respondents by sex and age, each interviewer was provided with a matrix table covering two sexes (male and female) and 5 age-groups (18-25, 26-35, 36-45, 46-55, and 56+). Each age and sex category had a predetermined quota of 15 respondents. The interviewer had to tally each interviewee in the appropriate age and sex category. At the end of the study period, some age and sex categories in the matrix were not completely filled due to difficulties in finding people in their households during the daytime. Only 1,759 out of the targeted 2,100 adults were interviewed, giving a response rate of 84%.
Procedure used to select study participants
This was a house-to-house survey. Cities, towns and villages in Tanzania are divided into smaller administrative units of 10-20 households called streets. Interviewers reported to the city, town or village authorities, who assigned one street leader to lead the interviewers from house to house in his/her street until all the adults in the street who were present at the time of the study had been interviewed. The street leader then handed over the responsibility of leading the interviewers to the next street leader. This process continued until the interviewers had interviewed the required number of adults in each age-group and sex category.
Ethical clearance and procedure for obtaining informed consent from respondents
The ethical clearance for conducting this study was obtained from the Ministry of Health of the United Republic of Tanzania. The street leaders who led the interviewers introduced them to the family members, and the interviewers explained the aim of the visit. After the household members had understood the aim of the study, all members aged 18 years and above were requested to participate in the study by responding to the questions posed by the interviewers. Members were informed that they were free to participate or not to participate. It was agreed before commencement of the study that a person who accepted to be interviewed after these explanations would be considered to have consented to participate in the study.
Development of a questionnaire
The data reported in this study are part of a national pathfinder survey. The items measuring oral health related behavior and dental pain experience were adopted from the WHO simplified oral health questionnaire for adults [24] and amended for use in Tanzania. Demographic variables were also included in this questionnaire.
The English questionnaire was translated into the national language and pre-tested among 20 adults in each of the 6 administrative zones for meaning and clarity. Pre-testing was conducted in all 6 zones to capture possible differences in the interpretation of words and phrases. The final version was administered twice to a group of respondents to assess reliability in terms of temporal stability, with kappa values ranging from 0.80 to 1.0, indicating high temporal stability of the instrument.
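For illustration, such a temporal-stability check could be reproduced by computing Cohen's kappa for each questionnaire item across the two administrations. The sketch below is a minimal example using scikit-learn rather than the authors' SPSS workflow; the item name and responses are hypothetical.

```python
# Sketch: per-item test-retest (temporal stability) check with Cohen's kappa.
# Item names and responses below are hypothetical illustrations.
from sklearn.metrics import cohen_kappa_score

def temporal_stability(first_round, second_round, items):
    """Return Cohen's kappa for each questionnaire item asked twice."""
    return {item: cohen_kappa_score(first_round[item], second_round[item])
            for item in items}

# Hypothetical answers of five respondents to one item, asked twice.
first = {"sweets_frequency": [1, 3, 2, 6, 4]}
second = {"sweets_frequency": [1, 3, 2, 5, 4]}
print(temporal_stability(first, second, ["sweets_frequency"]))
```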
Data analysis
Construction of dummy variables and coding for analysis
The questionnaire assessed the consumption of sugary snacks, fruits, tobacco and alcohol, dental visits, and the use of toothbrushes, mswaki (chewing sticks), toothpaste, charcoal, toothpicks and dental floss. Consumption of sugared snacks and drinks, and of fruits, was measured on a scale ranging from (1) = rare/never to (6) = several times a day. Tobacco and alcohol consumption was measured from (1) = never to (6) = every day. To allow for cross-tabulation and logistic regression, the ordinal scale was dichotomized into "category 0", comprising the original score (1), and "category 1", comprising the original scores (2-6). The dichotomized score for fruit consumption was then reversed (yes = 0 and no = 1). Dental visits and the use of factory-made toothbrushes, chewing sticks, dental floss, toothpicks, toothpaste and charcoal were measured on a dichotomized scale: (0) = yes and (1) = no. There were six items assessing the consumption of sugared snacks and drinks (biscuits/cakes, doughnuts, sodas, jam/honey, chewing gums and sweets/chocolate) [Table 1]. The six items were added up to construct a sugar score (mean 14.5; SD 5.5). The score ranged from 6, indicating rare or no consumption, to 36, indicating frequent consumption of all six food items. The score was dichotomized at an approximate median split into (0) = rare, including scores 6-13, and (1) = frequent consumption of sugar, including scores 14-36.
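A minimal sketch of this score construction, assuming the six snack items are stored in a pandas DataFrame with the 1-6 codes described above (the column names are hypothetical):

```python
# Sketch of the summative sugar score and its median-split dichotomization.
import pandas as pd

SNACK_ITEMS = ["biscuits_cakes", "doughnuts", "sodas",
               "jam_honey", "chewing_gum", "sweets_chocolate"]

def add_sugar_score(df: pd.DataFrame) -> pd.DataFrame:
    # Summative index: 6 = rare/no consumption of all items,
    # 36 = frequent consumption of all six items.
    df["sugar_score"] = df[SNACK_ITEMS].sum(axis=1)
    # Approximate median split used in the text:
    # 6-13 -> 0 (rare), 14-36 -> 1 (frequent consumption).
    df["sugar_high"] = (df["sugar_score"] >= 14).astype(int)
    return df
```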
The independent variables considered in this study were residence, sex, age, education and experience of dental pain/discomfort during the past 12 months. These were coded as follows: residence (0) = rural, (1) = urban; sex (0) = female, (1) = male; age was dichotomized at the median split into young adults (18-36 years) and older adults (37+ years), then coded as (0) = older adults, (1) = young adults; dental pain experience was dichotomized into (0) = no pain and (1) = pain experienced. Education was categorized as (0) = secondary education or more and (1) = primary education or less.
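This coding could be implemented, for example, as below; the raw column names and values are assumptions, while the resulting 0/1 codes follow the scheme described above.

```python
# Sketch of recoding the independent variables into 0/1 dummies, following
# the scheme above; the raw column names and values are hypothetical.
import pandas as pd

def code_predictors(df: pd.DataFrame) -> pd.DataFrame:
    df["urban"] = (df["residence"] == "urban").astype(int)     # 0 rural, 1 urban
    df["male"] = (df["sex"] == "male").astype(int)             # 0 female, 1 male
    df["young"] = (df["age"] <= 36).astype(int)                # median split of age
    df["dental_pain"] = (df["pain_12m"] == "yes").astype(int)  # 0 no, 1 yes
    df["low_education"] = (df["education"] == "primary_or_less").astype(int)
    return df
```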
Statistical analysis
Data were analyzed using the Statistical Package for Social Sciences (SPSS) version 13. Cross-tabulations and chi-square statistics were used to assess bivariate relationships. Multivariate analysis was conducted using multiple logistic regression. The p-value for statistical significance was set at 0.05. Forced-entry multiple logistic regression (enter method) analyses were performed, with 95% confidence intervals (CIs) given for odds ratios (ORs), indicating a possible effect if both interval bounds were either greater or less than 1. To check for the effect of other independent variables on the dependent variable, both unadjusted and adjusted ORs and their corresponding CIs were computed (only the adjusted ORs are reported because there were no substantial differences between the two). To estimate the likelihood of an individual being at risk of a bad outcome, the worst category of an independent variable was coded (1) and the bad outcome of the dependent variable was also coded (1), while (0) was maintained as the first and reference category in the regression analysis.
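As an illustration of such an enter-method model reporting adjusted ORs and 95% CIs, the sketch below uses statsmodels rather than SPSS; it assumes a DataFrame `df` holding the 0/1 codes and the hypothetical predictor names from the coding sketch above.

```python
# Sketch: forced-entry logistic regression of the high-sugar-score outcome,
# reporting adjusted ORs with 95% CIs. Assumes `df` holds 0/1-coded columns.
import numpy as np
import statsmodels.api as sm

predictors = ["urban", "male", "young", "low_education", "dental_pain"]
X = sm.add_constant(df[predictors])          # all predictors entered at once
model = sm.Logit(df["sugar_high"], X).fit()

or_table = np.exp(model.conf_int())          # CI bounds on the OR scale
or_table["OR"] = np.exp(model.params)        # point estimates as odds ratios
or_table.columns = ["CI 2.5%", "CI 97.5%", "OR"]
print(or_table.drop(index="const"))          # an effect is suggested when both
                                             # CI bounds lie on one side of 1
```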
Results
The distribution of participants by age and sex in urban and rural areas was determined by pre-stratification into allowable quotas, with minor variations in each cluster and stratum. The distribution of participants' socio-demographic variables and experience of dental pain is shown in Table 1. Frequency distributions of participants according to alcohol consumption showed that 61.6% of all participants had not had an alcoholic drink in the 30 days prior to the interview. Among those who reported having taken alcohol in the past 30 days, the rates of consuming 5 or more, 4, 3, 2, and 1 drinks per day were 10.2%, 4%, 6.5%, 8.9% and 8.7%, respectively. With the exception of cigarettes, which 16.7% of the respondents reported ever having consumed, other forms of tobacco, namely cigars, pipe, snuff and chewing tobacco, were consumed by small proportions of individuals. The frequency distribution of participants' consumption of sugared snacks and fruit is displayed in Table 2.
Cross-tabulations of oral health related behaviors by residence and sex are displayed in Tables 3 and 4. Participants reported consuming fruits at least once a week (88% of urban and 63.7% of rural residents, respectively), with no significant differences between sexes. Biscuits/cakes were reported to be consumed by 39.9% of urban and 30.1% of rural residents, with more males than females in rural areas consuming these products (p < 0.05). Seventy-four percent of urban and 61.6% of rural residents consumed doughnuts, with more males than females in rural areas taking them (p < 0.05). More urban (74.6%) than rural (37%) residents consumed soft drinks, with rural males more likely than rural females to consume these products. Honey and jam were consumed by small proportions of urban and rural residents. Alcohol consumption in the past 30 days was reported by 33.6% of urban and 41.9% of rural residents, with urban males more likely to drink than females (p < 0.001). Fourteen percent of urban and 18.4% of rural residents had ever smoked cigarettes; in both urban and rural areas, significantly (p < 0.001) more males than females had ever smoked.
Mswaki was reported to be used by 15.7% of urban and 49.6% of rural residents, with rural females more likely than males to brush using mswaki (p < 0.001). Nearly 95% of urban and 66.4% of rural residents reported using factory-made toothbrushes, with no significant differences between sexes in either setting. Toothpaste was reported to be used by 94.1% of urban and 66.4% of rural residents, with no sex difference across localities of residence. The prevalence of charcoal use was 4.6% and 13.2% among urban and rural residents, respectively, with rural females more likely than males to brush their teeth using charcoal. About half (49.6%) of urban and 36.8% of rural residents had ever visited a dental clinic, with urban females more likely than males to visit the clinic (p < 0.05). Behaviors that were found to be performed by a sizable proportion of participants were subjected to regression analysis as outcome variables.
Multiple regression analyses controlling for age, sex, residence, education level and experience of dental pain or discomfort showed significant effects, in terms of adjusted odds ratios (ORs) and 95% CIs, of most independent variables on the oral health related behaviors shown in Tables 5 and 6. The corresponding Nagelkerke R² for each model is also displayed in Tables 5 and 6.
Dental pain significantly reduced the likelihood of consuming biscuits, doughnuts, soft drinks and chewing gums; likewise, people with pain had a reduced likelihood of not attending dental treatment and were more likely to drink alcohol [Tables 5 and 6].
Urban residents were significantly more likely to consume biscuits/cakes, doughnuts, and soft drinks, with the highest odds ratio observed for soft drinks (OR 4.9, CI 3.9-6.6; p < 0.001). Urban residents were also less likely to take alcohol or smoke cigarettes. They were less likely not to consume fruits, not to attend dental treatment and not to use factory-made toothbrushes and toothpaste than their rural counterparts. The young were more likely to use biscuits/cakes, doughnuts, soft drinks, chewing gums and chocolate, and to visit a dental clinic, but they were less likely to smoke cigarettes and to take alcohol, with a lower likelihood of not using a factory-made toothbrush or not eating fruits than older people [Tables 5 and 6].
Males were more likely to take soft drinks (OR 2.2, CI 1.8-2.7; p < 0.001), smoke cigarettes (OR 9.2, CI 6.3-12.9; p < 0.001), drink alcohol and use chewing gums, with a lower likelihood of being among those who do not use a factory-made toothbrush. The minimally educated individuals were less likely to consume soft drinks (OR 0.6, CI 0.4-0.8; p < 0.01), but were more likely not to eat fruits, not to visit a dental clinic, and less likely to be non-smokers. Moreover, they were more likely to be among those who do not use a factory-made toothbrush [Tables 5 and 6].
The summative sugar score also varied with the background variables: having dental pain and being minimally educated reduced the likelihood of having a high sugar score, whereas being male, urban, and young increased the likelihood of having a high sugar score [Table 5].
Discussion
This paper documents the associations between oral health related behaviors and socio-demographic factors among Tanzanian adults. The methodological strengths of the present study include the large sample size drawn from all six geographical zones of mainland Tanzania. Using the WHO simplified oral health questionnaire for adults [24] makes the findings of this study comparable with those of other studies. The diverse range of oral health related behaviors studied offers substantial national baseline information for planning and scientific referencing. This study used the oral health surveys pathfinder methodology [23], which is scientifically less rigorous than standard probability sampling methods. However, it is widely advocated by the World Health Organization, especially when the information collected is for planning oral health services.
Lack of information about the non-respondents precludes any conclusion about possible selection bias, although the response rate was high enough to assume that the target population is reflected with a reasonable degree of accuracy. The clusters were purposively selected to capture the diversity of characteristics; however, individuals were allowed to participate conveniently until the quota size was attained for each cluster. This could have introduced volunteer bias. Nevertheless, the pre-stratification by age and sex in specified quotas might have redressed the bias to some extent. The present study relied on self-reported information; a possibility of over- and under-reporting due to respondents seeking social desirability could have led to bias. However, temporal stability was checked with satisfactory reliability. The data might provide a reflection of oral health related behaviors among adult Tanzanians. However, as the respondents were drawn by non-probability sampling, the findings must be interpreted with caution when making direct generalizations to the whole country. Furthermore, at the point of analysis, some ordinal and continuous variables were dichotomized to allow for logistic regression. This might, to some extent, have reduced the statistical power and the fit of the data. The cut-off points might have misclassified individuals into categories to which they did not belong. Therefore, the costs of dichotomization should not be ignored when interpreting these findings. Moreover, most of the ORs were modest, indicating that the differences between the categories were not very prominent. However, the displayed differences could be useful in real-life planning situations.
The findings of this study indicated that urban residents showed a high likelihood of snacking on sugary foods and drinks, eating fruits, attending dental clinics and using factory-made toothbrushes, but were less likely to take alcohol or smoke cigarettes than their rural counterparts. The higher tendency of urban than rural residents to consume sugar was also reported in studies among Tanzanian university students [11], South Africans [25] and Ghanaian adolescents [10]. As correctly put by Holmboe-Ottesen [26], urbanization and globalization increase the consumption of sweet soda pops, biscuits and other snacks produced by multinational companies. In addition, urban residents in developing countries are easily targeted by food adverts through the media and hence become alternative consumers of confectionery that would otherwise not get easy access to western markets [14]. Healthy public policies are necessary for monitoring the influx of sugary foods and drinks in Tanzania to protect consumers from irrational use of these commodities. Besides, reduction of sugar consumption fits into the common risk factor approach to disease prevention [27]. In this regard, reduction of sugar consumption will contribute to the prevention not only of dental caries but also of other chronic lifestyle diseases. From another perspective, fear of high death tolls from these chronic conditions might reinforce the restriction of sugar intake and in so doing contribute to caries prevention.
Health promotion emphasizes the importance of supportive environments in enabling people to choose healthier lifestyles. Therefore, health educationists have to consider the intricate mediating role of the residential environment in shaping snacking behaviors. This study found that only a small proportion of individuals consumed sugary snacks and drinks very frequently. However, with trade liberalization, this distribution might scale up to higher values, especially in urban areas where the environment is conducive to promoting the consumption of varieties of sugary snacks and drinks. Therefore, deliberate efforts should be made to maintain these low levels of sugar consumption.
While it is recommended to eat fruits about five times a day [28], this study found 88% of urban and about 64% of rural residents consuming fruits at least once a week. Although fruits are known to be cultivated in rural areas, it was noted with concern that more urban than rural residents eat fruits. As also reported elsewhere [29], knowledge of the recommended frequency and perceived benefits of fruit intake might not be sufficient among the study participants, particularly rural residents. It is also important to note that unreliable transportation in rural areas leads to difficulties in moving goods from place to place. As a result, people depend largely on locally grown fruits, whose availability is seasonal. This disadvantage might have accounted for the low rates of fruit consumption among rural respondents.
Proportionately fewer rural than urban residents used factory-made toothbrushes and toothpaste. Conversely, a higher proportion of rural residents used miswaki and charcoal than their urban counterparts. Rural residents in this study were also disadvantaged as regards utilization of dental services. As rural communities in many respects represent less affluent societies, affordability and accessibility of dental services could be a challenge to poor rural residents. Consequently, the immediate options tend to be self-medication or the hope that dental pain will disappear on its own [30]. Despite a number of measures deliberated by the Ministry of Health in its policy guidelines for oral health [22], studies conducted more than a decade ago on dental attendance rates in Tanzania portray a similar rural-urban disparity [17]. Left with constrained access to modern health facilities, rural residents also seek alternative medicine through traditional healers [31]. This rural-urban socio-economic gradient reflects, among other things, a social inequality which puts rural residents at a disadvantage, whereby their opportunities are more or less confined to what is locally available.
While other forms of tobacco were reported to be consumed by small fractions of the study sample, the prevalence of ever having smoked cigarettes was 16.7%, which is almost similar to the rate reported in another study among Tanzanian university students [11]. This study also found that males were more likely to smoke than females. However, with the ever-enduring multinational tobacco adverts, it will not be surprising in some years to come to find more smokers even among women. The clustering of smoking and alcoholism reported by Myers et al. [32] was also found to be associated with rural residence in this study. Unfortunately, this adds to the risks of an already disadvantaged society. Minimal recreation facilities in rural areas might have been compensated for by smoking and alcoholism. Contrary to this line of thinking, Pootinger [33] reported heavy drinking among sports club members. Exploring the alcohol and tobacco information further, this study also showed that dental pain increased the likelihood of drinking alcohol. Similar findings were also reported by Lahti [34]. Whether alcohol was used as a means to curb the dental pain, or rather the pain coexisted with other forms of misery which prompted the participants to drink, is yet to be explored. However, it has been reported elsewhere that dental health detrimental behaviors correlate with the use of marijuana, smoking frequency, and engagement in antisocial behavior [35]. This clustering calls for a careful exploration of the determinants of health behaviors. This information will help in structuring health promotion activities that unearth what is rooted under the clusters of unhealthy behaviors. Although a higher proportion of educated people resided in urban areas and the minimally educated were more likely to smoke cigarettes, after controlling for the potential confounders this study also found that urban residents were less likely to be smokers, implying that being a rural dweller in itself added to the likelihood of smoking cigarettes. The whole scenario portrays a limited leeway for rural residents to live healthier lives. Viewing life in terms of its quality, and in line with those who believe in equity and equality in health, rural residents in Tanzania deserve a fresh look if they are to make a significant contribution to the achievement of the National Strategy for Growth and Reduction of Poverty.
The rural-urban disparity displayed by the findings of this study lays a foundation for setting priorities in planning oral health promotion activities. Both the educational and policy aspects of health promotion have to be sensitive to these disparities in order to enable disadvantaged rural communities to live healthier lives.
"year": 2009,
"sha1": "cbb1f28ff1fe3067fad6b4e4e23afc0eff6e466c",
"oa_license": "CCBY",
"oa_url": "https://bmcoralhealth.biomedcentral.com/track/pdf/10.1186/1472-6831-9-22",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "482c0bb439b82fa5044a0ce7dd26dc4e4aea49b6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Management of men with ultra-short penile urethral stricture using augmented anastomotic penile skin flap urethroplasty; a retrospective analysis
The management of short anterior urethral strictures is challenging. Our study aims to evaluate the outcome of augmented anastomotic urethroplasty (AAU) for the management of men with ultra-short penile urethral stricture, and to compare it with the dorsal onlay buccal mucosa graft. The databases of two tertiary referral centres were retrospectively reviewed to retrieve the data of men with ultra-short penile urethral stricture who underwent urethroplasty from 2013 to 2020. Patients who underwent AAU with a ventral onlay pedicled skin flap were considered the study group, while patients treated with dorsal onlay graft augmentation were included as controls. Surgical outcomes included urethral patency, improvement in the maximum flow rate (Qmax), change in sexual satisfaction, and any reported complications. Thirty-four patients (and 30 controls) with a median age of 26-27 years were included in the study. The maximum flow rate improved significantly in both groups compared to the preoperative value (p < 0.001). The success rate was 88% in the study group compared to 76.7% in the control group. There was no statistically significant difference in the frequency of postoperative penile curvature or ventral sacculation between the two groups (p = 0.788 and 0.913). The operative time was statistically significantly longer in the control group (p = 0.044), while the frequency of postoperative void dripping was much higher in the study group (p = 0.007). The success rate and complications of AAU for men with ultra-short penile urethral stricture were comparable to those of the dorsal buccal graft.
Background
Anterior urethral stricture is a common urologic disorder in men. It is associated with a significant deterioration in quality of life and an elevation in health-related expenditures [1]. It is reported that untreated urethral strictures may lead to acute retention and prostatitis in more than 50% of patients [2]. After the bulbar urethra, the penile urethra is the second most commonly affected site of strictures (30%) [3]. The most commonly reported cause of penile urethral strictures is iatrogenic [4]. However, strictures may also result from traumatic avulsion of the penile urethra in association with a fractured penis, in which sexual trauma is one of the main etiological factors [5].
The current literature describes several surgical procedures to correct penile urethral stricture. However, the choice of optimal treatment is primarily driven by stricture etiology, length, and lumen conditions [6,7]. The repair of a penile stricture can be done as a one-stage or two-stage urethroplasty. Two-stage urethroplasty is required for more complex penile urethral strictures caused by lichen sclerosus (LS) or failed hypospadias repair. On the other hand, the dorsal inlay graft (Asopa technique) and dorsal onlay graft (Barbagli technique) are the most common procedures used for one-stage repair; however, these techniques are not suitable for stricture diameters of less than 6 mm. Besides, circumferential mobilization of the urethra in the Barbagli procedure may compromise the urethral blood supply [8,9].
Penile urethral strictures are usually long. Thus, data are scarce regarding short penile strictures of less than 1 cm in length. Despite the high success rate of excision and primary anastomosis in the management of short bulbar urethral strictures, its use in the penile urethra is limited due to the risk of penile chordee [10,11]. Augmented anastomotic urethroplasty (AAU) is an alternative technique for the management of long bulbar urethral strictures with an extended area of narrowing and fibrosis [12]. Moreover, the technique combines the advantages of both anastomosis and graft substitution [13]. To the best of our knowledge, no previous studies have used this technique for the management of penile urethral stricture. The aim of our study is to investigate the outcome of AAU for the management of ultra-short penile urethral stricture (< 1 cm) and to compare this technique with the dorsal onlay buccal mucosa graft.
Methods
This study is a retrospective analysis of the records of patients with penile urethral stricture who underwent urethroplasty between January 2013 and January 2020 at two tertiary centers. The study was approved by the local ethical committees of the two tertiary centers. The current study included men with penile urethral stricture (≤ 1 cm) [ultra-short penile urethral stricture] who underwent either AAU using a ventral onlay pedicled skin [Orandi] flap procedure (study group) or dorsal buccal mucosa onlay graft augmentation (control group).
Exclusion criteria included patients with a previous history of visual internal urethrotomy, previous urethroplasty, or genital skin diseases (lichen sclerosus), as well as patients with missing or incomplete data. The study was approved by the ethics committee of Sohag University, Faculty of Medicine, with approval number REC/09/012020.
Preoperatively, patients were evaluated by history and local examination. Urethral patency was assessed using retrograde and voiding urethrography (Fig. 1), followed by sonourethrography to identify and assess the degree of any periurethral fibrosis. Sexual function was evaluated using the Sexual Health Inventory for Men (SHIM) questionnaire [14]. The maximum flow rate (Q-max) was assessed by uroflowmetry. The retrieved patients' data included stricture length, site, and etiology.
Augmented anastomotic urethroplasty technique
After the induction of spinal anesthesia, the patients were placed in the standard supine position. The procedure started by applying a stay suture through the glans penis with a 4.0 vicryl suture to stretch the penis and urethra. The distal end of the penile stricture was identified using a 20F Nelaton catheter passed through the meatus, followed by injection of methylene blue. At the tip of the Nelaton catheter, a ventral longitudinal skin incision was performed down to the distal end of the stricture. Next, ventral stricturotomy was performed over a guidewire or a 4 French ureteric stent until the healthy urethra was reached proximally. The strictured segment was evaluated for residual lumen and length. Limited circumferential urethral mobilization was performed, allowing for tension-free anastomosis of the dorsal urethral ends. The ventral defect was augmented using a longitudinal non-hairy ventral skin flap that was designed and mobilized as an Orandi flap, corresponding to the length of the ventral urethral defect [15]. The flap was sutured in a ventral onlay fashion using 5.0 vicryl in two layers, using the flap pedicle and adjacent tissue as a second layer. The repair was performed over an 18F silicone catheter. The wound was closed in layers without a drain, followed by a compressive dressing of the penis [Fig. 2].
Dorsal onlay graft augmentation technique
A subcoronal circumcision incision was made, and complete degloving of the penile skin was done. The distal end of the penile stricture was identified using a 20F Nelaton catheter passed through the meatus, followed by injection of methylene blue. The urethra was circumferentially dissected from the corpora cavernosa and rotated 180°; the dorsal urethral surface was exposed and fully opened. The stricture was then opened along its entire length by extending the urethrotomy into the healthy urethra 1 cm distally and proximally. Once the entire stricture had been incised, the length and width of the remaining urethral plate were measured. The oral mucosal graft was harvested from the inner cheek by another team and fixed over the tunica of the corporal bodies using 5/0 vicryl sutures. A 16F silicone catheter was inserted. Interrupted 5-0 polyglactin sutures were used to stabilize the urethral margins onto the corpora cavernosa over the graft at each side. At the end of the procedure, the graft was completely covered by the urethra; then, the classic closure of the skin and underlying dartos was performed over an 18F silicone catheter [16].
Postoperative assessment
All patients were discharged home after 48 h. The urethral catheters were left in place for 3 weeks and were removed after a peri-catheter retrograde urethrogram. In the case of extravasation, the catheter was left for one more week. The suprapubic catheters were left in place for a few days after removal of the urethral catheters to ensure satisfactory voiding before removal. Voiding cystourethrography was performed at 6 months postoperatively (Fig. 3), while flowmetry was done at 3 and 6 months. Patients were asked to complete the SHIM questionnaire before and 12 months after the surgery. Penile curvature was examined using a goniometer in an artificial erection state and was classified according to the Kelami classification [17]. Patients were then interviewed by telephone annually to rule out any change in their LUTS.
The primary outcome was the overall success rate, defined according to urethral patency and the improvement in the maximum flow rate (Q-max). Failure of the procedure was defined as any postoperative urethral intervention or instrumentation. The secondary outcomes were the occurrence of complications, such as a decline in SHIM score, and any recorded penile curvature.
Statistical analysis
The statistical analysis was performed using SPSS software (Statistical Package for the Social Sciences, version 24, SPSS Inc., Chicago, IL, USA). Frequency tables with percentages were used for categorical variables, and descriptive statistics (mean and standard deviation) were used for numerical variables. Paired t-tests were conducted to detect the significance of the change in flow rate. A p value of less than 0.05 was considered statistically significant.
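A minimal sketch of such a paired t-test on Q-max, using SciPy instead of SPSS; the flow-rate values below are hypothetical, not study data.

```python
# Sketch: paired t-test on maximum flow rate (Q-max) before vs. after repair.
from scipy import stats

qmax_pre = [6.1, 5.4, 7.0, 4.8, 6.5]         # mL/s before surgery (hypothetical)
qmax_post = [18.2, 16.9, 21.0, 15.4, 19.7]   # mL/s at follow-up (hypothetical)

t_stat, p_value = stats.ttest_rel(qmax_post, qmax_pre)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # significant if p < 0.05
```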
Results
In the present study, the records of 78 patients with ultra-short penile urethral stricture were reviewed. A total of 14 patients were excluded due to missing data, previous urethroplasty, or a history of previous visual internal urethrotomy. Thus, 64 patients' records were included in the final analysis; 34 of them had AAU (study group), and 30 patients underwent a dorsal onlay buccal mucosa graft (control group). The median age was 26 years (range 18-34 years) in the study group and 27 years (range 18-36 years) in the control group. There were no statistically significant differences between the two groups in terms of preoperative stricture length, maximum flow rate (Q-max), or post-void residual urine. The most common cause of stricture was iatrogenic (59.4%), followed by idiopathic (29.7%), in both groups. The most common site of stricture was the mid-penile shaft (Table 1). In the control group, the five distal penile strictures were at the fossa navicularis, while the proximal penile strictures were bulbo-penile. A percutaneous suprapubic catheter had been inserted in 22 (64.7%) patients preoperatively owing to upper tract dilatation and for the eradication of urinary tract infections.
Postoperatively, Q-max improved significantly in both groups compared to the preoperative values (p < 0.001). There was no statistically significant difference in the frequency of postoperative penile curvature or ventral sacculation between the two groups (p = 0.788 and 0.913, respectively). Despite the higher frequency of stricture recurrence in the control group, the difference was not statistically significant (p = 0.322). The operative time was statistically significantly longer in the control group (p = 0.044), while the frequency of postoperative void dripping was much higher in the study group (p = 0.007) (Table 2).
During the median follow-up period of 31 (range 12-50) months, the repair was successful in 30 (88%) patients of the study group, whereas four (12%) patients required interventions: two patients developed recurrent stricture during the first year, one patient in the second year, and the fourth patient developed a urethrocutaneous fistula in the 3rd month. All failed patients underwent surgical revision within 6 months after their primary urethral repair. In the control group, 23 (76.7%) patients were successful and 7 (23.3%) suffered recurrent stricture; staged urethroplasty was performed for them after 6 months.
Fig. 3 Postoperative retrograde urethrogram
According to the Kaplan-Meier function curves for the studied patients, the overall complication-free and failure-free rates were 88% and 70%, respectively, after 20 months of follow-up.
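For readers reproducing such curves, a minimal sketch with the lifelines package is shown below; the per-patient follow-up durations and failure indicators are hypothetical, not the study data.

```python
# Sketch: Kaplan-Meier estimate of failure-free survival after urethroplasty.
from lifelines import KaplanMeierFitter

months_followed = [3, 12, 20, 24, 31, 40, 50]   # hypothetical follow-up (months)
failed = [1, 1, 0, 1, 0, 0, 0]                  # 1 = re-intervention observed

kmf = KaplanMeierFitter()
kmf.fit(durations=months_followed, event_observed=failed,
        label="failure-free survival")
print(kmf.survival_function_)   # step function of the failure-free rate over time
```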
Discussion
In the current retrospective study, AAU and the dorsal onlay buccal mucosa graft were comparable regarding the postoperative outcomes. The overall incidence of recurrent stricture, penile curvature, and ventral sacculation was low, with statistically insignificant differences between the two groups. However, the operative time was statistically significantly longer with the dorsal graft compared to AAU, and the frequency of post-void dripping was statistically and clinically higher in the AAU group.
The choice of surgical repair for penile urethral stricture depends on stricture etiology, the extent of spongiofibrosis, and the surgeon's experience [18]. The repair of penile urethral stricture is more challenging because of the small thickness of the corpus spongiosum and the consequent lack of support for ventrally placed grafts. Furthermore, excessive mobilization of the penile urethra could compromise the critical blood supply provided by the circumferential arteries.
For penile urethral strictures, the surgical options for urethral reconstruction include buccal mucosa urethroplasty using dorsal onlay or dorsal inlay techniques, fasciocutaneous local skin flap urethroplasty, or a staged approach in longer and more complex cases. Excision and primary anastomosis can lead to tension and chordee, whereas the risk of sacculation or pseudo-diverticulum formation increases in cases of ventral onlay with pedicled flaps [6,10].
The causes of penile urethral stricture in our study were iatrogenic trauma in 59%, idiopathic in 30%, and traumatic in 11%, while in developed countries, LS and failed hypospadias repair are reported to be the most common causes [19]. This could explain the short stricture length in our patients. Augmented anastomotic urethroplasty is usually employed for longer, unequal bulbar urethral strictures that have a segment with dense spongiofibrosis and a residual lumen too narrow for augmentation urethroplasty. It is a combination repair, comprising both excision and substitution urethroplasty [13]. In our study, the success rate of the AAU group was 88% in terms of anatomical urethral patency. Three patients (9%) developed mild penile curvature (< 15 degrees), while two patients (6%) developed ventral sacculation.
In 2001, Guralnick and Webster reported a 93% success rate for AAU in their retrospective analysis at a mean 28-month follow-up. However, they did not state a clear definition of successful repair, as one of their patients underwent dilatation and one patient underwent visual urethrotomy. Twenty-one percent of their patients noticed some degree of penile shortening; however, the authors reported that these were "neither measured nor problematic" [12]. In 2007, a retrospective analysis was done by Abouassaly and Angermeier, who reported their results with AAU in 69 patients, showing a success rate of 90% over a median follow-up period of 34 months. On the other hand, their main complications were UTI and stricture recurrence, with no report of penile curvature. They used oral mucosa in their surgical repair [13]. In 2008, El-Kassaby and coworkers reported their results with AAU for long bulbar strictures; the overall success rate was 93.7%. They reported an incidence of 40.4% of postoperative dribbling of urine, which is similar to the current results (41%). It should be mentioned that they observed temporary perioral numbness in most patients [6]. More recently, Hoy and colleagues reported their outcomes with AAU in long bulbar urethral strictures, even those longer than 5 cm. The success rate (no stricture recurrence) was 96.9%, and the main complication was post-void dribbling (41.7%) [20,21]. The success rate in our study was slightly lower than that reported in the literature; however, we should emphasize that all previous studies used AAU for bulbar urethral stricture, which allows wider limits of mobilization and has a thick, supportive corpus spongiosum. Besides, their sample sizes were larger.
On the other hand, the success rate of dorsal onlay graft augmentation in our study was 76.7%. Reported success rates vary widely in the literature. In 1998, the success rate of dorsal onlay graft urethroplasty using penile skin as the substitute material, with a mean follow-up of 21.5 months, was 92% [22]. In 2001, the success rate, with a mean follow-up of 43 months, was 85% [23]. In 2008, the success rate, with a mean follow-up of 111 months, had decreased further to 65.8% [24]. Finally, in 2014, the success rate, with a mean follow-up of 190 months, was 63.6% [11]. The use of buccal mucosa is superior to penile skin in dorsal onlay graft bulbar urethroplasty. Buccal mucosa, used in 6 patients, showed a 100% success rate at a 13.5-month mean follow-up [22]. In 2005, the success rate in 23 cases, with a 42-month mean follow-up, was 85% [25]. Notably, the success rate of the dorsal onlay graft in this study was lower than reported in the literature. This could be explained by the dense fibrosis and the markedly reduced lumen in this group (< 6 mm).
Our results showed that AAU could be a reliable procedure for managing ultra-short penile urethral strictures with an obliterated lumen (residual urethral plate width less than 6 mm) that appear unsuitable for one-stage augmentation urethroplasty. The pedicled skin flap was used as a ventral onlay. It is a long, hairless, and flexible flap that is suitable for restoring urethral patency without cosmetic disfigurement. In addition, the operative time of AAU was shorter than that of the dorsal onlay buccal mucosa graft. However, the frequency of post-void dripping is high due to the risk of ballooning of the ventral flap caused by the lack of ventral support. This finding is in line with other studies [26].
The limitations of the current study include the small sample size, as this type of stricture is infrequent, and its retrospective nature. It is important to conduct a prospective or cohort study including a larger number of patients, with clear inclusion and exclusion criteria, to establish clear indications and outcomes for the described technique.
Conclusion
Given the low number of patients and the retrospective type of study, the authors cannot affirm that augmented anastomotic urethroplasty using a ventral onlay pedicled skin flap is an effective treatment. Prospective studies with larger sample sizes are needed.
Abbreviations
AAU: Augmented anastomotic urethroplasty; SHIM: Sexual Health Inventory for Men; Q-max: maximum flow rate.
Authors' contributions
AE designed the study, analyzed the data, and revised the manuscript. AR designed the study, analyzed the data, and revised the manuscript. MSK designed the study, analyzed the data, and wrote the manuscript. All authors have read and approved the manuscript.
Availability of data and materials
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Ethics approval and consent to participate
The study was approved by the ethics committee of Sohag University, Faculty of Medicine, with approval number REC/09/012020. As this was a retrospective analysis of a database, no written informed consent was needed from patients.
"year": 2021,
"sha1": "8d21f1f4b7d447990fb4e3cdbea0d206811c74d5",
"oa_license": "CCBY",
"oa_url": "https://afju.springeropen.com/track/pdf/10.1186/s12301-021-00130-4",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c77468a48af5ac27d79b8e66247713f3f04f75ce",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
At the Crossroads of European Landscape Changes: Major Processes of Landscape Change in Czechia since the Middle of the 19th Century and Their Driving Forces
Changes in the cultural landscape provide essential evidence about the manner and intensity of the interactions between humans and nature. Czechia has a specific location in Central Europe: it is positioned at the crossroads of European landscape changes. These changes can be documented based on a unique database that captures the development of land use since the middle of the 19th century. In this study, we aimed to address the major processes of landscape change that occurred during four periods over the past 165 years, at the cadastral level, on the territory of present-day Czechia. Further, we identify and discuss the proximate and underlying driving forces of the landscape changes. We used land use data from the years 1845, 1896, 1948, 1990, and 2010 that correspond to key events in Czech history. The major processes and intensity of landscape change were evaluated based on calculations of increases and decreases in land use classes between the first and last year of each examined period. The period 1845–1896 was the only period in which arable land increased, and the most recent period, 1990–2010, was the only period during which a grassing-over process was recorded. Afforestation was recorded in all periods. The communist period was characterized by uniform changes: urbanization, afforestation, a decrease in arable land, and landscape devastation. The post-communist period was, in some respects, beneficial to the landscape (e.g., grassing over and afforestation, particularly in mountain areas), but it also led to negative processes, such as strong urbanization and land abandonment. Such changes lead to landscape polarization. The landscape changes in Czechia during the period 1845–2010 reflect many important historical events in Europe. In our analysis, we demonstrate the essential impact of underlying drivers and also identify driving forces specific to the development of the Czech territory.
Introduction
Long-term landscape changes at national, regional, or local scales reflect different phases of natural, political, economic, social, technological, and cultural development in the broader international context [1][2][3][4][5][6][7][8][9][10]. Czechia, a country located in Central Europe, represents a unique model area in which the impacts of societal driving forces on land use and landscape change can be studied over a long period of time.
A number of scientific studies have dealt with changing patterns of landscape utilization and its driving forces. Some of these examine long periods of time, usually at the local or regional level, but do not study changes at the national level [10][11][12][13][14][15][16], or they deal only with selected land use transformations (for example, of forests and agricultural land [8]). On the contrary, studies concerning all kinds of land use alteration usually analyze only short periods of time, because data availability does not allow longer analyses. Remote sensing data, moreover, typically cover only the most recent decades.
A number of scientific studies have dealt with changing patterns of landscape utilization and its driving forces. Some of these examine long periods of time, usually at local or regional levels, but do not study changes at the national level [10][11][12][13][14][15][16] or deal only with selected land use transformations (for example, of forests and agricultural land [8]). On the contrary, studies concerning all kinds of land use alteration usually analyze only short periods of time because data availability does not allow greater analysis. Remote were used for the evaluation of long-term landscape changes and processes. This unique data source, created at the Faculty of Science, Charles University Prague, contains land registry records (provided by archives and by the Czech Office for Surveying, Mapping and Cadastre) at the cadastral level. The data were originally based on the parcel level (cadastral maps), but they were provided and used in the analyses for the whole cadastre as one number, i.e., the total area for each land use category. Our database stores land use data for the years 1845, 1896, 1948, 1990, 2000, and 2010 for the whole territory of Czechia. While the years of analysis are based on data availability, they also represent, in most cases, historical milestones in Czech and European history. Data for 1845 originate from the "Franciscan Cadastre", also known as the Stable Cadastre. These data document land use and the landscape characteristics in the middle of the 19th century. This unique data source is available only for the countries located in the territory of the former Austro-Hungarian Empire. Data for 1896 and 1948 were taken from the datasets of later cadastral mappings (stored in archives), and data for 1990, 2000, and 2010 were provided in a database of the Czech Office for Surveying, Mapping, and Cadastre (for a detailed explanation of the data origins, see [43]). The data were provided for administrative cadastral units. To ensure a consistent area of the analyzed units during the whole period of interest (from 1845 until 2010), cadastral units were amalgamated into so-called stable territorial units (STUs) using a geographical information system (GIS). The year 1990 was chosen as a reference, and the maximal size fluctuation among different years was set at 2%. At present, approximately 13,000 cadastral units exist in the national territory and these were amalgamated into 8,832 STUs for research purposes. In some cases, one STU consists of two or more amalgamated cadastral units, usually in areas where changes of administrative boundaries have occurred. Almost 80% of STUs, however, consist of just one cadastral unit. The STU is the minimal mapping unit for analysis. STUs range in size from 24 ha to 8,000 ha (with the exception of several military areas with sizes ranging from 20,000 to 45,000 ha). The average STU area is 800 ha [43].
The LUCC Czechia Database includes data on eight land use classes: arable land, permanent cultures, meadows, pastures, forest areas, built-up areas, water areas, and remaining areas (the total area of each class in each STU is stored for each year). Agricultural land is the combination of arable land, permanent cultures, meadows, and pastures; permanent grassland comprises meadows and pastures; other areas are the combination of built-up areas, water areas, and remaining areas. The numerical data (the areas of the land use classes) for all time horizons come from the cadastral records (databases) and were calculated by the cadastral authorities based on cadastral maps. We used the numerical data provided by these cadastral authorities (the Central Land Survey and Cadastre Archive files and the Czech Office for Surveying, Mapping, and Cadastre), not the original cadastral maps. The sources of the data and the types and scales of the maps used for the calculation of the land use class areas are summarized in Table 1. At present, land registry records are updated by land owners, who should report all changes in land use to the Czech Office for Surveying, Mapping and Cadastre. In reality, however, many owners do not update these records.

The analysis of landscape changes was undertaken using the LUCC Czechia Database described above. The database is connected to the polygon GIS layer of STUs, which ensure the time consistency of the database and the comparability of the records from individual years. Based on the land use data in the database, three complementary parameters were calculated for the STUs and visualized in maps (cartograms) in order to undertake the analysis and evaluate the changes: (1) the major processes of landscape change, (2) the class of highest decrease, and (3) the intensity of overall change based on the calculation of the index of change. These processes and the intensity of landscape change are further discussed in the context of the various types of potential driving forces in the particular periods (see Sections 3.1-3.5).
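To make the data model concrete, the class list and the aggregations just described can be expressed in a few lines of Python. This is a minimal illustrative sketch: the key names and the sample record are hypothetical and do not reproduce the actual LUCC Czechia Database schema.

```python
# The eight land use classes stored per STU and per year (areas, e.g., in ha).
CLASSES = ["arable", "permanent_cultures", "meadows", "pastures",
           "forest", "built_up", "water", "remaining"]

# Aggregations defined in the text.
AGGREGATES = {
    "agricultural_land": ["arable", "permanent_cultures", "meadows", "pastures"],
    "permanent_grassland": ["meadows", "pastures"],
    "other_areas": ["built_up", "water", "remaining"],
}

def with_aggregates(record: dict) -> dict:
    """Return a copy of a per-STU record extended by the aggregate classes."""
    out = dict(record)
    for name, parts in AGGREGATES.items():
        out[name] = sum(record[p] for p in parts)
    return out

# A fictitious STU record for one year:
stu_1896 = {"arable": 420.0, "permanent_cultures": 15.0, "meadows": 80.0,
            "pastures": 55.0, "forest": 190.0, "built_up": 12.0,
            "water": 8.0, "remaining": 20.0}
print(with_aggregates(stu_1896)["agricultural_land"])  # 570.0
```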
(1) Major processes of landscape change

This parameter evaluates the major processes of landscape change [44] in particular time periods (in our case, 1845-1896, 1896-1948, 1948-1990, and 1990-2010). The method works with four types of major processes based on the territorial increase in four (partly aggregated) land use classes: (1) intensification: an increase in arable land (crop cultivation) and permanent cultures; (2) grassing over: an increase in permanent grasslands; (3) afforestation: an increase in forest areas; and (4) urbanization: an increase in built-up and remaining areas. First, the change during the examined period was calculated for each category (as the difference between its area in the last and first year) in order to determine whether the category increased or decreased. Next, only increases were taken into consideration. STUs were sorted into the abovementioned categories of major processes of landscape change according to the land use class that showed the highest increase.
Each of the abovementioned types can be further sorted into subtypes according to the grade of the increase. Three grades of changes were distinguished: high change (the "prevailing" change accounts for more than 75% of all changes combined), moderate change (50%-75%), and low change (less than 50%). STUs in which land use changes were recorded in less than 1% of the total territory were not examined [43].
(2) Class of highest decrease

The class that showed the highest decrease in the particular period was determined for each STU and visualized in the maps as a complement to the abovementioned typology of major processes of landscape change, which is based on increases in the land use classes.
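A compact sketch of parameters (1) and (2) in Python follows, building on the hypothetical record format above. One detail is our assumption: the text's "all changes combined" is read as the sum of all class increases, and the 1% exclusion rule is approximated by the sum of the group increases.

```python
def classify_stu(areas_start: dict, areas_end: dict):
    """Parameter (1): sort one STU into a major process and a grade.

    Only class increases are considered; the process is named after the
    (partly aggregated) class group with the largest gain, and the grade
    depends on that gain's share of all increases combined.
    """
    groups = {
        "intensification": ("arable", "permanent_cultures"),
        "grassing_over": ("meadows", "pastures"),
        "afforestation": ("forest",),
        "urbanization": ("built_up", "remaining"),
    }
    total_area = sum(areas_start.values())
    gains = {}
    for process, classes in groups.items():
        diff = sum(areas_end[c] - areas_start[c] for c in classes)
        if diff > 0:
            gains[process] = diff
    # STUs where changes affect less than 1% of the territory are excluded.
    if not gains or sum(gains.values()) < 0.01 * total_area:
        return None
    process = max(gains, key=gains.get)
    share = gains[process] / sum(gains.values())
    grade = "high" if share > 0.75 else "moderate" if share >= 0.5 else "low"
    return process, grade

def class_of_highest_decrease(areas_start: dict, areas_end: dict):
    """Parameter (2): the land use class with the largest loss, if any."""
    losses = {c: areas_start[c] - areas_end[c] for c in areas_start}
    worst = max(losses, key=losses.get)
    return worst if losses[worst] > 0 else None
```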
(3) Index of change

The index of change indicates the intensity of landscape changes over a certain period of time in the area of interest (in our case, the STUs) [43]:

$$IC_{A\text{-}B} = \frac{1}{2}\sum_{i=1}^{n} \left| P_{iB} - P_{iA} \right|,$$

where IC_{A-B} is the index of change between year A and year B; n is the number of land use classes; P_{iA} is the share of the relevant land use class in the total area of the STU at the beginning of the examined period; and P_{iB} is the same share at the end of the examined period.
The index ranges from 0 to 100 and indicates the share of the area in which any change occurred. It is based on the data reflecting land use at the beginning and at the end of the examined period (changes that may have occurred within the period are not taken into consideration). The higher the index of change, the more intensive the landscape changes in the area when comparing the first and last year of the period. A similar index for evaluating landscape change (the landscape change index) was used, for example, in [45] and [46].
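The computation is straightforward; below is a sketch with a worked example (same hypothetical record format as above).

```python
def index_of_change(areas_start: dict, areas_end: dict) -> float:
    """Index of change: IC = (1/2) * sum_i |P_iB - P_iA|.

    P_iA and P_iB are the percentage shares of class i in the total STU
    area at the beginning and end of the period; IC ranges from 0 to 100.
    """
    total_a, total_b = sum(areas_start.values()), sum(areas_end.values())
    return 0.5 * sum(
        abs(100.0 * areas_end[c] / total_b - 100.0 * areas_start[c] / total_a)
        for c in areas_start
    )

# Example: if 10% of an STU turns from arable land to forest, IC = 10.
before = {"arable": 50.0, "forest": 30.0, "remaining": 20.0}
after = {"arable": 40.0, "forest": 40.0, "remaining": 20.0}
print(index_of_change(before, after))  # 10.0
```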
Determination of Driving Forces
Because of the lack of empirical data, the proximate and underlying driving forces for particular periods were determined using methods of qualitative analysis. Specifically, expert assessment was carried out by scientists in our team who are specialists in social science and history, and have long-term experience with the evaluation of land use/land cover changes and their driving forces in Czechia from many case studies at different spatial levels. This approach has been commonly used by other authors [8,9] and can provide valuable information in cases in which appropriate empirical data for statistical analysis are not available. In their review, Plieninger et al. found that 55% of 125 analyzed publications dealing with the driving forces of landscape change in Europe relied on the personal (i.e., expert) interpretation of driving forces [9].
The main driving forces for each period are summarized in Figures A1-A4, following the scheme used in [9], which was adapted to the conditions of the Czech territory. In general, six types of proximate and five types of underlying driving forces were identified and specified for each time period. In addition, the new landscape functions arising from land use changes are noted for each period. It is also important to evaluate the changes in the context of natural conditions. For this purpose, we provide a physical map of Czechia (Figure 1), to which we refer throughout Section 3.
Results and Discussion
In the subsections below, we present the results of the evaluation of the major processes and the intensity of landscape change determined between the first and last years of each period. The results are discussed in the context of historical events and of the driving forces estimated by expert assessment. A detailed overview of the estimated proximate and underlying drivers, a summary of the main land use changes, and lists of new functions for all periods (1845-1896, 1896-1948, 1948-1990, and 1990-2010) are presented in the diagrams in Figures A1-A4.

3.1. Driving Forces, Major Processes, and Intensity of Landscape Change in the Period 1845-1896

The first period of the long-term land use change research (1845-1896) was influenced, from the outset, by the revolutionary movement of 1848, which resulted in the abolition of serfdom, the key moment leading to social modernization. People could move freely from the countryside to cities and towns, where labor was needed in newly established factories. To secure food supplies, it was important to increase agricultural productivity and create a functional transportation network. These factors constitute the crucial driving forces of the second half of the 19th century (for an overview of proximate and underlying drivers during 1845-1896, see Figure A1); they also led to the agricultural and transportation revolutions as part of the complex revolution of the modern era [47-51].
The railway network was practically completed during the final two decades of the 19th century. Regional (local) railway lines were purpose-built: raw material needed for the food processing industry (sugar beet, potatoes, and grain) was transported on trains. Railways also served fast-growing cities and provided a vital transport link for commuters who regularly travelled to cities and towns. This was the first phase of urbanization, during which all towns grew rapidly.
Austria-Hungary collapsed at the end of World War I, and the whole of Central Europe was completely restructured. The agricultural revolution was completed in the period 1870-1880, as more effective farming was gradually developed. Arable land, however, continued to expand until the beginning of World War I. Its expansion was the most important land use change between 1845 and 1896, occurring in STUs that covered 70.5% of the national territory (Figure 2a and Table 2). More effective farming practices first appeared on big farms with sufficient funds to adopt more advanced methods and tools (fertilizers, cultivars, and basic mechanization). As a result, yields rose, and modern techniques were gradually adopted by smaller agricultural enterprises as well, particularly in the fertile plains (Polabská nížina, Central Moravia-Prostějov; see Figure 1).
Thus, the direct influence of natural conditions on landscape changes gradually decreased; on the contrary, technological, economic, and social factors became more important. Regarding the value of soil (seen as a natural resource), the economic aspect was crucial in fertile areas, whereas in the less-favored regions, environmental aspects prevailed.
Between the years 1845 and 1896, the total intensity of landscape changes (see Figure 2c) was high in the fertile plains (Polabská nížina, the plains of Southern Moravia; Figure 1), where arable land expanded (as shown by the index of change, changes were recorded on about 10%, and in some places more than 20%, of the cadastral territory). However, the increase in arable land differed by region: while in the fertile plains of Polabská nížina and Central Moravia the increase reached more than 5%, in the mountains the share of arable land did not increase. As cattle moved into stables, it became necessary to secure forage for the animals, which was produced on fields (cereals, potatoes, green beans, and green residues). Permanent grasslands, particularly pastures, shrank significantly (decreasing by 3.3 percentage points; see Figures 2b and 6). The area covered by agricultural land peaked in the 1870s at almost 70%, the largest share for the whole considered period 1845-2010. The three-field system no longer existed; it was replaced by more advanced crop rotation supported by manure, then by sodium nitrate and, from the turn of the 20th century, also by artificial fertilizers. As fallow land practically disappeared, arable land increased by up to 30% in some areas, contributing to a crisis of surplus production. An enduring agrarian crisis appeared in the period 1880-1890. In Austria-Hungary, the competition from cheap cereal imports from the United States was largely reduced by the introduction of tariffs levied on imported products.
The decrease in forest cover was a permanent feature of landscape change in the territory of present-day Czechia from the early colonization period (9th century) until the 1870s. The Emperor's decree of 1852 (the so-called "Forest Law") introduced binding forest management rules that were mandatory for all owners, including the nobility. The most important rule stipulated that all cleared areas had to be reforested within five years; any conversion of forest land to another type of land use also had to be approved in advance. As a result, forests started to expand from the 1870s onwards, as big landowners introduced afforestation schemes in less fertile areas (the nobility owned approximately 50% of the forests at the end of the 19th century). The "Forest Law" can thus be identified as a triggering factor of the forest transition (a process introduced by [3]) in Czechia.
Land registry data show (see Figure 2a and Table 2) that afforestation constituted the most important landscape change in many areas of Bohemia and Moravia up to 1896 (in STUs that covered 19.3% of the national territory). Forest plantations, however, changed the composition of forests, because fast-growing species, particularly Norway spruce, were preferred; the desirable, stable mixed forests thus became less frequent. Wood became a valuable material in the construction, furniture, and paper industries.
3.2. Driving Forces, Major Processes, and Intensity of Landscape Change in the Period 1896-1948
Political, social, and economic conditions changed fundamentally during the period 1896-1948. This was the period when the territory of present-day Czechia (and the whole of Central Europe) experienced rapid transitions.
Contradictory landscape changes, and driving forces at both the proximate and underlying levels, can be identified; see Figure A2. Austria-Hungary consolidated after the 1867 Constitution, but the other, mainly Slavonic, member nations of the "confederation" had much less influence on its administration. After the defeat in World War I, the monarchy disintegrated into new states (Czechoslovakia, Hungary, Austria, and Yugoslavia), several of them multinational.
From the perspective of landscape change, the period before World War I was crucial. Major technological improvements enabled the economic boom at the turn of the 20th century. Thus, the traditional society, which for centuries had been confined to a limited space, was transformed into a modern society with increased mobility and more intensive contacts among producers and consumers (new power plants and power lines, more advanced machines, cars, railway lines, agricultural mechanization, etc.).
World War I had a fundamental impact on the whole of Central Europe and its geopolitical position. The ensuing political changes resulted in major economic and social problems. In many cases, economic ties were severed: Bohemia and Moravia, formerly the major industrial regions of Austria-Hungary, lost many important markets.
Political and economic turmoil supported aggressive tendencies in Germany. After the Nazis came to power in 1933, Germany started to build up its army and made a failed attempt to occupy and govern the whole of Eastern Europe. This attempt came to an end in 1945, when Germany was finally defeated and the former German territory was occupied according to the agreements made by the victorious Allies. In accordance with the Allies' agreement at the Yalta Conference in 1945, the Central and Southeast European states (mostly Slavonic) were included in the sphere administered by the Soviet Union. The Soviets implemented a different style of political and economic cooperation, and the so-called "socialist" bloc was created. While these changes in the political climate should be viewed from a European perspective, local driving forces (both proximate and underlying) can be analyzed for the territory of present-day Czechia.
The period 1918-1936 was strongly influenced by the Agrarian Party, the strongest political entity, which enforced a land reform. Property belonging to the Habsburgs and the German nobility, together with a small part of the property of the Catholic Church, was confiscated (40,202 km² in total), and the maximum acreage of farms was limited. This was a fundamental change in ownership and in the utilization and structure of the landscape. Around 60,000 landless rural people became new landowners, farming small plots of land. For a limited period after 1920, small farmers expanded the total area covered by arable land in order to earn a living. The land reform, however, was not enforced in full, and some of the confiscated land was returned to its original owners during the German occupation of Czechia in 1939-1945. Approximately 500,000 farms were affected by the Great Depression (1929-1933), of which about 20% were forced into bankruptcy. Many manufacturing businesses also closed down [52].
The events of World War II significantly influenced the period 1938-1948 and the post-war years. Rationing was introduced and remained in effect until 1953; this system limited consumption but encouraged corruption and the black market. In 1948, the communists seized power.
An initial analysis suggests there were no major differences in land use between 1896 and 1948. According to our dataset, the total area covered by arable land decreased by only 1.7 percentage points during this period; in 1948, arable land still covered 49.9% of the national territory. Meadows and pastures also shrank slightly (by 1.7 percentage points). By contrast, permanent cultures expanded (+0.4 p.p.), as did forests (+1.25 p.p.; in 1948, forests covered 30.2% of the territory) and built-up and remaining areas. As Figure 3a and Table 2 show, various types of landscape change took place between 1896 and 1948. Afforestation constituted the major process of landscape change (the most common change in STUs covering 39% of the national territory; see Figure 3a), followed by urbanization (in STUs covering 32% of the national territory). Figure 3b shows that arable land, meadows, and pastures were the land use classes most likely to shrink. However, the changes varied by region: in some fertile areas of Poohří and Polabí and in Southern Moravia (see Figure 1), slight increases in arable land were even recorded. A major expansion of forests can be seen in the southern half of Czechia and in areas adjacent to the Slovak border (Šumava and its foothills, the Bohemian-Moravian Highlands and, to a certain extent, Krkonoše; see Figure 1). Figure 3c summarizes the intensity of landscape change. In Prague, Pilsen, Southern Moravia, areas adjacent to the Slovakian border, Liberec, and Jablonec (geographical locations shown in Figure 2), changes were recorded on more than 10% of the territory; these were the areas with the highest intensity of change. By contrast, the peripheral border regions of NW and SW Bohemia, South-Central Bohemia, and the Jeseníky Mountains experienced only modest changes (Figures 1 and 3c).
3.3. Driving Forces, Major Processes, and Intensity of Landscape Change in the Period 1948-1990
A centrally planned economy was adopted during the period between the communist coup d'état in 1948 and the collapse of communism in 1989. All of the important strategies were carried out and implemented by the Central Committee of the Communist Party, the dominant political body in the country. This included land use changes.
Urbanization was the dominant process of landscape change in most STUs (67% by area in total) during this period. Afforestation was also important (the most significant change in almost 20% of STUs by area); see Figure 4a and Table 2. In particular, these general trends were closely connected with the transition of agricultural land and arable land (Figure 4b). The total area of arable land decreased by more than 9% (i.e., by 700,000 hectares). These changes resulted from specific driving forces under communism that were primarily political and institutional in nature. A detailed overview of these driving forces (subdivided into proximate and underlying drivers) is shown in Figure A3. It is also important to note that the cornerstones of political and social life were imported from and controlled by the Soviet Union.
People moved in large numbers from rural areas to cities and towns, where living conditions were more favorable. Approximately 3 million people lived in the countryside prior to World War II. Due to this rural-urban migration and the post-war transfer of the Czech Germans living in the borderland known as the Sudetenland (excluding the border region with Slovakia), the population of Czechia dropped from 10.7 to 8.8 million by 1947 [53]. The process of countryside depopulation naturally continued. Collectivization thus took place at a time when industrialization was already underway and a large number of people were leaving rural areas.
The large fields that were created were more suitable for newly introduced machinery, but the impacts on the landscape were severe: erosion increased, soil (including fertilizers) was easily washed into streams and lakes, and many habitats (particularly those important for hares, pheasants, etc.) vanished. The biological protection of crops also deteriorated because the number of birds preying on insects was reduced. Sloping land (gradients of 7% or more) and small and irregular plots could not be cultivated by large machines. As a result, arable land areas were abandoned or gradually transformed into forests, giving rise to a form of "new wilderness" in some places.
Vast tracts of agricultural land (including arable land) were lost in areas close to the border, especially at higher elevations (the border mountain ranges; Figure 1). The post-war transfer of the Czech Germans was an important driving force, because repopulation by native Czechs proved to be inadequate. Consequently, many villages in the frontier region ceased to exist, reducing the economic activity in such areas [54-57]. The Iron Curtain, installed along the western border, further limited farming and other activities; the space between the Iron Curtain and the border itself became inaccessible.
Industrialization also had a significant impact on post-war landscape changes, as many new industrial complexes were built on greenfield sites. Mining was responsible for great losses of agricultural land: many new mines were created in the Ostrava and Kladno regions (see Figure 1). Of greater impact were the large-scale open pits opened in northwestern Bohemia (the coal basins around Most, Chomutov, and Sokolov; geographical locations shown in Figure 1). The combination of these activities led to environmental devastation (heavy pollution of water, soil, and air; acid rain; and the devastation of forests in Northern Bohemia). From a regional perspective, the general trends are quite clear. A decrease in agricultural land was recorded in more than 90% of STUs over the period 1948-1990. This process was boosted by increased urbanization in the metropolitan areas (Prague, Ostrava, Brno, and the coal basins in northwestern Bohemia; Figure 1), where the index of change often exceeded 20% and occasionally 40%. Figure 4c clearly shows that this index also reached its highest values in the coal mining areas.
State subsidies intended to support agricultural businesses operating in less favorable natural conditions also constituted an important driving force. The scope of these subsidies depended on natural conditions and could reach 10%-80% of the gross agricultural value. However, the agricultural practices in these areas were not beneficial to the landscape: rather than maintaining desirable permanent grasslands, intensive crop production was undertaken on arable land in mountainous regions, including on steep slopes. In addition, cooperatives on fertile land were taxed, and part of these tax revenues was used to subsidize the less-favored areas.
3.4. Driving Forces, Major Processes, and Intensity of Landscape Change in the Period 1990-2010
The laws that enabled privatization and restitution of property seized under communism constituted one of the most important driving forces in this period; for a detailed schematic of the driving forces during this period, see Figure A4. These laws fundamentally changed ownership of agricultural and industrial enterprises. In the agricultural sector, property that had been managed by cooperatives and state-owned estates under communism was returned to approximately 3.5 million previous owners. Most of these property transfers were completed by 1995. As a result, landscape utilization fundamentally changed. Many eligible persons were not interested in the reclaimed property, and only a small fraction began agricultural businesses. Most of the restituted land was leased to cooperatives and other agricultural businesses. Former socialist-style cooperatives were gradually transformed into cooperatives based on property ownership. All of these large-scale property transfers had a substantial impact on landscape changes at local and regional scales.
The accession of Czechia to the European Union (2004) also influenced agricultural business. Foreigners were not allowed to purchase land until 2012. Many Czechs, however, were not interested in purchasing agricultural land because of the long-term nature of the investment. Regarding agricultural subsidies, Czechia was already eligible for EU funds prior to 2004; the money was allocated mainly to restructuring and environmental projects.
Compared to farmers in the old EU member states, Czech farmers were eligible for significantly lower subsidies in the period 2005-2012, when land purchases were restricted. In 2005, these subsidies amounted to only 25% of those in the EU-15, increasing by 8% in each successive year. Subsidies provided at the national level were negligible. The resulting lack of funds led to a marked decrease in agricultural production, and to extensification and land abandonment.
In many regions, arable land shrank the most of all the land use classes (Figure 5b). In the lowlands, particularly in areas close to cities and towns, arable land came under strong pressure from developers. Large parts of arable land were transformed into built-up areas due to intensive suburbanization, particularly from the late 1990s onwards [58,59]. Urbanization was the second major process of landscape change during the examined period (see Figure 5a and Table 2; it was the most important process in STUs covering 19.7% of the national territory). However, there were marked regional differences in the changes in agricultural production and in the intensity of arable land transformation (arable land was stable in the lowlands, while decreases of more than 5% were recorded in the mountains). On fertile soils, intensive farming, including husbandry (cattle, pigs, and poultry), generally continued. The transition of arable land into meadows and pastures was typical of regions with less favorable natural conditions (supported by subsidies from the EU Common Agricultural Policy and by other schemes of both the European Union and the state focused on sustainable landscape management) and constituted the major process of landscape change in STUs covering 33.8% of the national territory (Figure 5a and Table 2). The abovementioned property restitution, in addition to the changing nature of agricultural subsidies, played a major role in this process; subsidies were intended to secure "landscape maintenance" rather than production per se. The expansion of forests was also important.
The intensity of landscape change differed significantly among regions. The most intensive changes (Figure 5c) were recorded in areas where agriculture declined and fields were transformed into meadows and pastures (typically in the mountainous regions flanking the border). Regions subject to strong suburbanization (i.e., around cities and along major highways) also showed a higher intensity of change (in many cases, changes were recorded on more than 20% of the total area of an STU). Suburban areas became attractive for people seeking better housing, and new residential projects were often built on arable land.
The decline in mining and related activities should also be mentioned as an important factor. After 1990, three new national parks (Šumava, Podyjí, and České Švýcarsko; Figure 1) and a number of Protected Landscape Areas were established. In these areas, commercial exploitation of the landscape was severely limited. By contrast, the fall of the Iron Curtain opened up the areas along the southern and western borders and enabled common activities, including agriculture and forestry, to resume. In general, the period 1990-2010 was marked by the increased implementation and precision of landscape management, including nature conservation. This shift was also reflected in administrative changes: the Ministry of the Environment was established in 1989, and the Agency for Nature Conservation in 1995.
Only a portion of the abovementioned landscape changes was recorded in the Land Registry. In many cases, land use changes are not reflected in the files (according to our estimates, before the accession to the European Union, there may have been a total of up to 500,000 hectares of such plots). Part of this was former agricultural land (particularly arable land), now abandoned and gradually turning into "new wilderness" without any human intervention. The authors of [60] argue that this could account for 5% of the agricultural land in selected cadastres between 1990 and 1997. The delayed entry of land use changes into the cadastre is one example of the data shortcomings that must be taken into consideration when assessing the extent of the changes [43].
3.5. Summary of the Period 1845-2010
In accordance with previous research [6,8-10,33,39], it can be concluded that landscape changes are determined by certain combinations of political/institutional, cultural, and natural/spatial drivers, rather than by a single key driver. It should be highlighted that landscape changes in general form part of the large-scale changes linked to societal modernization. The organization of society has changed from tiny units that relied on a small-scale subsistence economy and were confined to a limited space to a multi-level hierarchical society in which territorial units are interconnected by specific links depending on their different functions.
These general changes are also reflected in changing land use patterns. In [68], it is argued that land use changes appear last of all, and only after political, economic, and social changes have taken place. Landscape changes are the most complex and depend on social change.
The key findings of our research are as follows:

(a) Different processes of landscape change prevailed in the observed periods. The first period, 1845-1896, was the only one during which the arable land area increased, and the most recent, 1990-2010, was the only one during which permanent grasslands increased. The permanent cultures, forests, and remaining areas classes increased in all periods. The communist period was characterized by unified types of change: urbanization, afforestation, and a large decrease in arable land (by almost 9 percentage points) were the dominant processes. By comparison, the period 1896-1948 was characterized by the most variable changes (Table 2 and Figures 2-6).

(b) Landscape changes were influenced by a number of different factors over the whole period 1845-2010 (Figures A1-A4). Among the driving forces that likely had the most significant impact on landscape change during the different periods were:
8. 1948-1989: nationalization, a centrally controlled economy, collectivization, industrialization, and a special system of subsidies (intensive agriculture at high altitudes was encouraged);
9. From 1990 onwards: the reintroduction of the market economy, property restitution, privatization, accession to the European Union, a new system of agricultural subsidies, and a boom in urbanization.

(c) In accordance with previous research [9,10,39,41], our analysis shows the crucial role of underlying drivers, mainly related to political and institutional factors; particularly in the first two periods, however, technological drivers also played a significant role. Natural conditions played a very important role at the beginning of the examined period (Differential Rent I); subsequently, social and economic factors became more important (Differential Rent II) [69].

(d) Regarding the main findings for the development of the particular land use classes, we can summarize as follows:
1. The decrease in agricultural land and arable land was an important long-term trend (with the exception of the period 1845-1896). Over the same period, agricultural efficiency and production increased.
2. Meadows and pastures continued to decrease until 1990; this process was reversed only recently, when grassing over took place. Subsidies provided for sustainable landscape management were the essential driver of this process.
3. Afforestation was also important; in terms of percentage points, the expansion of forests may appear modest (from 29% to almost 34% forest cover), but compared to the areas that were developed, forests "invaded" a much bigger space. This change reflects trends common in economically developed European countries, i.e., the so-called forest transition described by Mather [3].
4. A marked increase in built-up and remaining areas was recorded; these expanded three- to four-fold. Urbanization was one of the key processes during all subperiods with the exception of the first (1845-1896).
At the national scale, long-term changes (from the middle of the 19th century) can only be compared with publications from the territory of Slovenia, for which a similar dataset was used. According to Petek and Gabrovec [44], the decrease in arable land was the most extensive process in Slovenia during the period 1896-1999, followed by urbanization. For the shorter period between 1900 and 2010, Fuchs et al. [70] determined the main land-change processes in Europe to be cropland/grassland dynamics, afforestation, deforestation, and urbanization. Based on 144 European studies with different time scales (about 22% of the studies focused on periods longer than 100 years), Plieninger et al. [9] determined that land abandonment and agricultural extensification were the most prominent proximate drivers. Bürgi et al. [10] analyzed six regions across Europe based on historical and contemporary maps from the 19th and 20th centuries. They call attention to the polarization of the landscape between intensification and extensification, whereby agricultural land is consumed by both settlement growth and afforestation processes. This polarization of the landscape (reported also in [71] and [39]) was identified in our analysis for the most recent period in Czechia as well. To some extent, however, it can be identified from 1896 onwards, when urbanization as well as afforestation, grassing over, and land abandonment were gradually taking place on former arable land (or permanent grasslands).
(e) At the present time, fertile land, the most important natural asset, is facing significant threats. Tens of thousands of hectares of fertile land are lost each year as a result of commercial and housing developments (including related areas, such as parking lots). Most of this land will probably never be recovered for agricultural use. Substantial growth in urbanization leads to an extensive increase in impervious areas and poses a threat for the future, not only because of the loss of quality land but also because of the accelerated runoff of water from the landscape [58,72].
(f) The human impact on the environment has grown over the 165 years due to technological development. The devastation of the landscape, and of the environment in general, was critical during the communist period due to the synergy of a number of negative factors; this confirms our hypothesis that the major changes occurred in the period 1948-1990. After the collapse of communism, landscape and environmental protection improved. Grassing over and afforestation are processes beneficial to landscape and nature preservation. However, land abandonment can also have negative consequences: Lipský [73], among others, mentions the expansion of invasive species and the extinction of some species of plants and animals. According to Reif and Vermouzek [74], the steep decline in bird populations is currently an extremely serious threat in Czechia and is associated with the intensification of agricultural production, which accelerated after Czechia joined the European Union. The decline in the populations of farmland bird species in recent decades has become a significant problem in the European Union and may be driven by agricultural intensification and other changes in the Common Agricultural Policy [75,76].
It remains to be seen if agri-environmental schemes will be effective in supporting farmland biodiversity [74,77].
Advantages and Limitations of the Data Used
Our analysis and the findings above are based on historical data from cadastral records. The long-term research presented in our study was made possible mainly by the mapping of the Franciscan Cadastre, which was carried out around the middle of the 19th century. Ensuring the consistency of the data across all of the examined years was crucial. As mentioned in the methodology, we used so-called stable territorial units, which were amalgamated from cadastres, with the maximal fluctuation in STU size between different years set at 2%. Regarding the comparability of the land use classes over the study period, we used a simple legend (eight categories) because a more detailed categorization was not available for the recent years; as a result, the categorization of the Stable Cadastre, which originally included more than 20 classes, could not be used. There may be some differences in the "quality" of individual land use classes in specific analyzed years because of, for example, different management practices. These potential differences could not be taken into consideration, however, because the appropriate data were inaccessible. Furthermore, the mapping approach/technique may differ in individual years (see [43]). Regarding the quality and reliability of the data, it should further be noted that partly in 1990, and particularly in 2010, cadastral records were not always up to date due to the rapid landscape changes after the fall of communism, and some records lagged behind the current state of the landscape. In spite of these possible shortcomings, we must emphasize the value and specificity of the dataset used, which enables the study of landscape changes spanning extensive time and spatial frames. Global research on land use and land cover changes cannot employ such a dataset and is mostly dependent on old maps or aerial photos available only for limited areas or time horizons [10,11,70], or on satellite remote sensing data that are available only from the 1970s onwards [17-25].
Considering the potential of the data and methods used for landscape change evaluation in other parts of Europe, one limitation relates to the spatial extent of the Stable Cadastre, which covers only the countries located in the territory of the former Austro-Hungarian Empire. In these countries, our methods can be used for the same time period, provided that data consistency is ensured. Previous comparable studies working with Franciscan Cadastre data were mostly carried out in Slovenia [15,44]. The importance of our study of Czechia, however, lies mainly in the methods used. Their potential is broader: the employed parameters can be applied to any area and any time period at the level of similar administrative units, or can be upscaled to the level of districts, regions, or countries. Cadastral data for various time horizons, or digitized orthophotos, are good sources for such an analysis.
Conclusions
This analysis documents and evaluates the major processes of landscape change, including their driving forces, across the territory of present-day Czechia over a long period of time (165 years). Our analysis demonstrates that the territory of Czechia represents a crossroads of historical events, drivers, and diverse types and directions of landscape change in Central Europe.
In our study, it was demonstrated that changes in the landscape of Czechia during the period 1845-2010 reflect key historical events in Europe. The location of Czechia, in Central Europe and on the margin of the former Soviet bloc, played an important role.
Thus, Czechia represents a unique model area. The existence of two fundamentally different political and economic systems had large impacts on the landscape: "traditional" capitalism, which ruled between the mid-19th century and 1948, was replaced by "bureaucratic socialism" or, more precisely, the communist regime (1948-1989). The current period, since 1990, has been dominated by "modern" (global) capitalism.
Czechia is also a unique model area because detailed land use data are available for a long period of time. The land use data covering 165 years were surveyed for the purposes of the Land Registry, i.e., for taxation and market reasons. Only a few countries of the former Austrian Empire (for example, Austria, Slovenia, and Czechia with its historical lands of Bohemia, Moravia, and Silesia) can draw on this most valuable data source, the so-called Franciscan Cadastre, which was established in the middle of the 19th century [51,78], and can use it for long-term landscape change evaluation in combination with cadastral records from subsequent time horizons [43,44].
In this article, historical landscape changes were described in detail. Future developments, however, remain uncertain. What factors will influence future changes in the landscape? The key factors will be the drivers that have acted at the European and national levels since 1990; however, global factors (climate change, pollution, threats to biodiversity, food security, etc.) should also be taken into consideration.
Czechia is positioned "on the roof of Europe", where its waters discharge into three different seas. Global warming has caused important changes in precipitation patterns (regarding volume, structure, and intensity) in recent decades, and this trend is likely to continue in the future. Periods of drought appear to be the major challenge for Czech agriculture and forestry [68,72], and will probably contribute to the further diversification of agricultural practices in different regions, thus leading to specific regional impacts on the landscape and its utilization.
External factors are also important. Compared to some other EU countries, natural conditions in Czechia are less favorable, and intensification schemes cannot be as efficient. However, the trend of global population increase (concentrated particularly in African countries) may soon have the opposite effect: increased demand for food and other agricultural products may encourage farmers to cultivate areas that are currently abandoned. Such changes have already occurred in the past [43,70,71]. Whatever changes occur in the future, our findings, which contribute to the efficient monitoring of landscape changes across the entire country, can be used for the effective management and preservation of valuable natural resources.
"year": 2021,
"sha1": "758769beac416defba6745d029b549559e9e08e0",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-445X/10/1/34/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "cc22d96863f4916c1a22a65c5093121fbdc8cdbe",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Geography"
]
} |
Assume that we are given a closed chord-generic Legendrian submanifold $\Lambda \subset P \times \mathbb R$ of the contactisation of a Liouville manifold, where $\Lambda$ moreover admits an exact Lagrangian filling $L_{\Lambda} \subset \mathbb R \times P \times \mathbb R$ inside the symplectisation. Under the further assumptions that this filling is spin and has vanishing Maslov class, we prove that the number of Reeb chords on $\Lambda$ is bounded from below by the stable Morse number of $L_{\Lambda}$. Given a general exact Lagrangian filling $L_{\Lambda}$, we show that the number of Reeb chords is bounded from below by a quantity depending on the homotopy type of $L_{\Lambda}$, following Ono-Pajitnov's implementation in Floer homology of invariants due to Sharko. This improves previously known bounds in terms of the Betti numbers of either $\Lambda$ or $L_{\Lambda}$.
1. Introduction 1.1. Motivation. One of the first striking applications of Gromov's theory of pseudoholomorphic curves [33] was that a closed exact Lagrangian immersion Λ ⊂ (P, dθ) inside a Liouville manifold must have a double-point, given the assumption that it is Hamiltonian displaceable. Gromov's result has the following contact-geometric reformulation, which will turn out to be useful. Consider the so-called contactisation (P × R, dz + θ) of the Liouville manifold (P, dθ), which is a contact manifold with the choice of a contact form. Recall that a (generic) exact Lagrangian immersion Λ ⊂ (P, dθ) lifts to a Legendrian (embedding) Λ ⊂ (P × R, dz + θ). One says that Λ is horizontally displaceable given that Λ is Hamiltonian displaceable. The above result thus translates into the fact that a horizontally displaceable Legendrian submanifold Λ must have a Reeb chord for the above standard contact form -i.e. a non-trivial integral curve of ∂ z having endpoints on Λ. A similar result holds for Legendrian submanifolds of boundaries of subcritical Weinstein manifolds, as proven in [41] by Mohnke. In the spirit of Arnold [3], the following conjectural refinement of the above result was later made: the number of Reeb chords on a chord-generic Legendrian submanifold Λ ⊂ (P × R, dz + θ) whose Lagrangian projection is Hamiltonian displaceable is at least 1 However, as was shown by Sauvaget in [48] by the explicit counter-examples inside the standard contact vector space (R 4 × R, dz + θ 0 ), θ 0 = −(y 1 dx 1 + y 2 dx 2 ), the above inequality is not true without additional assumptions on the the Legendrian submanifold; also, see the more recent examples constructed in [16] by Ekholm-Eliashberg-Murphy-Smith. The latter result is based upon the h-principle proven in [26] by Eliashberg-Murphy for Lagrangian cobordisms having loose negative ends in the sense of Murphy [42].
On the positive side, the above Arnold-type bound has been proven using the Legendrian contact homology of the Legendrian submanifold, under the additional assumption that the Legendrian contact homology algebra is sufficiently well-behaved. Legendrian contact homology is a Legendrian isotopy invariant independently constructed by Chekanov [8] and Eliashberg-Givental-Hofer [24], and later developed by Ekholm-Etnyre-Sullivan [18]. This invariant is defined by encoding pseudoholomorphic disc counts in the Legendrian contact homology differential graded algebra (DGA for short) which usually is called the Chekanov-Eliashberg algebra of the Legendrian submanifold. In the case when the Chekanov-Eliashberg algebra of a Legendrian admits an augmentation (this should be seen as a form of nonobstructedness for its Floer theory), the above Arnold-type bound was proven by Ekholm-Etnyre-Sullivan in [20] and by Ekholm-Etnyre-Sabloff in [17]. In [14], the authors generalised this proof to the case when the Chekanov-Eliashberg algebra admits a finite-dimensional matrix representation, in which case the same lower bound also is satisfied.
The above Arnold-type bound is also related to the bound on the number of Hamiltonian chords between the zero-section in T*L (or, more generally, any closed exact Lagrangian submanifold of a Liouville manifold) and its image under a generic Hamiltonian diffeomorphism. Namely, such Hamiltonian chords correspond to Reeb chords on a Legendrian lift of the union of the Lagrangian submanifold and its image under the Hamiltonian diffeomorphism. In fact, as shown by Laudenbach-Sikorav in [38], the number of such chords is bounded from below by the stable Morse number of the zero-section (and hence, in particular, by half of the sum of the Betti numbers of the disjoint union of two copies of the zero-section). Arnold originally asked whether this bound can be improved, and whether, in fact, the Morse number of the zero-section is a lower bound. However, this question seems to be out of reach of current technology. On the other hand, we note that the stable Morse number is equal to the Morse number in a number of cases; see [11] as well as Section 2.1 below for more details.
Finally, we mention the remarkable result by Ekholm-Smith in [22], which shows that the smooth structure itself can predict the existence of more double points than the original bound given in terms of the homology. Namely, a 2k-dimensional manifold Σ^{2k}, for k > 2, that admits a Legendrian embedding having precisely one transverse Reeb chord in the standard contact space must be diffeomorphic to the standard sphere unless χ(Σ^{2k}) = −2. Also see [23] for similar results in other dimensions.
1.2. Results. In this paper, we explore a priori lower bounds for the number of Reeb chords on a Legendrian submanifold Λ ⊂ (P × R, dz + θ), given that it admits an exact Lagrangian filling L_Λ ⊂ (R × P × R, d(e^t(dz + θ))) inside the symplectisation. Recall that the condition of admitting an exact Lagrangian filling is invariant under Legendrian isotopy; see e.g. [5]. The bound will be given in terms of the simple homotopy type of L_Λ. First, we recall that such a Legendrian submanifold automatically has a well-behaved Chekanov-Eliashberg algebra; namely, an exact Lagrangian filling induces an augmentation by [15]. In the case when the projection of Λ to P is displaceable, the aforementioned result can thus be applied, giving the above Arnold-type bound. In this case, however, there are even stronger bounds that can be obtained from the topology of the exact Lagrangian filling L_Λ (and without the assumption of horizontal displaceability). See Section 1.3 below for previous such results, as well as for an outline of the proof, which is based upon Seidel's isomorphism in wrapped Floer homology. This is also the starting point of the argument that we use in order to prove our results here.
In the following, we assume that a Legendrian submanifold Λ ⊂ (P × R, α := dz + θ) is chord-generic and has an exact Lagrangian filling L_Λ ⊂ (R × P × R, d(e^t α)), where t denotes the coordinate on the first R-factor. In particular, the set of Reeb chords Q(Λ) of Λ is finite. Further, the set of Reeb chords c in degree |c| = CZ(c) − 1 ∈ Z/µ_{L_Λ}Z will be denoted by Q_{|c|}(Λ), where the grading is induced by the Conley-Zehnder index modulo the Maslov number µ_{L_Λ} ∈ Z of L_Λ, as defined in [21]. Observe that µ_{L_Λ} = 0 in particular implies that the first Chern class of (P, dθ) vanishes on H₂(P).
For a group G that is an epimorphic image of π₁(L_Λ), consider the Morse homology complex of L_Λ with coefficients in the group ring R[G], twisted by the fundamental group, where R is a unital commutative ring and f : L_Λ → R is a Morse function satisfying df(∂_t) > 0 outside of a compact set. (The generators of this complex are graded by the Morse index, and the differential counts negative gradient flow lines.)

Theorem 1.1. Let L_Λ ⊂ (R × P × R, d(e^t α)) be an exact Lagrangian filling of an n-dimensional closed Legendrian submanifold Λ ⊂ (P × R, α), with fundamental group π := π₁(L_Λ) and Maslov number µ_{L_Λ} ∈ Z.
(i) In the case when the filling is spin and when µ_{L_Λ} = 0, the Morse homology complex described above is, after stabilisation, simple homotopy equivalent to a complex of free R[G]-modules whose generators are in bijective correspondence with the Reeb chords of Λ. Here we can always take R = Z₂, while we are free to choose an arbitrary unital commutative ring in the case when L_Λ is spin.
We prove Theorem 1.1 in Section 3. Now let stabMorse(M) denote the stable Morse number of a manifold M with possibly non-empty boundary; see Definition 2.5. Using Theorem 1.1 and the adaptation of [11, Theorem 2.2] to the case of manifolds with boundary (see Proposition 2.9), the following result is immediate:

Corollary 1.2. Suppose that Λ ⊂ P × R is a chord-generic closed Legendrian submanifold admitting an exact Lagrangian filling L_Λ which is spin and has vanishing Maslov number. It follows that the bound

#Q(Λ) ≥ stabMorse(L_Λ)

is satisfied for the number of Reeb chords on Λ.
By using the long exact sequence in singular homology of the pair (L̄_Λ, ∂L̄_Λ = Λ), where L̄_Λ denotes the compact part of L_Λ, we obtain the following inequalities for any field F:

#Q(Λ) ≥ stabMorse(L_Λ) ≥ dim_F H_*(L_Λ; F) ≥ (1/2) dim_F H_*(Λ; F).    (1.1)

Obviously, Inequality (1.1) is a strengthening of the original Arnold-type bound. For a discussion about how to construct examples of exact Lagrangian fillings for which our lower bound is strictly greater than the previously known bounds in terms of the homology of the filling, we refer to Section 6.1. Note that, in forthcoming work [27], Eriksson Östman also obtains an improved version of the above Arnold-type bound for certain horizontally displaceable Legendrian submanifolds. That bound is obtained in terms of the homotopy type of the Legendrian submanifold itself; it does not assume the existence of an exact Lagrangian filling, but rather the existence of an augmentation of a version of the Chekanov-Eliashberg algebra with twisted coefficients that is defined in the same article.
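To spell out the step behind the last inequality in (1.1), here is a sketch of the standard argument (our addition; it assumes field coefficients, with orientability of the filling, or F = Z₂, needed for the duality step):

```latex
% Why dim H_*(\bar{L}_\Lambda; F) >= (1/2) dim H_*(\Lambda; F) when
% \bar{L}_\Lambda is compact of dimension n+1 with boundary \Lambda.
% Exactness of the sequence of the pair at H_i(\Lambda),
%   H_{i+1}(\bar{L}_\Lambda, \Lambda) \to H_i(\Lambda) \to H_i(\bar{L}_\Lambda),
% gives b_i(\Lambda) \le b_{i+1}(\bar{L}_\Lambda, \Lambda) + b_i(\bar{L}_\Lambda),
% while Poincare--Lefschetz duality gives
% b_{i+1}(\bar{L}_\Lambda, \Lambda) = b_{n-i}(\bar{L}_\Lambda). Summing over i:
\[
\sum_i b_i(\Lambda; F)
  \;\le\; \sum_i b_{n-i}(\bar{L}_\Lambda; F) + \sum_i b_i(\bar{L}_\Lambda; F)
  \;=\; 2 \dim_F H_*(\bar{L}_\Lambda; F).
\]
```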
In the course of proving the above result, we also obtain the following generalisation of the aforementioned result by Laudenbach-Sikorav [38], which is also related to the theory of stable intersection numbers as introduced by Eliashberg-Gromov in [25, Section 2.3].

Theorem 1.3. Consider a closed exact Lagrangian submanifold L ⊂ (P, dθ) which is spin and has vanishing Maslov number. For any k ≥ 0, the exact Lagrangian submanifold L × R^k ⊂ (P × C^k, dθ ⊕ ω₀), with ω₀ = dx₁ ∧ dy₁ + · · · + dx_k ∧ dy_k, satisfies

#((L × R^k) ∩ φ¹_{H_s}(L × R^k)) ≥ stabMorse(L),

given that the above intersection is transverse, and that the Hamiltonian is of the form H_s = f_s + Q, where:
• Q(x₁ + iy₁, . . . , x_k + iy_k) = Q(x₁, . . . , x_k) is a non-degenerate quadratic form on R^k ⊂ C^k; and
• f_s : P × C^k → R, s ∈ [0, 1], satisfies the property that max_{s∈[0,1]} ‖f_s‖_{C¹} is bounded for a product Riemannian metric of the form g_P ⊕ g_std on P × C^k. Here we moreover require g_P to be invariant under the Liouville flow on (P, dθ) outside of a compact subset, while g_std denotes the Euclidean metric.
Remark 1.4. Damian's examples in [11] (see Theorem 2.6) can be used to produce a Hamiltonian for which |(L × R^k) ⋔ φ¹_{H_s}(L × R^k)| is strictly less than the Morse number of L, given that k ≫ 0 is sufficiently large.
We also get the following two theorems, which are consequences of Theorem 1.1 together with the algebraic machinery developed by Ono and Pajitnov in [43]. For a finitely presented group G, we denote by d(G) ∈ Z_{≥0} the minimal number of generators of G.

Theorem 1.5. Let µ_{L_Λ} = 0. Assume that π₁(L_Λ) admits a finite epimorphic image G, which is a simple or solvable group.
(i) Under the above assumptions, the number of Reeb chords on Λ is bounded from below in terms of d(G).

(ii) If moreover π₁(L_Λ) is a finite perfect group, then a stronger lower bound holds in terms of δ(G).

Here we have to use the field F = Z₂ unless L_Λ is spin, in which case it can be chosen arbitrarily.
Theorem 1.6. Assume that π₁(L_Λ) admits a finite epimorphic image G, which is a simple or solvable group.
(i) Under the above assumptions, the number of Reeb chords on Λ is again bounded from below in terms of d(G). Here we have to use the field F = Z₂ unless L_Λ is spin, in which case it can be chosen arbitrarily.
Note that the estimates presented in Theorems 1.5 and 1.6 are in general weaker than the estimate described in Corollary 1.2. On the other hand, the estimates from Theorems 1.5 and 1.6 hold in less restrictive settings than those of Corollary 1.2. Theorems 1.5 and 1.6 will be proven in Section 5.
In Section 6, we provide a construction of exact Lagrangian fillings with a given finitely presented fundamental group. This leads to examples where the estimate described in the second part of Theorem 1.5 coincides with the stable Morse number of an exact Lagrangian filling and, moreover, such that this bound is better than the estimate coming from the homological data of the filling. Finally, in Section 6.4, we provide a series of examples of exact Lagrangian fillings for which the estimates for the number of Reeb chords provided by Theorems 1.5 and 1.6 are arbitrarily far from the estimates coming from the so-called Seidel isomorphism in Theorem 1.7 (i.e. coming from homological data of the filling).
1.3. Previous results obtained using wrapped Floer homology. It was previously known that a Legendrian submanifold Λ ⊂ (P × R, dz + θ) admitting an exact Lagrangian filling L_Λ in the symplectisation satisfies a stronger form of the above Arnold-type bound. It should also be noted that, in this case, the bound is in fact true also without the assumption of horizontal displaceability of Λ. Namely, as outlined in [15, Conjecture 1.2] and later developed in [12] and [13] by the first author and by both authors, respectively, the number of Reeb chords for such a chord-generic Legendrian submanifold Λ is at least Σᵢ bᵢ(L_Λ; F). In the case of exact Lagrangian fillings inside a more general subcritical Weinstein domain, this result was proven by Ritter in [45, Theorem 11.1].
These results are all proven using roughly the same idea, based upon computations of the wrapped Floer homology of the filling which in these cases is acyclic. Wrapped Floer homology, originally defined in [2] by Abouzaid-Seidel and in [15] in a different form by Ekholm, generalises Floer's original Lagrangian intersection Floer homology [28] to the setting of exact Lagrangian fillings. Note that wrapped Floer homology is always acyclic in our setting, since the exact Lagrangian fillings considered here are displaceable in the appropriate sense.
Since the argument in the proof of the above bound is the starting point of the method that we will be using here, we now give a brief outline: First, the wrapped Floer homology computed for the pair (L_Λ, L′_Λ), where L′_Λ is any Hamiltonian push-off of L_Λ, is acyclic. For a suitable push-off L′_Λ, the wrapped Floer complex can thus be made into an acyclic mapping cone of a chain map from the Morse homology complex of L_Λ, as follows from Floer's original computation, to a subcomplex whose underlying vector space is given by F^{Q_{n−•}(Λ)}. This acyclic mapping cone gives rise to the so-called Seidel isomorphism:

Theorem 1.7 (Seidel). Let Λ be a Legendrian submanifold of P × R with the property that Λ admits an exact Lagrangian filling L_Λ. Then there is a quasi-isomorphism identifying H_•(L_Λ; F) with the right-hand side, where the right-hand side is the homology of a complex with underlying vector space F^{Q_{•−1}(Λ)}. In the case when char F ≠ 2, we must assume that L_Λ is spin and choose an appropriate spin structure.
We refer to [13] for a proof in the setting under consideration here, based upon the ideas in [15] and [12].
In particular, the above isomorphism implies that the number of Reeb chords on Λ is at least Σᵢ bᵢ(L_Λ; F), given that Λ is chord-generic. Our main result, Theorem 1.1, can be interpreted as an upgrade of Seidel's isomorphism to a simple homotopy equivalence.
Related ideas were also present in [6, Theorem 4.7], where the authors together with Chantraine and Ghiggini used wrapped Floer homology with twisted coefficients in order to show that a Legendrian submanifold in P × R with a single Reeb chord satisfies the property that any of its exact Lagrangian fillings must be contractible.
2. Preliminaries
2.1. Basics from symplectic and contact topology. By a Liouville manifold we mean a pair (P, θ) consisting of an even-dimensional smooth manifold P and a one-form θ ∈ Ω¹(P) for which dθ is a symplectic form, i.e. for which (dθ)^{∧ dim P/2} is a volume form on P. For us, a Liouville manifold will moreover always have a cylindrical convex (sometimes called positive) non-compact end. In other words, we will assume that (P, dθ) is of the form ((0, +∞) × Y, d(e^s α_Y)) in the complement of a compact sub-domain with smooth boundary. Here s is the standard coordinate on the (0, +∞)-factor and α_Y ∈ Ω¹(Y) is a one-form on Y. (The latter exact symplectic manifold is half of the symplectisation of the closed contact manifold (Y, α_Y).) Recall that a Liouville manifold possesses the so-called Liouville vector field X ∈ ΓTP, which is dθ-dual to θ, i.e. satisfying the equation i_X dθ = θ.
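For concreteness, the standard example is R^{2n} with the rotationally symmetric primitive; the following short computation (the factor ½ being a common normalisation choice) verifies the duality equation directly.

```latex
% Standard Liouville structure on R^{2n}: the radial field X is d\theta_0-dual to \theta_0.
\[
  \theta_0 = \tfrac{1}{2}\sum_{i=1}^{n}\bigl(x_i\,dy_i - y_i\,dx_i\bigr),
  \qquad
  X = \tfrac{1}{2}\sum_{i=1}^{n}\bigl(x_i\,\partial_{x_i} + y_i\,\partial_{y_i}\bigr),
\]
\[
  i_X\,d\theta_0
  = i_X\Big(\sum_{i=1}^{n} dx_i\wedge dy_i\Big)
  = \tfrac{1}{2}\sum_{i=1}^{n}\bigl(x_i\,dy_i - y_i\,dx_i\bigr)
  = \theta_0 .
\]
```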
Given an exact symplectic 2n-manifold (P, dθ), we define its contactisation to be (P × R, dz + θ), where z is a coordinate on the R-factor. It is not difficult to see that α := dz + θ satisfies α ∧ (dα)^{∧n} ≠ 0, and hence α is a contact form on P × R which defines a contact structure ξ := ker α ⊂ T(P × R).
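Explicitly, using dα = dθ and the fact that any (2n + 1)-form involving only P-directions vanishes (since dim P = 2n):

```latex
\[
  \alpha\wedge(d\alpha)^{\wedge n}
  = (dz+\theta)\wedge(d\theta)^{\wedge n}
  = dz\wedge(d\theta)^{\wedge n}
    + \underbrace{\theta\wedge(d\theta)^{\wedge n}}_{=\,0}
  = dz\wedge(d\theta)^{\wedge n} \;\neq\; 0 .
\]
```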
An n-dimensional submanifold Λ ⊂ P × R is called Legendrian given that T Λ ⊂ ξ, and a smooth 1-parameter family of Legendrian submanifolds is called a Legendrian isotopy.
The Reeb vector field R_α on P × R is uniquely determined by the equations i_{R_α}α = 1, i_{R_α}dα = 0, and is in this case given by R_α = ∂_z (a one-line verification appears after this paragraph). A non-trivial integral curve of R_α having endpoints on a Legendrian submanifold Λ is called a Reeb chord on Λ, the set of which will be denoted by Q(Λ). We define the length of a Reeb chord c ∈ Q(Λ) to be ℓ(c) := ∫_c dz > 0. In this case, Reeb chords are obviously in bijective correspondence with the double-points of the image of Λ under the canonical projection to P. We call a Legendrian submanifold chord-generic given that this projection is a generic immersion. In particular, a closed chord-generic Legendrian submanifold has a finite number of Reeb chords in the current setting. Observe that any Legendrian submanifold can be made chord-generic after an arbitrarily C^∞-small perturbation through Legendrian submanifolds.
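The defining equations are immediate to check for ∂_z, since dθ contains no dz-component:

```latex
\[
  i_{\partial_z}\alpha = i_{\partial_z}(dz+\theta) = 1,
  \qquad
  i_{\partial_z}\,d\alpha = i_{\partial_z}\,d\theta = 0 .
\]
```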
The symplectisation of the contactisation (P × R, α) is the exact symplectic manifold (R × P × R, d(e^t α)), where t denotes the standard coordinate on the first R-factor. An exact Lagrangian filling inside the symplectisation is a central object of this article. It is a special case of the following more general concept.
Definition 2.1. Given two Legendrian submanifolds Λ₋, Λ₊ ⊂ (P × R, α), we call a proper embedding L ⊂ (R × P × R, d(e^t α)) an exact Lagrangian cobordism from Λ₋ to Λ₊ given that there exists a number T > 0 for which:
• L ∩ {t ≤ −T} = (−∞, −T] × Λ₋ and L ∩ {t ≥ T} = [T, +∞) × Λ₊; and
• The pull-back of e^t(dz + θ) to L is exact and, moreover, admits a primitive which is globally constant on L ∩ {t ≤ −T} as well as on L ∩ {t ≥ T}.

Λ₊ is called the positive end of L, and Λ₋ is called the negative end of L. If Λ₋ = ∅, we call L an exact Lagrangian filling of Λ₊.
Observe that the property of admitting an exact Lagrangian filling is a Legendrian isotopy invariant, as shown in [5]. Indeed, a Legendrian isotopy from Λ to Λ′ induces an exact Lagrangian cobordism from Λ to Λ′. Furthermore, two exact Lagrangian cobordisms L_a, L_b ⊂ R × P × R from Λ⁻_a to Λ and from Λ to Λ⁺_b, respectively, can be concatenated to form an exact Lagrangian cobordism from Λ⁻_a to Λ⁺_b.
2.2. Floer homology with twisted coefficients. The Floer homology of an exact Lagrangian manifold L with itself can be defined using coefficients twisted by the fundamental group as first described by Sullivan in [53], and later by Damian in [10] as well as Abouzaid in [1]. This construction is analogous to the definition of Morse homology with twisted coefficients. We will rely on the formulation of Floer homology carried out in [53]. Due to the non-compact setting, we have to consider only compatible almost complex structures J on R × P × R which are of a particular form outside of a compact subset, see [13]. We start by fixing a compatible almost complex structure J_P on (P, dθ) which is cylindrical with respect to the standard coordinate s on the non-compact cylindrical end. The so-called cylindrical lift J̃_P of J_P is the compatible almost complex structure on (R × P × R, d(e^t α)) defined uniquely by the following properties:
• The canonical projection R × P × R → P is (J̃_P, J_P)-holomorphic; and
• The almost complex structure J̃_P is cylindrical, i.e. it is invariant under translations of the t-coordinate, and satisfies J̃_P(∂_t) = ∂_z, as well as J̃_P(ξ) = ξ.
The compatible almost complex structures J on the symplectisation R × P × R that we will be considering here will all be taken to coincide with a fixed cylindrical lift J̃_P outside of a compact subset. In this situation, the SFT compactness theorem [4], together with the monotonicity properties for the symplectic area of pseudoholomorphic curves [50], imply that Lagrangian intersection Floer homology can be defined as usual, and that invariance is satisfied for compactly supported Hamiltonian isotopies. To that end, we recall the following fact. Let L₀ ⊂ R × P × R be an exact Lagrangian filling, let L^s₁, s ∈ [0, 1], be a smooth family of exact Lagrangian fillings that is fixed outside of a compact subset, and let J^s be a smooth family of tame almost complex structures on R × P × R all of which coincide with the above almost complex structure J̃_P outside of a fixed compact subset. Further, we assume that all intersection points L₀ ∩ L^s₁ are contained in a compact subset.

Lemma 2.2 (Lemma 4.1 [13]). There exists a fixed compact subset K ⊂ R × P × R that contains any J^s-holomorphic Floer strip having a compact image inside R × P × R, boundary on L₀ ∪ L^s₁, and both punctures mapping to intersection points L₀ ∩ L^s₁.

The Floer complex can be defined in many different ways, but we will use the approach taken in [53]. In addition, see [13], [6] for the set-up of Floer homology in the same setting as considered here. For a pair of transversely intersecting exact Lagrangian fillings L₀, L₁ ⊂ R × P × R, the underlying module of the Floer complex will be given by the free R[π₁(L₀)]-module generated by the intersection points L₀ ∩ L₁, for a unital commutative ring R, where each generator has a well-defined grading modulo the Maslov number of L₀ ∪ L₁, given that we have fixed a choice of Maslov potential; see Section 2.2.1 below for more details.
The differential is defined roughly as follows. The coefficient ⟨∂(y), x⟩ is the signed count of rigid (i, J)-holomorphic Floer strips u : R × [0, 1] → R × P × R having boundary on L₀ ∪ L₁ and asymptotics to intersection points x and y as s → −∞ and +∞, respectively. Each such strip moreover contributes with the coefficient in π₁(L₀) ⊂ R[π₁(L₀)] obtained by completing the path u(s + i0) to a loop via the concatenation with fixed capping paths in L₀ that connect each intersection point in L₀ ∩ L₁ with the base point. Recall that we must take R = Z₂ in the above count unless both L₀, L₁ are spin. In the latter case, the signed count moreover depends on the choice of a spin structure on L₀ ∪ L₁.
2.2.1. The grading convention. We here restrict attention to the special case when L₁ and L₀ differ by a translation in the R-coordinate (i.e. by the Reeb flow). In this case, the following uniquely defined grading convention modulo the Maslov number of L₀ ∪ L₁ will be used.
Consider the Legendrian lift of the disconnected exact Lagrangian immersion L₀ ∪ L₁ to the contactisation (R × P × R) × R of the symplectisation. We choose a lift where the component L₁ has been translated sufficiently far in the negative Reeb direction (of the latter contactisation (R × P × R) × R) so that all Reeb chords start on L₁. Recall that there is a bijective correspondence between Reeb chords c_p on this lift and intersection points p ∈ L₀ ∩ L₁.
Fixing base points together with capping paths γ^{c_p}_0 ⊂ L₀ and γ^{c_p}_1 ⊂ L₁ connecting the endpoints of each Reeb chord c_p to the base points, we then prescribe the grading |p| := n + 1 − CZ(c_p) for the Conley-Zehnder index as defined in [19], where the choice of (disconnected) capping path consisting of the path γ^{c_p}_0 followed by the path −γ^{c_p}_1 has been used. We end by noting that, using the above grading conventions, the differential ∂ is of index −1.
2.3. The stable Morse number. We here briefly discuss the notion of the Morse and stable Morse number. We refer to [11] for more details concerning the closed case.

Definition 2.4. Let M be a compact manifold, possibly with non-empty boundary. A function F : M × R^k → R is called almost quadratic at infinity given that there is a non-degenerate quadratic form Q on R^k, and a Riemannian metric g_M on M, satisfying the properties that:
• ‖F − Q‖_{C¹} is bounded for the product metric g_M ⊕ g_std, where g_std denotes the Euclidean metric on R^k; and
• For the standard coordinate t of a collar neighbourhood of ∂M ⊂ M, we have dF(∂_t) > 0 in some neighbourhood of ∂M × R^k.

Definition 2.5. The stable Morse number stabMorse(M) is the minimal number of critical points of a Morse function F : M × R^k → R which is almost quadratic at infinity for some k ≥ 0.

Theorem 2.6 (Damian [11]). There exist closed manifolds M for which stabMorse(M) < Morse(M).

Finally, the following statement holds:

Lemma 2.7. For a compact manifold M, possibly with non-empty boundary, stabMorse(M × D^k) = stabMorse(M).

Proof. The inequality "≥" is immediate, since a function almost quadratic at infinity on M × R^N can be suitably stabilised to give a function almost quadratic at infinity on (M × D^k) × R^N. We continue with the inequality "≤". By the definition of a function F : (M × D^k) × R^N → R being almost quadratic at infinity, we can write it as Q + f for a quadratic form Q on R^N together with a function f with a uniform bound on its differential.
Consider the quadratic form Q₀(x) := ‖x‖² on R^k ∋ x together with a suitable bump function. By the assumptions on F, given that ε > 0 is sufficiently small, it can readily be seen that the numbers of critical points of F and of the resulting modification of F agree. The inequality now follows.
2.4. Simple homotopy theory. In the following we let (C_•, ∂) be a free and finitely generated chain complex over a group-ring Z[G] with a preferred graded basis. We also assume that the grading is taken in the integers Z. Observe that we allow generators in negative degrees. In this setting Whitehead defined the notion of a simple homotopy equivalence between such complexes in [54, Section 5]; also see Milnor's survey in [40]. Roughly speaking, two such complexes are simple homotopy equivalent if and only if they are related by stabilisation by trivial complexes, together with a simple isomorphism, i.e. an isomorphism for which the Whitehead torsion vanishes.
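As an elementary illustration of the stabilisation move (with the degree j chosen arbitrarily), adding the following trivial complex changes neither the homology nor the Whitehead torsion:

```latex
\[
  T_{(j)}\colon\quad
  \cdots \longrightarrow 0 \longrightarrow
  \mathbb{Z}[G] \xrightarrow{\;\mathrm{id}\;} \mathbb{Z}[G]
  \longrightarrow 0 \longrightarrow \cdots
\]
% concentrated in degrees j and j-1; hence any free finitely generated
% complex C and its stabilisation C (+) T_(j) are simple homotopy equivalent.
```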
Floer homology has been shown to preserve the simple homotopy type in certain situations, as first shown by Sullivan in [53]. These ideas were later successfully used in work by Suárez in [52]; also, see the related results in [7] by Charette. Observe that the setting considered here is similar to that in [52] in the sense that we also consider non-compact Lagrangian submanifolds. On the other hand, our setting is simpler, and we do not need the restrictions on the almost complex structure made there.
The following result due to Sullivan is central to us.

Theorem 2.8 (Sullivan [53]). The simple homotopy type of the Floer complex is invariant under compactly supported Hamiltonian isotopies (and under compactly supported deformations of the almost complex structure as above).

Proof. The proof follows from the analysis used to prove [53, Theorem 3.1]. Strictly speaking, the latter result only states that the Whitehead torsion is well-defined for an acyclic Floer complex. The main point of its proof, however, establishes an invariance proof of Floer homology based upon bifurcation analysis that leads to the sought statement.
In particular, [53, Corollary 3.14] concerns the invariance in the case when a handle-slide occurs, while [53, Lemma 3.15] concerns the case when a birth/death of an intersection point occurs. Both statements combine to show that the simple homotopy type of the complex is preserved under these moves. That these cases suffice follows from the techniques in [53, Section 3.3] (in particular, see Theorem 3.12 therein). Namely, there it is established that, after taking a suitable stabilisation, we can perturb the isotopy to one consisting of a sequence of handle-slides and birth/deaths of the required form.
We end with the following remark. As written, the proof in the aforementioned paper only deals with the case R = Z₂. However, taking [53, Remark 5.5] into account, together with the construction of coherent orientations in e.g. [20], the general case follows as well.
In order to deduce Corollary 1.2, we will need the following result, which was proven in [11] by Damian in the case of a closed manifold.

Proposition 2.9. The conclusion of [11, Theorem 2.2] also holds for a compact manifold M with non-empty boundary, where the stable Morse number is realised by a Morse function F : M × R^k → R which is almost quadratic at infinity for some k ≥ 0.
Proof. The case when M has boundary follows by the same proof as the case when M is closed. Roughly speaking, the proof consists of the following steps. First, we stabilise the function f by a non-degenerate quadratic function Q : R^k → R, thus obtaining a Morse function f + Q : M × R^k → R for some sufficiently large k > 0. This function is almost quadratic at infinity by construction. We may then realise the simple homotopy equivalence by a sequence of Morse theoretic handle-slide moves together with birth/death moves applied to this stabilised Morse function. Since these moves all may be realised by compactly supported modifications (a gradient flow line connecting two critical points is disjoint from a fixed neighbourhood of ∂M × R^k by assumption), we may assume that the Morse function is kept fixed in a neighbourhood of ∂M × R^k throughout the modification. See e.g. [11, Lemma 2.3]. The resulting function will hence also be almost quadratic at infinity in the sense of Definition 2.4.
2.5. Group theoretic background.
Here we remind the reader of some definitions and facts from group theory that will become useful later.
A group G is called an extension of a group Q by a group N, if N is a normal subgroup of G and the quotient group G/N is isomorphic to the group Q.
A group G is called solvable if it admits a subnormal series whose factor groups are all abelian, that is, if there are subgroups
{1} = G₀ ◁ G₁ ◁ · · · ◁ G_k = G
with each G_{i+1}/G_i abelian.

Let π denote a set of primes; then a Hall π-subgroup is a subgroup whose order is a product of primes in π, and whose index is not divisible by any primes in π. In [31], Hall proved the following theorem.
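A standard example: the symmetric group S₄ is solvable, as witnessed by the subnormal series below, in which every factor group is abelian (here V₄ denotes the Klein four-group of double transpositions).

```latex
\[
  \{e\} \;\lhd\; V_4 \;\lhd\; A_4 \;\lhd\; S_4,
  \qquad
  V_4/\{e\} \cong \mathbb{Z}_2\times\mathbb{Z}_2,\quad
  A_4/V_4 \cong \mathbb{Z}_3,\quad
  S_4/A_4 \cong \mathbb{Z}_2 .
\]
```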
Theorem 2.10 (Hall). In a finite solvable group G, any two Hall π-subgroups are conjugate for every set of primes π.
A group G is said to be perfect if it equals its own commutator subgroup [G, G]. A group is said to be superperfect when its first two homology groups are trivial, i.e. H₁(G; Z) = H₂(G; Z) = 0.
The property of being superperfect is stronger than that of being perfect, since perfect can be translated into H₁(G; Z) = 0.
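For example, the alternating group A₅ is perfect but not superperfect, since its Schur multiplier is non-trivial; its universal central extension is precisely the binary icosahedral group ⟨2, 3, 5⟩ appearing in Section 6.3:

```latex
\[
  H_1(A_5;\mathbb{Z}) = 0,\qquad
  H_2(A_5;\mathbb{Z}) \cong \mathbb{Z}_2,\qquad
  H_1(\langle 2,3,5\rangle;\mathbb{Z}) = H_2(\langle 2,3,5\rangle;\mathbb{Z}) = 0 .
\]
```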
The following fact follows from the classification of finite simple groups.

Theorem 2.11. Every finite simple group is generated by at most two elements, i.e. d(G) ≤ 2.

Finally, we recall the following realisation result due to Kervaire [35].

Theorem 2.12 (Kervaire). For every n ≥ 5, a group G is the fundamental group of a smooth n-dimensional homology sphere if and only if (i) G is finitely presented, (ii) H₁(G; Z) = 0, and (iii) H₂(G; Z) = 0.

We also observe that if G is a fundamental group of a smooth homology sphere M, then it satisfies conditions (i), (ii), (iii). Conditions (i) and (ii) are automatic, and condition (iii) follows from Hopf's theorem [34], which says that H₂(G; Z) ≃ H₂(M; Z)/ρ(π₂(M)), where ρ : π₂(M) → H₂(M; Z) is the Hurewicz homomorphism.
Given a finitely presented group G, let d(G) denote the minimal number of generators of G, and δ(G) the minimal number of generators of the augmentation ideal I_G of G as a Z[G]-module (i.e. the two-sided ideal of Z[G] generated by elements of the form g − e with g ∈ G and e being the group unit in G). It is not difficult to see that d(G) − δ(G) ≥ 0.
Observe that d(G) = δ(G) holds for a large class of groups. There are also examples of finitely presented groups where d(G) − δ(G) > 0, see [9].
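The simplest instance of the equality d(G) = δ(G) is a cyclic group G = ⟨g⟩, where both numbers equal one: every generator g^k − e of the augmentation ideal factors through g − e,

```latex
\[
  g^{k}-e \;=\; (g-e)\bigl(g^{k-1}+g^{k-2}+\cdots+g+e\bigr)\ \in\ (g-e)\,\mathbb{Z}[G],
\]
```

so that I_G is generated by the single element g − e as a Z[G]-module.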
3. The proof of Theorem 1.1 and its consequences
Using the Reeb flow of (P × R, α := dz + θ) we can displace every filling from itself inside the symplectisation (R × P × R, d(e^t α)) (this symplectic manifold is subcritical). We will exploit this displacement in order to create a Hamiltonian isotopy that does the following. We take two copies of the filling, suitably perturbed, so that the Floer complex becomes equal to the Morse homology complex for a Morse function on the filling. The goal is then to create a compactly supported Hamiltonian isotopy after which the intersection points are in bijective correspondence with the Reeb chords on the Legendrian end of the filling. The simple homotopy equivalence in Theorem 1.1 can now be seen to follow from Theorem 2.8, i.e. the bifurcation analysis proof of the invariance of Floer homology as performed in [53].
3.1. The main geometric construction. In the following we assume that we are given an exact (n + 1)-dimensional Lagrangian filling L_Λ ⊂ R × P × R of a closed Legendrian n-dimensional submanifold Λ ⊂ P × R in the symplectisation of a contactisation. For simplicity, we moreover assume that L_Λ ∩ {t ≥ −1} = [−1, +∞) × Λ is cylindrical. (This can always be achieved after a translation of the symplectisation coordinate.) Recall that the Hamiltonian flow φ^s_{e^t} : (R × P × R, d(e^t α)) → (R × P × R, d(e^t α)) induced by the autonomous Hamiltonian e^t coincides with the Reeb flow of the contact manifold, which in this case simply is a translation of the z-coordinate by s.
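This can be checked directly; with the sign convention i_{X_H}ω = −dH (the opposite convention replaces ∂_z by −∂_z), for H = e^t and ω = d(e^t α) = e^t dt ∧ α + e^t dα one computes

```latex
\[
  i_{\partial_z}\,\omega
  = e^t\bigl(dt(\partial_z)\,\alpha - \alpha(\partial_z)\,dt\bigr)
    + e^t\, i_{\partial_z}\,d\alpha
  = -\,e^t\,dt
  = -\,dH,
\]
```

using α(∂_z) = 1 and i_{∂_z} dα = 0, so the Hamiltonian vector field of e^t is indeed the Reeb vector field ∂_z.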
More generally, for any smooth function g : R → R, we observe that the flow Φ^s_{g∂_z} is a Hamiltonian flow (and φ^s_{e^t} = Φ^s_{∂_z}).

3.1.1. Constructing a small push-off. First, we consider the Hamiltonian push-off L_{Λ_ε} := φ^ε_{e^t}(L_Λ), ε > 0, of L_Λ, which hence is a translation of the z-coordinate by ε > 0. Observe that Λ_ε = φ^ε_{e^t}(Λ) is obtained by the time-ε Reeb flow applied to Λ; in particular, L_{Λ_ε} is an exact Lagrangian filling of Λ_ε. See Figure 1 for the case of a one-dimensional filling of a zero-dimensional Legendrian submanifold. Take a Weinstein neighbourhood of L_Λ, i.e. an extension of the Lagrangian embedding L_Λ ֒→ (R × P × R, d(e^t α)) to a symplectic embedding of a neighbourhood of the zero-section L_Λ ⊂ (T*L_Λ, −dλ_{L_Λ}). Using this identification, and assuming that we are given an ε > 0 that is chosen sufficiently small, we may identify L_{Λ_ε} ⊂ T*L_Λ with a section df for a function f : L_Λ → R which satisfies df(∂_t) > 0 outside of a compact subset. After a compactly supported Hamiltonian perturbation L′_{Λ_ε} of L_{Λ_ε} we may assume that the latter function is Morse. The following computation is standard.
Lemma 3.1. Using the grading convention in Section 2.2.1, it follows that the intersection point p_c ∈ L_Λ ∩ L′_{Λ_ε} ⊂ CF(L_Λ, L′_{Λ_ε}) corresponding to c ∈ Crit(f) has grading given by its Morse index, i.e. |p_c| = index_f(c).
3.1.2. Wrapping. We now choose g to be of the form g(t) = −1 for t ≤ −1, g′(t) > 0 for t ∈ (−1, 0), while g(t) = 0 for t ≥ 0. Given that we take 0 < ε < min_{c∈Q(Λ)} ℓ(c) sufficiently small and S ≫ 0 sufficiently large, it follows that the intersection points of Φ^S_{g∂_z}(L_Λ) and L_{Λ_ε} inside {−1 < t < 0} are transverse. Since g(t) ≤ 0 and g′(t) > 0 holds in the subset {−1 < t < 0}, the flow Φ^s_{g∂_z}(t, p, z) there has the effect of "wrapping" the Lagrangian L_Λ in the negative z-direction. In the same subset L_Λ and L_{Λ_ε} are cylindrical over Λ and Λ_ε, respectively, and every intersection point thus corresponds to a Reeb chord (i.e. an integral curve of ∂_z) starting on Λ_ε and ending on Λ. Note that the latter Reeb chords are in a natural bijective correspondence with the Reeb chords on Λ. For S ≫ 0 sufficiently large, we hence get an induced bijection between Q(Λ) and the intersection points created inside {−1 < t < 0}.

Let Λ₀, Λ₁ ⊂ (P × R, α) be small open subsets of the sheets of Λ containing the end point and starting point of the Reeb chord c, respectively. The sought lifts can now be constructed in the following manner. First, we take the product of Λ₀ ∪ Λ₁ ⊂ (P × R, dz + θ) with R ⊂ (T*R, −pdq), and in this way obtain the Legendrian submanifold R × (Λ₀ ∪ Λ₁) ⊂ (T*R × P × R = J¹R × P, dz − pdq + θ) of one dimension higher. Second, we deform the component R × Λ₁ by the addition of the one-jet of the function −Z + εq on R for Z ≫ 0 (the number ε > 0 here corresponds to the translation taking Λ to Λ_ε, while −Z ≪ 0 corresponds to a translation of the Legendrian lift of L_{Λ_ε} by the negative Reeb flow), giving rise to the Legendrian submanifold L₁. Third, we deform the component R × Λ₀ by the addition of the one-jet of the Morse function q² on R (this corresponds to the wrapping of L_Λ), giving rise to the Legendrian submanifold L₀. The local model produced has a unique Reeb chord c′ from L₁ to L₀ contained above q = ε/2, the latter being the unique and non-degenerate critical point of the function f(q) = q² − (−Z + εq).
The sought identity now follows.

Proposition 3.3. Given that ε > 0 is sufficiently small, after a generic and arbitrarily small compactly supported perturbation L′_{Λ_ε} of L_{Λ_ε}, there is an equality CF(L_Λ, L′_{Λ_ε}) = (C^{Morse}(f), ∂_f) of complexes with grading modulo the Maslov number of L_Λ, given that we use the grading convention specified in Section 2.2.1. Here f : L_Λ → R is a generic Morse function satisfying the properties that:
• All critical points are contained in the subset {t < 0}; and
• Its differential satisfies df(∂_t) > 0 for {t ≥ 0}.
Proof. As described in Lemma 3.1, there is a natural identification of the bases of the underlying graded vector spaces. The calculation of the differential is standard. It was first performed by Floer in [28], whose computation shows that the Floer homology of a C¹-small Hamiltonian push-off of an exact Lagrangian submanifold is equal to a Morse complex. The case with twisted coefficients as considered here was carried out in [10, Section 2.3] by Damian. Also, see the identification in [52, Proposition 2.4]. Recall that, when using the ring R = Z, extra care must be taken when choosing the spin structure in order to obtain the correct signs.
Using Proposition 3.3 together with Theorem 2.8, Theorem 1.1 is now a direct consequence. Together with Proposition 2.9, we then conclude Corollary 1.2.
4. The proof of Theorem 1.3
This result roughly follows the ideas above, albeit in a slightly different setting. We start by describing the setup.
4.1. Lagrangian fillings of Legendrian submanifolds inside P × S^{2k−1}. The exact Lagrangian submanifold that we will be considering here is of the form L × R^k ⊂ (P × C^k, dθ ⊕ dα₀), where α₀ := ½ Σᵢ (xᵢ dyᵢ − yᵢ dxᵢ) and dα₀ = ω₀ is the standard symplectic form. This Lagrangian submanifold can be considered as an exact Lagrangian filling of a Legendrian submanifold inside (P × S^{2k−1}, θ ⊕ α₀) in the following way, where we recall that α₀ restricts to the standard contact form on the sphere S^{2k−1} ⊂ C^k. Choose a Weinstein neighbourhood of L ⊂ (P, dθ) which symplectically identifies a neighbourhood of L with a neighbourhood of the zero-section of (T*L, dλ_L), where λ_L denotes the Liouville form. The exactness of L ⊂ (P, dθ) implies that λ_L = df + θ holds inside this neighbourhood for some smooth function f : T*L → R. Using a bump-function ϕ supported in a neighbourhood of L and replacing θ with the form θ + d(ϕf), we may thus assume that the primitive of the symplectic form vanishes along L (recall that L is embedded!). In other words, the non-compact Lagrangian submanifold L × R^k ⊂ (P × C^k, dθ ⊕ ω₀) is a cylinder over the Legendrian embedding L × S^{k−1} ⊂ (P × S^{2k−1}, θ ⊕ α₀) outside of a compact subset. We will call a (possibly time-dependent) Hamiltonian H_s : P × C^k → R homogeneous at infinity if it coincides with a function satisfying H(p, rz) = r²H(p, z) outside of a compact subset of P × C^k, where r ∈ R_{≥0}, p ∈ P, and z ∈ C^k. Observe that the image of L × R^k under the isotopy induced by a homogeneous Hamiltonian is still cylindrical over a Legendrian submanifold outside of a compact subset. The reason for this is that (t, z) → e^{t/2}z is the Liouville flow on (C^k \ {0}, dα₀). In other words, the latter symplectic manifold can be identified with the symplectisation of (S^{2k−1}, α₀), and a homogeneous Hamiltonian induces an isotopy which is the lift of a contact isotopy on the latter contact manifold.
Recall that Floer homology can again be defined for Lagrangian fillings that are cylindrical over Legendrian submanifolds in the above sense, given that we e.g. choose an almost complex structure which is cylindrical with respect to the convex end of the product Liouville manifold (P × C^k, dθ ⊕ dα₀). As usual, invariance of the Floer complex holds under compactly supported Hamiltonian perturbations.
4.2. Making the Hamiltonian homogeneous at infinity (the proof of Theorem 1.3).
Equip P × C^k with the product metric g_P ⊕ g_std. By the assumptions of Theorem 1.3, the Hamiltonian H_s : P × C^k → R satisfies H_s = f_s + Q, where f_s : P × C^k → R, s ∈ [0, 1], has a uniform bound on ‖f_s‖_{C¹}, and where Q : R^k → R is a non-degenerate quadratic form. In particular, the Hamiltonian vector field associated to H_s is of the form X_{H_s} = Y_s + i∇Q, where Y_s ∈ T(P × C^k) is uniformly bounded and where ∇Q denotes the gradient of Q with respect to the Euclidean metric.
Lemma 4.1. The intersection points (L × R^k) ∩ φ^s_{H_s}(L × R^k) ⊂ P × C^k for any s ∈ [0, 1] are all contained inside a fixed compact subset K ⊂ P × C^k, where this compact subset moreover may be taken to only depend on the norm max_{s∈[0,1]} ‖df_s‖_{C⁰}.
Proof. Fix a constant C > 0. Given that R ≫ 0 is sufficiently large, the Hamiltonian vector field Y_s + i∇Q may be supposed to satisfy ‖i∇Q‖ ≥ C in the complement of P × B^{2k}_R. In particular, the term i∇Q may be assumed to be considerably larger than the Hamiltonian vector field induced by f_s in the same complement. The intersection points of φ^s_{H_s}(L × R^k) with L × R^k, s ∈ [0, 1], can thus be assumed to be contained inside a compact subset K as in the assumption. The statement now follows.
Lemma 4.2. After deforming the Hamiltonian H_s : P × C^k → R outside of a compact subset, we may obtain a Hamiltonian G_s : P × C^k → R which is homogeneous at infinity and for which (L × R^k) ∩ φ¹_{G_s}(L × R^k) = (L × R^k) ∩ φ¹_{H_s}(L × R^k). We can moreover take G_s = g_s + Q for g_s : P × C^k → R compactly supported and Q equal to the above non-degenerate quadratic form.
Proof. The sought Hamiltonian will be taken to be of the form G_s := χ·f_s + Q, with the corresponding Hamiltonian vector field Y_s + i∇Q, for a smooth cut-off function χ : P × C^k → [0, 1] having compact support. It follows that this Hamiltonian is homogeneous at infinity. Observe that the vector field Y_s has a uniform C⁰-bound expressed in terms of ‖dχ‖_{C⁰}, ‖f_s‖_{C⁰}, and ‖df_s‖_{C⁰}. The required behaviour concerning the intersections can be achieved in the following way. Take a smooth cut-off function satisfying χ ≡ 1 in a sufficiently large subset, while satisfying the uniform bound ‖dχ‖_{C⁰} ≤ 1.
In particular, we require that χ ≡ 1 holds in the compact subset ∪_s φ^s_{H_s}((φ¹_{H_s})⁻¹(K)) ⊂ P × C^k, foliated by Hamiltonian trajectories, where K denotes the compact subset produced by Lemma 4.1. This is done in order to ensure that the latter Hamiltonian trajectories all are unaffected by the cut-off function χ.
After choosing the compact subset K even larger, we may further assume that φ^s_{χ·f_s+Q}((L × R^k) \ K) is contained in a subset where ‖i∇Q‖ ≥ C holds, for an arbitrary fixed constant C > 0. (In particular, the term i∇Q can again be assumed to be considerably larger than the Hamiltonian vector field induced by either f_s or χ·f_s in the complement of K.) The sought property of the intersection points follows from this.

Lemma 4.3. Let G_s : P × C^k → R be a Hamiltonian of the form g_s + Q, where g_s : P × C^k → R is compactly supported and Q is a non-degenerate quadratic form on R^k. For each ε > 0 sufficiently small, one can construct a Hamiltonian G̃_s : P × C^k → R, where
(1) G̃_s coincides with εG_{εs} outside of P × B^{2k}_R for some R ≫ 0 sufficiently large;
(2) The intersection points satisfy (L × R^k) ∩ φ¹_{G̃_s}(L × R^k) = (L × R^k) ∩ φ¹_{G_s}(L × R^k); and
(3) The two Lagrangian submanifolds φ¹_{G̃_s}(L × R^k) and φ^ε_{G_s+h_s}(L × R^k) are compactly supported Hamiltonian isotopic for any smooth and compactly supported Hamiltonian h_s : P × C^k → R.
Proof. For any small ε > 0 we choose a suitable smooth function ρ_ε : R_{≥0} → R_{≥0} satisfying ρ′_ε(t) > 0, ρ_ε(t) = t for all t ∈ [0, A], while ρ_ε(t) = √ε t for all t ≥ B, where B > A > 0 have been chosen sufficiently large. Using this function we then construct the Hamiltonian G̃_s by replacing the term Q(x) in G_s with Q((ρ_ε(‖x‖)/‖x‖)x). Observe that, since x → (ρ_ε(‖x‖)/‖x‖)x is a diffeomorphism of R^k fixing the origin, the critical points of Q((ρ_ε(‖x‖)/‖x‖)x) correspond bijectively to the critical points of Q. More precisely, the unique critical point is still the origin x = 0.
(1): This is immediate, given that A ≫ 0 was chosen sufficiently large.
(2): Again given that A ≫ 0 was chosen sufficiently large, we may assume the following. In the subset of P × C^k where G_s and G̃_s differ, these Hamiltonians are of the form Q(x) and Q((ρ_ε(‖x‖)/‖x‖)x), respectively. Moreover, we may assume that neither function has a critical point in this subset. The property now follows.
(3): Consider the image of L × R^k under the one-parameter family of Hamiltonian diffeomorphisms interpolating between φ¹_{G̃_s} and φ^ε_{G_s+h_s}; since the generating Hamiltonians agree outside of a compact subset, this family provides the required compactly supported Hamiltonian isotopy.

Lemma 4.4. For a suitable compactly supported cut-off function χ, the Morse complex of χ·f + Q on L × R^k can be identified (up to a grading shift by the index of Q) with the Morse complex of a Morse function f on L.

Proof. This can be seen by using the following standard technique. The function Q : L × R^k → R has a non-degenerate critical manifold L × {0} in the Bott sense. We proceed to construct a suitable Morse function of the form χ·f + Q that gives rise to the sought complex. First we compute ∇(χ·f + Q) = χ∇f + X + ∇Q, where supp X ⊂ supp dχ and also ‖X‖_{C⁰} ≤ ‖dχ‖_{C⁰}‖f‖_{C⁰} are satisfied. Choosing the cut-off function χ to satisfy ‖dχ‖_{C⁰} ≤ 1 together with supp dχ ⊂ L × (R^k \ B^k_R) for some sufficiently large R ≫ 0, the statement follows. Namely, in this case, all critical points of χ·f + Q are contained inside L × {0} ⊂ L × R^k and correspond to critical points of f. Moreover, we can assume that a gradient flow line which connects two critical points cannot leave L × {0} for the product metric.
We are now ready to prove Theorem 1.3.

Proof of Theorem 1.3. First, Lemma 4.2 replaces H_s by a Hamiltonian G_s = g_s + Q which is homogeneous at infinity and has the same intersection points, and G̃_s is produced by Lemma 4.3 applied to G_s. Finally, using part (2) of Lemma 4.3, we may assume that the intersection points (L × R^k) ∩ φ¹_{G̃_s}(L × R^k) coincide with (L × R^k) ∩ φ¹_{G_s}(L × R^k), where we recall that the latter intersection points are equal to (L × R^k) ∩ φ¹_{H_s}(L × R^k) by the construction of G_s. It is now simply a matter of applying Proposition 2.9 in order to obtain the sought inequality.
5. Adaptation of the estimates of Ono-Pajitnov
In this section, we describe lower bounds for the number of Reeb chords on Λ in terms of d(G) and δ(G), where G is a group which is an epimorphic image of π₁(L_Λ).
The estimates here are all obtained by direct applications of the results from [43] by Ono-Pajitnov concerning the number of generators of π₁-equivariant complexes enforced by the complexity of the fundamental group π₁. These results are related to invariants due to Sharko [49]. The algebro-topological results from [43] apply to complexes that are π₁-equivariantly homotopy equivalent to a complex induced by a π₁-equivariant CW-complex being the universal cover of a connected space having fundamental group equal to π₁ (after possibly reducing the grading Z → Z/µZ). In particular, they can be applied to our setting.
In fact, the latter article also provided important inspiration for our work here. The original application of the algebro-topological results therein was to establish lower bounds for the number of periodic orbits of a time-dependent Hamiltonian vector field on a closed and weakly monotone symplectic manifold under the assumption that all periodic orbits are non-degenerate.
In the following we will use the notation c_i := |Q_i(Λ)| for the number of Reeb chords in degree i modulo the Maslov number of L_Λ. We also write n := dim Λ.
5.1. Proof of Theorem 1.5. Let L_Λ be an exact Lagrangian filling of a Legendrian Λ ⊂ P × R such that L_Λ is spin and µ_{L_Λ} = 0. In this case, the first part of Theorem 1.1 holds. We now adapt the results of Ono-Pajitnov from [43, Section 5.1] to this setting.
Propositions 5.1 and 5.2 lead to Theorem 1.5.
5.2. Proof of Theorem 1.6. In the case when we do not assume that µ_{L_Λ} = 0 and that L_Λ is spin, the second part of Theorem 1.1 holds. Then, using the algebraic machinery described in [43, Section 4], we get the following estimates, which are reformulations of the estimates from [43, Section 5.2].
Using Propositions 5.3 and 5.4, we get Theorem 1.6.
6. Examples of exact Lagrangian fillings
Here we describe a general construction of Lagrangian fillings in the symplectisation of the standard contact vector space (R^{2n+1}, dz + θ₀), where θ₀ := −(y₁dx₁ + · · · + y_n dx_n). The goal is to construct examples of fillings diffeomorphic to N × R^{k+1}, where N is a closed manifold, to which our results can be applied in order to produce non-trivial lower bounds for the number of Reeb chords on the Legendrian end N × S^k ⊂ R^{2n+1}.

6.1. Fillings in the symplectisation of the standard contact space. Consider the standard filling L^{k+1}_0 ⊂ (R × R^{2k+1}, d(e^t α₀)) of the standard Legendrian unknot Λ^k_0 ⊂ (R^{2k+1}, α₀ := dz + θ₀) by a disc. In particular, the Maslov class of L^{k+1}_0 vanishes for topological reasons. The filling L^{k+1}_0 can either be constructed by hand, or by observing that the contact manifold J¹R^k = R^{2k+1} endowed with the standard contact structure is contactomorphic to the complement of a point (S^{2k+1} \ {pt}, ξ₀) of the standard tight contact sphere. Namely, under this identification, Λ^k_0 is identified with R^{k+1} ∩ S^{2k+1} ⊂ C^{k+1} ∩ S^{2k+1}, while L^{k+1}_0 can be identified with a compactly supported perturbation of R^{k+1} ⊂ C^{k+1} that misses the origin.
First, we observe that N × L^{k+1}_0 is an exact Lagrangian filling of N × Λ^k_0 ⊂ (J¹(N × R^k), dz + λ_N + θ₀), the latter Legendrian being diffeomorphic to N × S^k, where λ_N denotes the Liouville one-form on T*N. Observe that the Maslov class of N × L^{k+1}_0 also vanishes. Second, given that N embeds into R^{dim N + k} with trivial normal bundle, it follows that there exists a contact-form preserving embedding (J¹(N × R^k), dz + λ_N + θ₀) ֒→ (R^{2(dim N + k)+1}, dz + θ₀). For example, this embedding can be obtained starting from the canonical open exact symplectic embedding T*(N × R^k) ֒→ T*R^{dim N + k} induced by a choice of open embedding N × R^k ֒→ R^{dim N + k}, and then taking the canonical lift to the corresponding jet spaces.
Remark 6.1. We recall that the standard representative of Λ^k_0 ⊂ R^{2k+1} has a single transverse Reeb chord in degree k. Using this fact, we see that the Legendrian embedding N × Λ^k_0 ֒→ R^{2(dim N + k)+1} produced above has a Bott manifold of Reeb chords that, moreover, is diffeomorphic to N. A generic perturbation of this Legendrian embedding can be seen to produce precisely Morse(N) transverse Reeb chords.
6.2. Constructing exact Lagrangian fillings with a given fundamental group. For any finitely presented group G, the above method can be used to construct an exact Lagrangian filling with fundamental group equal to G.

Proposition 6.2. For a given finitely presented group G, there exists a closed n-dimensional manifold M_G such that π₁(M_G) = G and a Legendrian submanifold M_G × Λ^k_0 ⊂ (R^{2(n+k)+1}, dz + θ₀) which admits an exact Lagrangian filling diffeomorphic to M_G × R^{k+1} for any k ≫ 1 sufficiently large, where this Lagrangian filling moreover is spin and has vanishing Maslov number.
Proof. Recall that the strong Whitney embedding theorem implies that a stably parallelisable manifold M embeds into R^{2 dim M} with a stably trivial normal bundle. In particular, M embeds into R^{2 dim M + 1} with a trivial normal bundle. (See [36, Lemma 3.3].) The construction in Section 6.1 together with the following standard result thus proves the claim.

Lemma 6.3. Given a finitely presented group G, there exists a stably parallelisable closed manifold M_G for which π₁(M_G) = G.
Proof. Observe that there exists a finite cell complex X with π₁(X) = G, see [32, Proposition 1.26, Corollary 1.28]. Then we embed X into R^n for some n and thicken it to X^op, a compact codimension-0 submanifold of R^n, which is hence stably parallelisable. Then, we define M_G to be a double of X^op in R^{n+1}, i.e. we smoothen the corners of the boundary of X^op × [0, 1] ⊂ R^{n+1}, thus producing a closed submanifold of R^{n+1}. Since X^op is stably parallelisable, X^op × [0, 1] is stably parallelisable as well. Since M_G is the boundary of a stably parallelisable manifold, it is hence itself stably parallelisable.

6.3. Exact Lagrangian fillings diffeomorphic to stabilised homology spheres. In this section, we discuss (integral) homology spheres, as well as related exact Lagrangian cobordisms having interesting fundamental groups. Recall that any (integral) homology sphere is stably parallelisable by [39]. (The proof of this statement is a modification of the proof of the stable parallelisability of homotopy spheres, see [36].) Using this fact together with Proposition 6.2, we conclude the following: for a given homology sphere S, there exists a Legendrian S × Λ^k_0 admitting an exact Lagrangian filling diffeomorphic to S × R^{k+1} with fundamental group π₁(S × R^{k+1}) = π₁(S), given that k ≫ 1 is chosen sufficiently large.
6.3.1. The Poincaré homology sphere. The Poincaré homology 3-sphere (or the dodecahedral space of Poincaré) is a classical space that has received particular attention. Namely, it was the first example of a homology sphere which is not a sphere, and it also lies in a class of three-manifolds closely related to the Platonic solids.
The Poincaré homology sphere can be described as a Brieskorn 3-sphere. More precisely, consider the polynomial f : C³ → C given by f(z₁, z₂, z₃) = z₁² + z₂³ + z₃⁵ and the singular complex variety f⁻¹(0). Note that f⁻¹(0) is singular only at the origin, i.e. at z_i = 0 for i = 1, 2, 3. The Poincaré homology 3-sphere S³_P is defined to be S⁵ ∩ f⁻¹(0), where S⁵ ⊂ C³ is a small 5-sphere around the origin. There are many other interesting descriptions of the Poincaré homology sphere, see [37].
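The statement that f⁻¹(0) is singular only at the origin amounts to the gradient of f vanishing only there; a quick symbolic check (a sketch using the sympy library, with variable names chosen for illustration):

```python
import sympy as sp

z1, z2, z3 = sp.symbols("z1 z2 z3")
f = z1**2 + z2**3 + z3**5

# The singular locus of f^{-1}(0) is cut out by the vanishing of all
# partial derivatives, here (2*z1, 3*z2**2, 5*z3**4).
grad = [sp.diff(f, v) for v in (z1, z2, z3)]
print(sp.solve(grad, [z1, z2, z3], dict=True))  # [{z1: 0, z2: 0, z3: 0}]
```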
Note that π₁(S³_P) is the binary icosahedral group ⟨2, 3, 5⟩, which is the unique stem extension with base normal subgroup the cyclic group Z₂ and quotient group the alternating group A₅. We also observe that A₅ is the smallest non-abelian finite simple group.
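The perfectness of A₅ used below can be verified by brute force; the following sketch (pure Python, with permutations of {0, ..., 4} encoded as tuples) generates A₅ from a 3-cycle and a 5-cycle and checks that its commutators already generate the whole group:

```python
from itertools import product

def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def closure(gens):
    # Multiplicative closure; a finite subsemigroup of a group is a subgroup.
    group, frontier = set(gens), set(gens)
    while frontier:
        new = {compose(a, b) for a, b in product(group, frontier)} - group
        group |= new
        frontier = new
    return group

A5 = closure({(1, 2, 0, 3, 4),     # the 3-cycle (0 1 2)
              (1, 2, 3, 4, 0)})    # the 5-cycle (0 1 2 3 4)
print(len(A5))                     # 60

commutators = {compose(compose(a, b), compose(inverse(a), inverse(b)))
               for a in A5 for b in A5}
print(closure(commutators) == A5)  # True: A5 = [A5, A5], i.e. A5 is perfect
```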
The Poincaré sphere S³_P embeds into R⁵ by the above, and hence does so with a trivial normal bundle for topological reasons. Using the construction in Section 6.1, we can thus produce a Legendrian embedding S³_P × Λ^k_0 ⊂ R^{2(3+k)+1} which admits an exact Lagrangian filling diffeomorphic to S³_P × R^{k+1} for any k ≥ 2. Now we show that the bounds described in Corollary 1.2 and in Theorems 1.5 and 1.6 are stronger than the bound coming from the homological data of the filling. Part (ii) of Theorem 1.5 applied to the perfect group ⟨2, 3, 5⟩ implies the bound |Q(S³_P × S^k)| ≥ 6 for all representatives. In addition, note that Morse(S³_P) = 6. This follows from the fact that S³_P admits a Heegaard splitting of genus 2, see [47, Section 9D]. Moreover, combining Theorem 1.5 with [11, Proposition 2.1] we get that even stabMorse(S³_P) = 6 holds. Lemma 2.7 now shows that we have stabMorse(S³_P × D^{k+1}) = 6 as well. In conclusion, the bound we get using the method of Ono-Pajitnov actually equals the Morse number of S³_P × D^{k+1}. Finally, note that Seidel's isomorphism only predicts that |Q(S³_P × S^k)| ≥ 2.

6.3.2. High-dimensional homology spheres. We again consider the binary icosahedral group ⟨2, 3, 5⟩. Note that the binary icosahedral group is a finitely presented superperfect group, and by the work of Kervaire [35], see Theorem 2.12, there exists an n-dimensional smooth homology sphere that we call M_{2,3,5}, where n ≥ 5, with the property that π₁(M_{2,3,5}) ≃ ⟨2, 3, 5⟩. Using the same arguments as we described in Section 6.3.1, we produce a Legendrian manifold M_{2,3,5} × S^k for k ≫ 1 sufficiently large, for which the bound |Q(M_{2,3,5} × S^k)| ≥ 6 thus is satisfied for all representatives. Finally, observe that Seidel's isomorphism again predicts only that |Q(M_{2,3,5} × S^k)| ≥ 2.

6.4. Example. Let F_q be a finite field with q elements, where q > 2 is a prime number. We define a group G_m := F^m_q ⋊_ϕ F*_q, where ϕ : F*_q → Aut(F^m_q) acts on the additive group F^m_q by ϕ(f)((f₁, . . . , f_m)) := (ff₁, . . . , ff_m), f ∈ F*_q, f_i ∈ F_q. It is easy to see that G_m is a solvable group. This follows from the existence of the following subnormal series
{1} < F_q < · · · < F^{m−1}_q < F^m_q < G_m,
where G_m/F^m_q ≃ F*_q and F^i_q/F^{i−1}_q ≃ F_q. Then we observe that from Formula (6.1) it follows that G_m/[G_m, G_m] ≃ F*_q is a non-trivial cyclic group. Therefore, d(G_m/[G_m, G_m]) = 1.
Finally, we prove that d(G_m) = m + 1. Since d(F^m_q) = m and d(F*_q) = 1, we get that d(G_m) ≤ m + 1.
We first show that d(G_m) ≥ m. Assume that G_m has a generating set S_{G_m} = {(v_i, g_i) | 1 ≤ i ≤ k} with v_i ∈ F^m_q, g_i ∈ F*_q and k < m. Then note that every element in the group generated by S_{G_m} has the form (Σ_{1≤i≤k} a_i v_i, h), where h ∈ F*_q and a_i ∈ F_q. This leads to a contradiction with the fact that d(F^m_q) = m. Then we take a set of generators S_{G_m} and (v, g) ∈ S_{G_m} with the property that g is a generator of F*_q (F*_q is a cyclic group). Such an element definitely exists, since if all the elements of S_{G_m} are of the form (w, h), where h is not a generator of F*_q, then S_{G_m} is not a generating set of G_m. Again, F*_q is a cyclic group of order q − 1, and the order of g, which we denote by ord(g), is coprime to |F_q|. This implies that ord((v, g)) = ord(g) is coprime to |F_q|, and hence none of the primes which divide ord((v, g)) will divide [G_m : ⟨(v, g)⟩] = |F^m_q|. Let π be the set of primes which divide ord(g). Then ⟨(v, g)⟩ and ⟨(0, g)⟩ are two Hall π-subgroups, and hence by Theorem 2.10 they are conjugate by some element x ∈ G_m. This implies that S_{G_m}, after conjugation, contains an element x(v, g)x⁻¹ = (0, g̃), where g̃ is a generator of F*_q. We also mention that it is possible to find x explicitly without relying on the theory of Hall π-subgroups. Then, already knowing that d(G_m) ≥ m, we can apply the previous argument and see that, in fact, d(G_m) ≥ m + 1. Together with the fact that d(G_m) ≤ m + 1, we get that d(G_m) = m + 1.
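The equality d(G_m) = m + 1 can also be confirmed by exhaustive search for small parameters; the following sketch (for the illustrative choice q = 3, m = 2, so |G_m| = 18) finds the minimal size of a generating set by closing subsets under the semidirect-product multiplication:

```python
from itertools import combinations, product

q, m = 3, 2                                        # illustrative small case
units = list(range(1, q))                          # F_q^* (q prime)
vectors = list(product(range(q), repeat=m))        # the additive group F_q^m
G = [(v, g) for v in vectors for g in units]       # |G| = q^m (q - 1) = 18

def mul(a, b):
    # Semidirect product: (v, g)(w, h) = (v + g*w, g*h)
    (v, g), (w, h) = a, b
    return (tuple((vi + g * wi) % q for vi, wi in zip(v, w)), (g * h) % q)

def generates(gens):
    span, frontier = set(gens), set(gens)
    while frontier:
        new = {mul(a, b) for a in span for b in frontier} - span
        span |= new
        frontier = new
    return len(span) == len(G)

d = next(k for k in range(1, m + 2)
         if any(generates(S) for S in combinations(G, k)))
print(d)  # 3 = m + 1
```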
Using Proposition 6.2, we construct an exact Lagrangian filling M_{G_m} × L^{k+1}_0 of a Legendrian M_{G_m} × Λ^k_0 inside the standard contact vector space. Theorem 1.5 then provides a lower bound for the number of Reeb chords, expressed in terms of d(G_m) = m + 1, that is satisfied for all representatives.
On the other hand, the bound given by Seidel's isomorphism is expressed in terms of the homological data of the filling alone. Finally, note that the difference between the previous two bounds gets arbitrarily large as m → ∞.
"year": 2015,
"sha1": "cf5c53bb7ecd80031d2357ed2ad3a59f5a84b292",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1510.08838",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "cf5c53bb7ecd80031d2357ed2ad3a59f5a84b292",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Effect of drift, selection and recombination on the equilibrium frequency of deleterious mutations
We study the stationary state of a population evolving under the action of random genetic drift, selection and recombination in which both deleterious and reverse beneficial mutations can occur. We find that the equilibrium fraction of deleterious mutations decreases as the population size is increased. We calculate exactly the steady state frequency in a nonrecombining population when population size is infinite and for a neutral finite population, and obtain bounds on the fraction of deleterious mutations. We also find that for small and very large populations, the number of deleterious mutations depends weakly on recombination, but for moderately large populations, recombination alleviates the effect of deleterious mutations. An analytical argument shows that recombination decreases disadvantageous mutations appreciably when beneficial mutations are rare as is the case in adapting microbial populations, whereas it has a moderate effect on codon bias where the mutation rates between the preferred and unpreferred codons are comparable.
Introduction
Synonymous codons are those nucleotide triplets that code for the same amino acid. For example, both AAA and AAG code for lysine, while six different codons encode the amino acid leucine. Although the synonymous codons represent the same amino acid, they do not occur in equal frequencies (Hershberg and Petrov, 2008; Plotkin and Kudla, 2011). To quantify the extent of codon bias, a simple population genetic model with selection, reversible mutation and random genetic drift was introduced (Li, 1987; Bulmer, 1991). In a gene coding for a two-fold degenerate amino acid (such as lysine), while selection favors the preferred codon, reversible mutations between preferred and unpreferred codons and random genetic drift can maintain the unpreferred one (Li, 1987; Bulmer, 1991). Assuming that the sites in the sequence evolve independently, a simple result for the equilibrium frequency of unpreferred codons has been obtained (Li, 1987; Bulmer, 1991), see (11) below. However it is known that the evolutionary dynamics at a genetic locus are affected by other loci (Hill and Robertson, 1966; Comeron et al., 2008; Charlesworth, 2012). Therefore a proper theory for codon bias must account for the interference between sequence loci, and some understanding of the multilocus model has been obtained using numerical simulations (Comeron et al., 1999; McVean and Charlesworth, 2000).
The selection-reversible mutation-drift model (Li, 1987; Bulmer, 1991) described above has also been proposed as a possible mechanism to stop the degeneration of asexual populations (Lande, 1998; Whitlock, 2000; Goyal et al., 2012). In a finite non-recombining population, if rare beneficial mutations are completely ignored, deleterious mutations accumulate irreversibly due to stochastic fluctuations by a process known as Muller's ratchet (Muller, 1964). However, beneficial mutations are not as rare as previously thought, and experimental data on E. coli and yeast suggest that as much as five percent of all mutations can be beneficial (Sniegowski and Gerrish, 2010). Fitness loss due to Muller's ratchet (Chao, 1990; Howe and Denver, 2008) and its recovery due to beneficial mutations (Estes and Lynch, 2003; Silander et al., 2007; Howe and Denver, 2008) have also been experimentally observed. When rare beneficial mutations are allowed to occur in a finite asexual population, the population distribution reaches an equilibrium in which the effect of deleterious mutations is compensated by selection and beneficial mutations. The mutation load in this situation has been calculated for a single locus problem (Kimura et al., 1963; Lande, 1998), and can be related to the result in (11). In a recent work (Goyal et al., 2012), the stochastic dynamics of an asexual population with an infinitely long sequence, in which both beneficial and deleterious mutations occur at a constant rate, were studied. The authors argue that stationarity is achieved when the rate at which the deleterious mutation at the edge (or 'nose') of the population distribution is fixed equals the establishment rate of a beneficial mutation due to reverse mutation. A similar argument has been used previously by Bulmer (1991), but in a single locus setting, to arrive at the equilibrium fraction of deleterious mutations given in (11). In recent years, some analytical understanding of the rate at which an asexual population declines in fitness (Stephan et al., 1993; Gordo and Charlesworth, 2000; Jain, 2008; Rouzine et al., 2008) and adapts (Gerrish and Lenski, 1998; Wilke, 2004; Rouzine et al., 2008; Desai and Fisher, 2007; Park et al., 2010) has become available in multilocus models. Using these results and the rate balancing argument described above, Goyal et al. (2012) found analytical expressions for the amount of beneficial mutations required to achieve an equilibrium. However, the implications of their results for the codon usage problem were not discussed.
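Equation (11) itself appears later in the text; for orientation only, the following sketch evaluates the commonly quoted Li-Bulmer single-locus equilibrium. The haploid form with scaled selection 2Ns used here is an assumption on our part, not taken from this text, and prefactor conventions differ between references.

```python
import numpy as np

def preferred_codon_fraction(N, s, u, v):
    """Expected equilibrium frequency of the preferred codon under selection
    s, mutation u (preferred -> unpreferred), back mutation v and drift in a
    haploid population of size N.  For s = 0 this reduces to the mutation
    balance v / (u + v); for 2*N*s >> 1 it approaches one."""
    return 1.0 / (1.0 + (u / v) * np.exp(-2 * N * s))

for N in (1e2, 1e4, 1e6):
    print(N, preferred_codon_fraction(N, s=1e-5, u=1e-8, v=1e-8))
```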
In this article, we study the multilocus selection-reversible mutation-drift model which is described in Sec. 2, and obtain results for the number of disadvantageous mutations at equilibrium using the analytical argument of Goyal et al. (2012) and numerical simulations. As in previous works on the codon bias problem (Li, 1987; Comeron et al., 1999; McVean and Charlesworth, 2000), we work with a sequence of finite length, and assume that the fitness is non-epistatic and depends only on the number of deleterious mutations in a sequence (fitness class). If the mutation probability per site is small, the total probability of a beneficial (deleterious) mutation increases (decreases) linearly with the fitness class. Goyal et al. (2012) considered the same fitness function as used here, but assumed the sequence to be infinitely long and the rate at which a mutation occurs to be independent of the fitness class. As a result of their latter assumption, for a given population size and selection coefficient, the population reaches a stationary state only at a critical value of the mutation rates, which was calculated using an analytical argument. In contrast, for the model considered here, an equilibrium state is obtained for an arbitrary set of population genetic parameters, and we are interested in finding how many deleterious mutations the population carries in the stationary state.
For infinitely large populations, the equilibrium frequency of preferred codons is known for single-site models (Li, 1987; Bulmer, 1991). Goyal et al. (2012) obtained an approximate solution of their deterministic equations for the population fraction in the steady state, but their solution does not preserve the total population fraction and can become negative. In Sec. 3, we exactly solve the dynamics as well as the steady state of the deterministic model with fitness-dependent mutation rates. The stochastic model with finite population is studied in Sec. 4 for both weak and strong selection, and we mainly focus on the dependence of the equilibrium fraction of deleterious mutations on the population size. We find that for large populations, the minimum fraction of deleterious mutations decreases to zero exponentially fast with population size, but for smaller populations, it approaches a constant logarithmically slowly. Since the synonymous sites are generally found to be under weak selection (Hershberg and Petrov, 2008), our latter result may help in explaining the observation that the codon bias is similar for populations with very different sizes (Powell and Moriyama, 1997; McVean and Charlesworth, 2000). In Sec. 5, we also consider a model in which not all sites in the sequence can undergo reverse mutations, and deleterious mutations accumulate at the sites with irreversible mutations. Such a model has been used recently to understand the rapid degeneration of the neo-Y chromosome due to Muller's ratchet and background selection (Charlesworth et al., 1993) arising from nonsynonymous coding sites at which reversible changes can occur (Kaiser and Charlesworth, 2010). Background selection is a type of Hill-Robertson effect (Charlesworth, 2012) and is known to increase the rate at which Muller's ratchet clicks. If the background sites stay at mutation-selection equilibrium, their effect on the evolutionary dynamics at other loci may be understood as a reduction in the effective population size. As a result, Muller's ratchet with background selection clicks at a faster rate than that without it (Gordo and Charlesworth, 2001; Söderberg and Berg, 2007; Kaiser and Charlesworth, 2010). Using the results for the model in which all sites can have reversible mutations, we calculate the effective population size and find that for a wide range of population sizes, the effective population size and hence the click time of the ratchet remain roughly constant. Finally, we close the article with a discussion in Sec. 6.
Model
We consider a haploid population of size N consisting of biallelic sequences of finite length L, where an allele is represented by either a zero (wild type) or one (deleterious mutant). A deleterious mutation at a sequence locus occurs with a probability µ and a reverse beneficial mutation with probability ν < µ. We assume that each site contributes to the sequence fitness in a multiplicative fashion and the fitness of a sequence depends only on the number of deleterious mutations in the sequence. Thus the fitness of a sequence with j deleterious mutations is given by w(j) = (1 − s)^j, where 0 < s < 1 is the selection coefficient. The population is evolved using the discrete time Wright-Fisher process in which an individual is chosen to replicate depending on its fitness. Once it reproduces, the offspring may undergo mutations with mutation rates that depend on the present state of the sequence. In a sequence with j deleterious mutations, as a beneficial mutation can happen at any one of the j sites, the rate of beneficial mutation is jν. Similarly, a deleterious mutation can occur at the rest of the L − j sites and therefore the deleterious mutation rate is given by (L − j)µ. To find the number of beneficial (b) and deleterious (d) mutations acquired by an individual, random variables were drawn from Poisson distributions with means jν and (L − j)µ respectively. The total number of deleterious mutations in the offspring is then given by j′ = j + d − b. If j′ turns out to be greater than L or less than zero, we ignore that individual and repeat the process until N individuals in the next generation are obtained. We performed numerical simulations for various initial fitnesses and observed that a unique and nontrivial steady state is obtained. In each run, we ran the Wright-Fisher process for about 40,000 generations and ensured that the stationary state is reached. In the equilibrium state of each run, we measured the minimum, maximum and total number of deleterious mutations present in the population and averaged them over about 10,000 generations. The data were also averaged over 100 independent stochastic runs.
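A minimal sketch of this simulation scheme follows; the function name, parameter values, seed and run length are illustrative, while the full study used the longer runs and averaging described above.

```python
import numpy as np

rng = np.random.default_rng(1)

def wright_fisher(N=1000, L=100, mu=1e-3, nu=1e-4, s=0.01, T=5000):
    """Track the number of deleterious mutations j carried by each of N
    haploid individuals under selection, reversible mutation and drift."""
    j = np.zeros(N, dtype=int)                # start from the fittest class
    for _ in range(T):
        w = (1.0 - s) ** j                    # multiplicative fitness
        parents = rng.choice(N, size=N, p=w / w.sum())
        jp = j[parents]
        d = rng.poisson(mu * (L - jp))        # deleterious hits on wild-type sites
        b = rng.poisson(nu * jp)              # beneficial (back) mutations
        jn = jp + d - b
        # redraw any offspring that leaves the allowed range 0..L
        bad = (jn < 0) | (jn > L)
        while bad.any():
            p2 = rng.choice(N, size=bad.sum(), p=w / w.sum())
            jp2 = j[p2]
            jn[bad] = jp2 + rng.poisson(mu * (L - jp2)) - rng.poisson(nu * jp2)
            bad = (jn < 0) | (jn > L)
        j = jn
    return j

j = wright_fisher()
print(j.min(), j.mean(), j.max())  # minimum, mean and maximum class occupied
```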
If the population is infinitely large, the dynamics and equilibrium state of the population fraction can be determined using a rate equation. For later discussion, it is useful to consider the fraction X_J(j, t) in the jth class at time t, when the minimum number of deleterious mutations at t = 0 is J. In continuous time, we then have

∂X_J(j, t)/∂t = [w(j) − w̄(t)] X_J(j, t) − [(L − j)μ + jν] X_J(j, t) + (L − j + 1)μ X_J(j − 1, t) + (j + 1)ν X_J(j + 1, t) ,  (1)

where w̄(t) = Σ_j w(j) X_J(j, t) is the average fitness and j = J, ..., L. In the above equation, the first term on the right hand side (RHS) represents the contribution to the change in X_J(j, t) due to reproduction, and the second term gives the loss in the population fraction in the jth class due to mutations. The last two terms are the gain terms due to deleterious and beneficial mutations respectively. For L = 1, the above equation reduces to equation (1) of Bulmer (1991).
Exact solution of the multilocus deterministic model
For an infinitely large population, in the limit of an infinitely long sequence and in the absence of beneficial mutations, the dynamics of the population fraction have been solved exactly in discrete (Maia et al., 2003) and continuous time (Etheridge et al., 2009). In the presence of beneficial mutations, for the special case of equal forward and backward mutation probabilities, the average population fraction in the stationary state has been found (Woodcock and Higgs, 1996). We show below that both the dynamics and the steady state of the deterministic model defined by (1) are exactly solvable.
Dynamics
Equation (1) is nonlinear in the population fraction due to the first term on the RHS. This nonlinearity can be eliminated by a change of variables from X_J(j, t) to an unnormalised population variable Z_J(j, t), which is defined as (Jain and Krug, 2007a; Jain and Seetharaman, 2011)

Z_J(j, t) = X_J(j, t) exp[ ∫_0^t dt′ w̄(t′) ] .

Then the unnormalised population fraction obeys the following linear differential equation:

∂Z_J(j, t)/∂t = [w(j) − (L − j)μ − jν] Z_J(j, t) + (L − j + 1)μ Z_J(j − 1, t) + (j + 1)ν Z_J(j + 1, t) ,  (3)

with boundary conditions Z_J(J − 1, t) = Z_J(L + 1, t) = 0 at all times. The RHS of (3) is a three-term recursion relation (in j) with variable coefficients, which is usually not easy to solve. Inspired by the results of Woodcock and Higgs (1996), we assume that the population fraction Z_J(j, t) is of the following form

Z_J(j, t) = C(L − J, j − J) r_1^{j−J}(t) r_2^{L−j}(t) ,

where C(n, k) denotes the binomial coefficient and r_1, r_2 are calculated in the Appendix. The normalised fraction X_J(j, t) is then given by (Jain and Krug, 2007a; Jain and Seetharaman, 2011)

X_J(j, t) = C(L − J, j − J) r^{j−J} (1 − r)^{L−j} ,  (7)

where r = r_1/(r_1 + r_2) lies between zero and one. It should be noted that the above form for the population fraction of a class is obtained if it is assumed that each locus in the sequence contributes independently to the population fraction of a sequence.
Stationary state
In the steady state, the population fraction can be found by taking the limit t → ∞ in the expressions for r_1(t), r_2(t) obtained in the Appendix. Using the fact that the eigenvalue λ_− given by (28) is negative, from (27) we find that r approaches the smaller root of sr² − (s + μ + ν)r + μ = 0, namely

r → r* = [ (s + μ + ν) − sqrt( (s + μ + ν)² − 4μs ) ] / (2s) ,

which can be inserted in (7) to find the steady state population fraction.
If the reverse mutation probability ν = 0, a transition occurs at μ_c = s (Wiehe, 1997), since for μ < s we have

X^{(0)}_J(j) = C(L − J, j − J) (μ/s)^{j−J} (1 − μ/s)^{L−j} ,  (9)

while for μ > s, the fraction X^{(0)}_J(j) is concentrated in the most loaded class. Because the RHS of (7) is a binomial distribution, the following cases may be considered for nonzero ν:

1. If the parameters μ, ν, s are kept fixed but the sequence length is increased, the population fraction is a Gaussian centred about rL (Feller, 2000).

2. If the deleterious mutation rate per genome U_d = μL is held fixed while μ → 0, L → ∞, the ratio r ≈ μ/(s + ν) approaches zero for finite ν and s. In this limit, the population fraction is a Poisson distribution given by (Feller, 2000)

X_J(j) = e^{−λ} λ^{j−J} / (j − J)! ,  λ = [U_d/(s + ν)] (1 − J/L) .  (10)

The above solution has also been obtained recently by Pfaffelhuber et al. (2012) for J = 0.

3. We next consider the limit in which both μ, ν → 0 and L → ∞ such that the products U_d = μL and U_b = νL remain finite. In this case, taking ν → 0 in (10), we immediately find the population fraction to be a Poisson distribution with mean U_d(1 − J/L)/s, which is independent of the beneficial mutation rate. To understand this rather surprising result, we first note that when beneficial mutations are absent, due to (9), the mean number of deleterious mutations does not increase with L, and therefore the fraction of deleterious mutations goes to zero as the sequence length increases. When beneficial mutations are present, the number of advantageous mutations that can occur is given by jν = (j/L) U_b, which also approaches zero with increasing L, and thus the population remains unaffected by beneficial mutations. This result is in contrast with case 2, where ν, and hence jν, remain finite as the sequence length increases.
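As a quick numerical check of case 2 (our sketch, using the small-rate approximation r ≈ μ/(s + ν) quoted above), the binomial steady state (7) indeed approaches the Poisson form (10) as L grows at fixed U_d:

```python
import numpy as np
from scipy.stats import binom, poisson

s, nu, J = 0.01, 1e-5, 0
for L in [100, 1000, 10000]:
    mu = 0.1 / L                              # U_d = mu * L = 0.1 held fixed
    r = mu / (s + nu)                         # binomial parameter of Eq. (7)
    lam = (0.1 / (s + nu)) * (1 - J / L)      # Poisson mean of Eq. (10)
    j = np.arange(0, 40)
    gap = np.abs(binom.pmf(j, L - J, r) - poisson.pmf(j, lam)).max()
    print(L, gap)                             # the gap shrinks with L
```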
Our exact solution (7) and the limiting solutions derived from it are positive throughout. In contrast, assuming that the mutation rates do not depend on the fitness class, Goyal et al. (2012) obtained a solution for the equilibrium fraction which can be negative in some parameter range, and interpreted their result as a lack of a true stationary state in the deterministic theory.
Equilibrium fraction of deleterious mutations in finite population
We now consider the three cases discussed above when the population size is finite. The limit in which μ, ν, s are kept fixed but the sequence length is increased has been studied in previous works to gauge the effect of the number of loci (and hence interference) on the fraction of preferred codons, and it was found that for a given Ns, the average frequency of preferred codons decreases with increasing sequence length (Li, 1987; Comeron et al., 1999; McVean and Charlesworth, 2000). Our simulation data (not shown) are also consistent with this observation, and we note that while the average fraction of deleterious mutations in an infinitely large population is a constant in L given by r (refer to the last section), it increases with the sequence length for finite populations. In the case when U_d and ν are kept finite and the sequence length is increased, the fraction of deleterious mutations in the high fitness 'nose' of the population distribution approaches zero, simply because with increasing L, the deleterious mutation probability decreases while the beneficial one remains the same. Thus a finite fraction of deleterious mutations is not obtained in either of the cases discussed above.
In the rest of the article, we consider the biologically relevant limit in which the mutation rates per genome, U_b and U_d, remain finite as the number of loci in the sequence is increased (Drake et al., 1998). Unlike for an infinitely large population, the steady state properties of a finite population depend on the beneficial mutation rate U_b, because a finite population can accumulate a finite fraction of deleterious mutations due to Muller's ratchet and become sensitive to beneficial mutations. To find where the population equilibrates in the presence of forward and backward mutations, one can match the degeneration rate at which the fitness class at an edge of the population distribution is lost due to deleterious mutations and the rate at which it is regenerated by back mutations (Goyal et al., 2012). Assuming that the sites evolve independently, the degeneration and regeneration rates are given as Nμ(1 − j̄_1)π(−s) and Nν j̄_1 π(s) respectively (Bulmer, 1991; Lande, 1998), where j̄_1 is the frequency of deleterious mutations in the single locus problem and π(s) = (1 − e^{−2s})/(1 − e^{−2Ns}) is the fixation probability of a rare mutant with selection coefficient s (Kimura, 1962). On matching these rates, one obtains the frequency of deleterious mutations to be (Bulmer, 1991)

j̄_1 = μ / (μ + ν e^{2Ns}) ,  (11)

which decreases from μ/(μ + ν) to zero exponentially as Ns is increased.
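The single-locus rate-matching argument is easy to evaluate directly. A short sketch (ours) matches the two rates with Kimura's fixation probability and compares the result against the closed form (11):

```python
import numpy as np

def pi_fix(s, N):
    """Fixation probability of a rare mutant with selection coefficient s."""
    return (1 - np.exp(-2 * s)) / (1 - np.exp(-2 * N * s))

def j1_rate_matching(mu, nu, s, N):
    """Solve N*mu*(1 - j)*pi(-s) = N*nu*j*pi(s) for the frequency j."""
    ratio = mu * pi_fix(-s, N) / (nu * pi_fix(s, N))
    return ratio / (1 + ratio)

mu, nu, s = 1e-3, 1e-4, 0.01
for N in [10, 100, 500]:
    print(N, j1_rate_matching(mu, nu, s, N),
          mu / (mu + nu * np.exp(2 * N * s)))   # Eq. (11)
```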
The above expression is expected to hold for weak selection (Kimura, 1962; Lande, 1998), but numerical simulations suggest that it is valid for a wide range of 2Ns provided 2Nμ, 2Nν < 1 (Kondrashov, 1995). A somewhat more general result for j̄_1 has been obtained using Wright's result for the distribution of the equilibrium frequency in a single locus model (Wright, 1931); the resulting expression (12) is written in terms of the confluent hypergeometric function 1F1(a, b, z) (Kimura et al., 1963). For small Ns, it yields (11) above (Li, 1987; Kondrashov, 1995), while for large Ns, it approaches 2μ/s (Kimura et al., 1963). In Fig. 1, the results of our numerical simulations for the frequency q_1 = 1 − j̄_1 of advantageous mutations, using the model described in Sec. 2 with sequence length one, are compared with the above expressions. We find that for Ns ≪ 1, the frequency q_1 is well approximated by (11) and (12), but for strong selection, (11) fails as expected. Equation (12) agrees reasonably well over a range of Ns, but for very large populations it predicts j̄_1 to be twice the value seen in the simulations. However, for infinitely large populations, using the deterministic theory discussed in the last section, we find that for L = 1 the fraction j̄_1 equals r ≈ μ/(s + ν), in agreement with the numerical data in Fig. 1.
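Equation (12) can be evaluated with standard special-function libraries. In the sketch below (ours), we use one common parameterisation of the mean of Wright's stationary density that reproduces the two quoted limits, namely (11) for small Ns and 2μ/s for large Ns; the exact factors of N in the hypergeometric arguments depend on ploidy conventions, so this form should be treated as an assumption:

```python
import numpy as np
from scipy.special import hyp1f1

def j1_wright(mu, nu, s, N):
    """Mean deleterious frequency from Wright's stationary density
    (assumed parameterisation; see the caveat in the text above)."""
    a, b = 4 * N * mu, 4 * N * (mu + nu)
    return (mu / (mu + nu)) * hyp1f1(a + 1, b + 1, -2 * N * s) \
                            / hyp1f1(a, b, -2 * N * s)

mu, nu, s = 1e-3, 1e-4, 0.01
for N in [10, 100, 1000]:
    print(N, j1_wright(mu, nu, s, N), 2 * mu / s)  # approaches 2*mu/s
```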
For the multilocus model, we will find the fraction of deleterious mutations in the 'nose' of the population distribution by matching the degeneration and regeneration rates, the appropriate expressions for which are discussed in the following subsections. As the above argument applies to the 'nose' of the population distribution, it does not allow us to find the average fraction j̄ of deleterious mutations. However, as noted by Li (1987), the average population fraction is distributed over a narrow range of fitness classes, and therefore we expect j̄ to be close to the fitness classes at the edge; see below for further discussion.
Degeneration rate
In the absence of beneficial mutations, Muller's ratchet clicks at a constant rate for an infinitely long sequence and non-epistatic fitnesses (Haigh, 1978). If the number of individuals in the least loaded class, given by Ne^{−U_d/s}, is larger than unity, the ratchet clicks slowly, and this rare clicking regime has been investigated using a diffusion theory (Stephan et al., 1993; Gordo and Charlesworth, 2000; Jain, 2008; Etheridge et al., 2009), a path-integral formulation (Neher and Shraiman, 2012) and an effective two locus model (Waxman and Loewe, 2010; Metzger and Eule, 2013). A basic result that has emerged from these studies is that the ratchet rate is exponentially small in the parameter Nse^{−U_d/s} (Jain, 2008; Neher and Shraiman, 2012; Metzger and Eule, 2013). When the population in the least loaded class is smaller than one, deleterious mutations accumulate quickly, and an accurate expression for the ratchet rate has been obtained by Rouzine et al. (2008). However, for a sequence of finite length L, as shown in Fig. 2, the average rate r^−_J at which all the individuals in the Jth class are lost decreases with increasing J. This is a direct consequence of the fact that the number of deleterious mutations that can occur in an individual in the Jth class decreases linearly with increasing J, so that this class can support more individuals. From (9), we see that the number of individuals in the Jth class, given by n_J = N X^{(0)}_J(J) = N(1 − μ/s)^{L−J}, grows with J, and as a result, an initially fast-clicking ratchet crosses over to a slow-clicking ratchet when n_J equals unity. For the data shown in Fig. 2, Muller's ratchet is slow to begin with for N = 2000, but for N = 50, the number n_J crosses unity at J = 23 and lies in the range 17−50 for 80 ≤ J ≤ 100.
For a slow ratchet characterised by n_J ≫ 1, the loss rate r^−_J can be estimated by generalising a diffusion theory for infinitely long sequences (Stephan et al., 1993; Gordo and Charlesworth, 2000; Jain, 2008). A straightforward calculation gives

r^−_J = [ 2 ∫_0^{x̄_J} dy ψ_J(y) ∫_y^{x̄_J} dz / (b(z) ψ_J(z)) ]^{−1} ,  (13)

where x̄_J = X^{(0)}_J(J), ψ_J(y) = e^{−2 ∫^y dx a_J(x)/b(x)}, and a_J(x), b(x) are the drift and diffusion coefficients respectively, given by (14) and (15) below. The drift coefficient a_J(x) is expected to vanish when the fraction in the least loaded class is either zero or equals that at mutation-selection balance (Stephan et al., 1993; Jain, 2008; Jain and Nagar, 2013). Then we can write

a_J(x) = ã x ( X^{(0)}_J(J) − x ) ,  (14)

where a deterministic argument gives the proportionality constant ã to be s − μ (Jain, 2008; Jain and Nagar, 2013). For infinitely long sequences, it has been shown that this expression for ã is not accurate, as it does not take into account that fitness classes do not evolve independently (Neher and Shraiman, 2012). A better estimate of the ratchet rate can be obtained by writing ã = cs, where c is a decreasing function of U_d (Neher and Shraiman, 2012; Metzger and Eule, 2013). The diffusion coefficient b(x) is independent of the class J and given by

b(x) = x(1 − x)/N .  (15)

The degeneration rate r^−_J obtained on numerically integrating (13) is compared with numerical simulations in Fig. 2 for two population sizes, and we observe good agreement for classes with n_J ≫ 1. If the fraction in the least loaded class is small, we can write b(x) ≈ x/N. On using this in (13), the Muller's ratchet rate turns out to be (Jain, 2008; Neher and Shraiman, 2012; Metzger and Eule, 2013)

r^−_J ∼ exp[ −csN X^{(0)}_J(J) ] ,  (16)

where

X^{(0)}_J(J) = e^{−U_d(1 − J/L)/s}  (17)

is obtained on setting ν = 0 in (10).
For a fast ratchet, for which n_J ≪ 1, the click rate for a finite sequence can be approximated by the corresponding results obtained for an infinitely long sequence (Rouzine et al., 2008). However, as we shall see in the following sections, only the rate at which the slow ratchet clicks is relevant to the discussion in this article, and therefore we will not discuss the behavior of the fast ratchet further.
Regeneration rate
When deleterious mutations are absent, a maladapted population adapts by acquiring beneficial mutations. As Fig. 2 shows, while the degeneration rate r^−_J decreases with J, the regeneration rate r^+_J increases, since the beneficial mutation rate increases linearly with J. The adaptation rate depends on the number of beneficial mutants produced per generation, which is given by NU_b. For NU_b ≪ 1, the beneficial mutants arise one at a time and go to fixation sequentially. In this parameter regime, one may assume the loci to act independently and obtain the rate to be

r^+_J = N Jν π(s) ≈ 2NsU_b (J/L) .  (18)

Note that the above expression shows a linear dependence on N and U_b. However, when NU_b ≫ 1, the single locus approximation cannot be used, as the beneficial mutants interfere. In recent years, the rate of adaptation for large populations has been studied (Gerrish and Lenski, 1998; Wilke, 2004; Rouzine et al., 2008; Desai and Fisher, 2007; Park et al., 2010) and for an infinitely long sequence, the adaptation rate has been obtained using various approaches (for a clear review, see Park et al. (2010)). For a single selection coefficient, as is assumed here, the dependence of the regeneration rate on the selection coefficient depends on the details of the model, but the variation with N and U_b may be summarised as (Park et al., 2010)

r^+_J ∼ ln(NU_b) ,  (19)

which shows that the rate r^+_J depends weakly on N and U_b. In this parameter regime, we have tried to obtain an expression for r^+_J for finite sequence length using a traveling wave approach (Rouzine et al., 2008), but the resulting equations do not appear to be amenable to analysis. For this reason, some of our results (see (20) and (23) below) are valid in the NU_b ≪ 1 regime. Figure 2 shows the results of our numerical simulations for the regeneration rate, and we see that the data for N = 50, for which NU_b ≪ 1, compare well with (18). For N = 2000, our data fit a function of the form r^+_J = δ_1 √J + δ_2 J, but at present we do not have an understanding of this result.
Rate matching condition
We first discuss how the population size affects the average minimum number J_m of deleterious mutations when both forward and backward mutations are present. If the population is infinitely large, as it spreads over all the fitness classes, the number J_m = 0. In a large but finite population, since Muller's ratchet operates slowly and adaptation occurs at a fast rate, we expect the population to carry a small fraction of deleterious mutations in the stationary state. When all the individuals in the J_m th class are lost due to deleterious mutations, the population in the fitness class J_m + 1 quickly equilibrates to the deterministic mutation-selection equilibrium with frequency (17) and creates individuals in the J_m th class by back mutations. For NU_b ≪ 1, on matching the rates r^−_{J_m}(N) and r^+_{J_m}(N) given by (13) and (18) respectively, we can obtain the fraction j_m of deleterious mutations numerically. However, to find an analytical expression, we use (16) instead of (13) and immediately obtain

j_m = [1/(2NsU_b)] exp[ −β e^{U_d j_m/s} ] ,  (20)

where β = csNe^{−U_d/s}. The above equation can be solved for j_m iteratively: we first set j_m = 0 on the RHS, which gives j^{(0)}_m = e^{−β}/(2NsU_b), and then find the correction to this solution by expanding the exponential on the RHS of (20) to leading order in j_m. On carrying out these steps, we finally get

j_m ≈ j^{(0)}_m exp( −β U_d j^{(0)}_m / s ) .  (21)

From the above equation, we see that j_m decreases exponentially fast with population size N, and thus a large population will carry a small fraction of deleterious mutations. For NU_b ≫ 1, using the numerical conjecture for r^+_J discussed in the last subsection, we find that the exponential behavior of j_m holds in this regime also. This prediction is tested against the simulation results in the inset of Fig. 3 for NU_b ≫ 1, and we see that it is borne out by the numerical data.
When the population size is decreased, Muller's ratchet clicks at a faster rate but the adaptation rate decreases; refer to Fig. 2. As a result, a smaller population carries more deleterious mutations and j_m increases. As discussed earlier, Muller's ratchet may initially click at a fast rate but slows down as more deleterious mutations are accumulated. Therefore, even for small populations, where J_m is expected to be large, the relevant degeneration rate is given by (13) or (16). The regeneration rate is given by (18) or (19), which increases logarithmically or at most linearly with the population size. As before, an expression for j_m can be obtained by matching the rates. But as the degeneration rate decays fast with N whereas the regeneration rate depends weakly on population size, we may treat the rate r^+_{J_m} as a constant in N. This simplification implies that r^−_J ∼ e^{−csN X^{(0)}_J(J)} ∼ 1, which immediately leads to

j_m ≈ 1 − (s/U_d) ln(csN) .  (22)

Our analytical result (22) is compared with the results of numerical simulations in Fig. 3, and for a wide range of population sizes we see good agreement. When the population size is reduced further such that 2Ns < 1, Fig. 3 shows that j_m is roughly constant in N and close to the single locus theory expression (12) for j̄. Figure 3 also shows the variation of the maximum fraction j_M of deleterious mutations with population size, and we observe that j_M also decreases logarithmically with N but with a prefactor which is larger than −s/U_d. Figure 1 shows our numerical results for the average frequency q̄ = 1 − j̄ of advantageous mutations as a function of the population size. For very small populations (2Ns < 1), as for j_m and j_M, the results from single locus and multilocus theory coincide, as also observed, albeit in a different context, by Gordo and Charlesworth (2001). For very large populations, we expect q̄ to approach the deterministic value, since the sites act independently in this limit; refer to Sec. 3. For moderately sized populations, we see that q̄ < q_1, as already observed in previous studies (Li, 1987; McVean and Charlesworth, 2000). Our data show that the fraction q̄ also increases logarithmically with N, but with a somewhat larger (smaller) prefactor than that obtained for j_m (j_M).
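Equation (22) is simple to tabulate. The sketch below (ours) shows the logarithmic decrease of j_m with N in the intermediate regime, using the value c = 0.35 quoted in the caption of Fig. 2; the expression is only meaningful while 0 < j_m < 1, i.e., for 1 < csN < e^{U_d/s}:

```python
import numpy as np

def jm_log_regime(N, s, Ud, c=0.35):
    """Eq. (22): j_m = 1 - (s/Ud) * ln(c*s*N)."""
    return 1 - (s / Ud) * np.log(c * s * N)

for N in [1e3, 1e4, 1e5, 1e6]:
    print(int(N), round(jm_log_regime(N, s=0.01, Ud=0.1), 3))
```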
We now turn to a discussion of the U_b-dependence of the fraction of deleterious mutations when NU_b ≪ 1. For large U_b, as j_m approaches zero, one may set J = 0 in the degeneration rate (16). Equating this to the expression (18) for the regeneration rate, one immediately obtains an inverse relationship between j_m and U_b. More precisely, from (21), we get

j_m ≈ e^{−β} / (2NsU_b) .  (23)

For small U_b, when beneficial mutations are rare, we expect the single locus theory to work. The behavior of the minimum (j_m), maximum (j_M) and average (j̄) fraction of deleterious mutations with the beneficial mutation rate is shown in Fig. 4, and we observe that all three behave in a similar manner (Li, 1987).
For small U_b, the simulation data match reasonably well with (12), while for large U_b, we see an inverse relationship between j_m and U_b.
Effect of background selection
The results obtained in the last section are useful in understanding the equilibrium properties of a population in which reverse mutations do not occur at all the sites in the sequence (Kaiser and Charlesworth, 2010). We consider an asexual population of N individuals, each with an infinitely long sequence. At L of the sites in the sequence, as described in Sec. 2, both deleterious and beneficial mutations occur, with probabilities μ and ν respectively, and the selection coefficient is s. At the rest of the loci, only deleterious mutations occur; these are distributed according to a Poisson distribution with mean U′_d, and each mutation reduces fitness by a factor 1 − s′. We first consider the case when the sites where only deleterious mutations can occur are under strong selection. In the absence of other linked sites, deleterious mutations accumulate exponentially slowly at such loci (refer to (16)) and the population remains close to the deterministic mutation-selection equilibrium, with the frequency of the least-loaded class being e^{−U′_d/s′}. If these background selection sites (BGS) remain at equilibrium in the presence of other linked loci also, they affect the evolutionary dynamics at the other sites, and their effect is quantified by a reduction in the effective population size to the number of individuals carrying the minimum number of deleterious mutations at the BGS (Charlesworth, 2012; Gordo and Charlesworth, 2001). For the model described at the beginning of this section, we find that the population reaches a stationary state with the minimum fraction j_m(Ne^{−U′_d/s′}) of deleterious mutations at the reversible-mutation loci, which is larger than that in the absence of BGS. In our simulations, for a parameter set with s = 0.02, the minimum number of deleterious mutations was found to increase from 9.7 to 38.8 and from 0.2 to 4.5 for population sizes N = 100 and 2000 respectively when the BGS were included.
A more interesting situation arises when the loci with reversible mutations act as the background selection sites and increase the rate at which Muller's ratchet clicks due to a reduction in the effective population size (Kaiser and Charlesworth, 2010). We numerically measured the ratchet time (the inverse of the rate) and, as shown in Fig. 5, we find that it is considerably decreased from the situation when there are no background selection sites. We also verified that the ratchet time with background selection for a population of size N is well approximated by the ratchet time without BGS for a population of size N_e given by

N_e = N e^{−U_d (1 − j_m)/s} .  (24)

As seen in the last section, with increasing population size N, the minimum fraction j_m of deleterious mutations remains roughly constant for N ≪ (2s)^{−1}, decreases logarithmically with a prefactor −s/U_d, and finally, for large populations, it decays to zero exponentially fast. As a result, we expect N_e in (24) to increase linearly with N for small and large populations. But for the intermediate range of population sizes, using (22) in (24) above, we find the effective population size to be independent of N. These predictions are tested in Fig. 5, where the actual population size is varied over three orders of magnitude, but the effective population size and the ratchet time remain roughly constant.
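Combining (22) and (24) makes the plateau in N_e explicit: inside the logarithmic regime, N_e = N exp(−ln(csN)) = 1/(cs), independent of N. A sketch (ours, again with c = 0.35):

```python
import numpy as np

def Ne_bgs(N, s, Ud, c=0.35):
    jm = max(0.0, 1 - (s / Ud) * np.log(c * s * N))   # Eq. (22), clipped at 0
    return N * np.exp(-(Ud / s) * (1 - jm))           # Eq. (24)

for N in [1e3, 1e4, 1e5, 1e6]:
    print(int(N), round(Ne_bgs(N, s=0.01, Ud=0.1), 1))  # ~1/(c*s) = 285.7
```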
Conclusions
In this article, we examined the stationary state of a model in which forward and backward mutations can occur, for both infinitely large and finite populations. For the deterministic theory, we find that the sequence loci act independently, which allows us to solve exactly for both the dynamics and the equilibrium state. For finite populations, although some analytical results are known if the sites in the sequence are assumed to act independently (Li, 1987; Bulmer, 1991), little is known when the interference between sequence loci is accounted for (Hill and Robertson, 1966; Comeron et al., 2008; Charlesworth, 2012). Here we used a rate balancing argument (Goyal et al., 2012) and numerical simulations to understand how the equilibrium fraction of deleterious mutations depends on the population genetic parameters, viz. the population size N, the selection coefficient s and the mutation rates U_b and U_d.
We find that the minimum (j_m), maximum (j_M) and average (j̄) fractions of advantageous mutations behave in a similar manner and are S-shaped functions of N and U_b. Our analysis shows that the minimum fraction j_m of deleterious mutations approaches zero exponentially fast for very large populations, but increases logarithmically for moderately large ones. A heuristic understanding of the logarithmic dependence may be obtained if we assume that the population fraction at the edge of the population distribution can be approximated by the deterministic frequency X_J(J) given by (10) in the limit ν → 0. For a finite population, since the frequency cannot fall below 1/N (Jain and Krug, 2007b), on setting X_J(J) ∼ N^{−1}, we immediately obtain (22). We also analysed how the fraction of deleterious mutations depends on the beneficial mutation rate, and find that it is roughly constant for small U_b but decreases as U_b^{−1} for large beneficial mutation rates. Although the rate balancing argument used here explains the population size and beneficial mutation rate dependence of the fraction of deleterious mutations, we have not been able to obtain a complete analytical understanding of how it varies with s and U_d, since not enough is known about the function c(s, U_d) that occurs in the degeneration rate (16). However, our numerical simulations show that the average fraction of advantageous mutations is an S-shaped function of the selection coefficient also. We also performed numerical simulations keeping the product Ns constant (= 10) and observed that j̄ is not a function of Ns, unlike the one locus theory prediction (11). We find that if Ns is kept fixed by decreasing N and increasing s, the average fraction j̄ decreases, which suggests that it depends more strongly on s than on N. For s = 0.005, we find that j̄ = 13.7, which increases to 28.8 when s is halved (and N is doubled). Since the average fraction of deleterious mutations depends logarithmically on N (as Fig. 1 shows), this suggests that j̄ decreases linearly with increasing s, similar to the behavior (22) for j_m. We also numerically studied the U_d-dependence of the fraction of advantageous mutations and find that it decreases with increasing U_d; for large j̄, it fits well to an inverse U_d dependence, as seen in (22). It has been pointed out in McVean and Charlesworth (1999) that the frequency of preferred codons in the multilocus model is unlikely to depend on the ratio U_b/U_d, unlike that predicted by (11) for the single site model. Our expression (23) is consistent with this expectation due to the factor e^{−U_d/s} in β. We have also carried out simulations keeping the ratio U_b/U_d fixed and find that j̄ increases with U_d when Ns > 1.
Our results have implications for the codon usage bias problem, in the context of which the model studied here was introduced (Li, 1987; Bulmer, 1991). Previous numerical results (Li, 1987; McVean and Charlesworth, 2000) show that the preferred codon frequency q̄ changes more slowly than predicted by (11), which neglects interference effects arising due to linkage. Our results shown in Fig. 1 also support this conclusion, and for weak selection, where the codon bias problem lies (Hershberg and Petrov, 2008), we find that q̄ depends weakly on the population size; thus the interference between linked loci can maintain intermediate codon bias levels for a wide range of population sizes (Powell and Moriyama, 1997; McVean and Charlesworth, 2000). Our result (22) is also useful in understanding the click rate of Muller's ratchet with background selection that arises when a finite portion of the sequence is in equilibrium under deleterious and back mutations (Kaiser and Charlesworth, 2010). We find that the effective population size, which determines the click rate, has the interesting feature that it remains almost constant while the actual population size is varied over several orders of magnitude.
Experimental (Silander et al., 2007; Howe and Denver, 2008) and theoretical (Lande, 1998) studies suggest that asexual populations do not inexorably decline in fitness and can reach a fitness plateau. Here, due to the presence of back mutations, the finite population achieves an equilibrium state. The model discussed here assumes that the number of beneficial mutations increases linearly with the average number of deleterious mutations. However, in an adaptation experiment on bacteriophage (Silander et al., 2007), a nonlinear relationship between these two quantities has been observed. Although at present we do not know how a more general model of compensatory mutations affects our results, some qualitative features of the experiment of Silander et al. (2007) can be captured by the work presented here. We performed numerical simulations starting with three different initial fitnesses and found that after many generations, the population reaches a steady state fitness which is independent of the initial fitness, a feature also seen in the experiment of Silander et al. (2007). The fitness of the population was also observed to increase with population size in Silander et al. (2007), as seen here. Moreover, the experimental data on the population fitness show that when the population size was increased by a factor of 10, the logarithmic fitness increased mildly, which is also consistent with the weak N-dependence seen here. The model studied here is different from that of Goyal et al. (2012), in which the mutation rates do not depend on the fitness class. For large populations, where few deleterious mutations are present, one may neglect the fitness-dependence of the mutation rates, and that is why we find that our results are essentially the same as equation (7) of Goyal et al. (2012) for large populations. But for smaller populations that accumulate many deleterious mutations before reaching an equilibrium, it is important to take the fitness dependence of the mutation rates into account.
Throughout this article, we assumed that there is no recombination, so that the loci are completely linked, as the effects of interference are expected to be strongest for tightly linked loci. For large recombination rates, we expect the interference effects to be negligible and the single-site theory to work well. The effect of recombination on codon bias and diversity has been investigated numerically in McVean and Charlesworth (2000). An extension of the analytical results presented here to include recombination would be interesting.
Figure 1: The change in the average frequency of advantageous mutations with 2Ns obtained in numerical simulations for L = 100 (crosses) and 1 (triangles). The frequency in multilocus simulations increases logarithmically with population size, but with a prefactor smaller than that for j_m in (22). The prediction (7) from single locus deterministic theory, and (11) and (12) from single locus stochastic theory, are also shown. Parameters: s = 0.01, U_d = 0.1, U_b = 0.001, e^{−U_d/s} = 4.539 × 10^{−5}.
Figure 2: The rate of degeneration (open symbols) and regeneration (closed symbols) for population sizes N = 50 (squares) and 2000 (circles). The degeneration and regeneration rates are calculated in numerical simulations starting from all the individuals in the zeroth and Lth fitness class respectively. The diffusion theory prediction (13) for the degeneration rate is shown as solid lines and uses c = 0.35. The broken lines show the regeneration rate (18) for N = 50 and a fit to the function δ_1 √J + δ_2 J, where δ_1 = 0.0013, δ_2 = 8.02 × 10^{−5}, for N = 2000. Other parameters: L = 100, U_d = 0.05, U_b = 0.005, s = 0.01, e^{−U_d/s} = 6.737 × 10^{−3}.
| 2015-02-24T06:38:45.000Z | 2013-08-05T00:00:00.000 | {
"year": 2013,
"sha1": "3ea44c00119ef5ab6598df1692055e768d9360c6",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "3ea44c00119ef5ab6598df1692055e768d9360c6",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Physics"
]
} |
139460652 | pes2o/s2orc | v3-fos-license | Effects of Different Heat Treatment Process on Mechanical Properties and Microstructure of Q690 Steel Plate
Taking high strength steel Q690 as the research object, the mechanical properties, microstructure and XRD of Q690 steel plate after treated by different processes are studied. The result shows that the mechanical properties of the Q-P-T heat treatment process have been greatly improved, the yield strength is 1112.5MPa, the tensile strength is 1285.9MPa, the elongation is 8.23%, and the surface hardness is about 39HRC. The strengthening and toughening mechanism of Q690 steel after Q-P-T process is discussed. Studies found that the good comprehensive mechanical properties of Q-P-T steel are made up of martensite, retained austenite and dispersed carbide, which are of great significance to improve the strength of the steel.
Introduction
High strength steel with a yield strength of 690 MPa has been widely used in the energy, transportation, construction and other industries due to its excellent performance and economic benefits. Europe is well ahead of the rest of the world in the research and application of marine steel. At present, the marine steels S355, S420, S460 and S690 are widely used in the construction of offshore platforms [1]. Germany's Rhine Bridge Dusseldorf-ILverish and France's Millau Viaduct Bridge are both built with S460 high strength steel. In order to reduce the size of the piers and meet appearance requirements, S690 high strength steel was used in the Nesenbachtalbruke Bridge in Germany [2]. High strength low alloy steels such as the Q690 steel plate have superior strength and toughness compared with traditional carbon-manganese steel [3]. At present, TMCP technology has been widely used in the production of high strength low alloy steel [4]. When the yield strength of high strength steel is higher than 690 MPa, TMCP technology has certain limitations [5]. However, steel plate processed by quenching and high temperature tempering has good microstructure and mechanical properties [6]. Therefore, quenched-and-tempered high strength steel occupies a large share of the world's high-strength steel production.
Quenching is the main method to obtain certain special properties of steel. The purpose is to obtain as much martensite as possible after austenitizing the workpiece. The tempering temperature affects the microstructure and mechanical properties of high strength steel [7]. The quenching medium [8] is one of the important factors that affect the quenching process of steel; selecting a proper quenching medium plays an important role in improving the quenching quality and obtaining a stable quenched structure. Common heat treatment processes include quenching and annealing [9], isothermal quenching, etc. [10] The Quenching-Partitioning treatment proposed by Speer [11] can significantly improve the overall mechanical properties of steel [12]. The performance of high strength steel can be further improved by adopting the quenching-partitioning-tempering (Q-P-T) process [13,14].
In this paper, the effect of different heat treatment processes on the mechanical properties and microstructure of Q690 high strength steel with a thickness of 12 mm is studied.
Experimental materials and methods
The experiment uses Q690 steel plate with a thickness of 12 mm as the raw material; its composition is shown in Table 1. The mechanical properties of Q690 steel plates treated by the different heat treatment processes are tested. The tensile sample is machined into a standard sample with a nominal width of 81 mm according to the standard, and the tensile tests are carried out at a strain rate of 0.002 mm/s on a Zwick/Roell standard tensile testing machine. Hardness is measured on 10×10×100 mm samples with a 500RA Rockwell hardness tester. Standard 10×10×55 mm impact samples are taken in the radial direction, and the impact tests are conducted on a PTM2200-D1 automatic impact testing machine. The microstructure of the samples is observed using a Nikon ECLIPSE MA200 optical microscope (OM). The phase content of the samples is measured with a Rigaku D/max2550VB/PC X-ray diffractometer (XRD).
Different heat treatment processes
The Q690 steel plate is heated to 930 °C and held for 30 min to complete austenitizing, and is then treated by the different heat treatment processes shown in Fig. 1: (a) oil-quenching for 60 s + air-cooling to room temperature; (b) one-step Q-P-T process.
Mechanical property analysis
The tensile curves of the Q690 steel plate after the different heat treatments are shown in Fig. 2. Comparing the curves, the strength of the steel plate treated by the Q-P-T heat treatment process is higher than that of the original steel plate. The reason is that a certain amount of martensite forms during the quenching stage of the Q-P-T process, which increases the strength of the steel. According to the tensile property data for the different heat treatment processes in Table 2, sample c, treated by the Q-P-T process, has superior comprehensive mechanical properties.
The yield strength of sample c treated by the Q-P-T process is 1112.5 MPa, its tensile strength is 1285.9 MPa, and its product of strength and elongation is 10582.96 MPa·%. In contrast, the yield strength of sample b treated by oil quenching is 1080.8 MPa, its tensile strength is 1243.5 MPa, and its product of strength and elongation is only 3295.23 MPa·%. It can be concluded that the comprehensive mechanical properties of the samples treated by the Q-P-T process are much better than those after oil quenching and in the original state, which shows the superiority of the Q-P-T heat treatment.
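As a quick arithmetic check (ours, using the figures quoted above), the product of strength and elongation is simply the tensile strength multiplied by the elongation: 1285.9 MPa × 8.23% = 10582.96 MPa·% for the Q-P-T sample, matching the tabulated value, while the oil-quenched value of 3295.23 MPa·% implies an elongation of 3295.23/1243.5 ≈ 2.65%.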
From the surface hardness measured after the different heat treatment processes, it can be seen that the surface hardness of the steel plate after both oil quenching and the Q-P-T heat treatment is increased compared with that of the original plate. The surface hardness after the Q-P-T process is the highest and is far above that of the parent metal. The reason for the high hardness is that the initial quenching has a higher cooling rate, which leads to a higher martensite content in the microstructure. Before heat treatment, the microstructure of a Q690 steel plate may be in the normalized state (or hot rolled state; generally pearlite + equiaxed ferrite), the cold rolled state (deformed ferrite + pearlite), or a spheroidized state (granular cementite + ferrite, etc.). The experiment takes Q690 steel plate with a thickness of 12 mm as the raw material; its microstructure is uniform tempered martensite, with small carbide particles dispersed in the ferritic matrix, as shown in Fig. 3 (1-c). It can also be seen from the microstructure of the raw material that the grains are relatively small, so the material has good toughness.
Comparing the microstructures of the samples treated by the Q-P-T heat treatment process, as shown in Fig. 3 (3), there are retained austenite and martensite in the Q-P-T samples; the martensite grain boundaries become blurred and carbides are attached to the martensite boundaries. During the Q-P-T process, carbon diffuses outward from the martensite. The carbon near the martensite boundaries diffuses first, which lowers the carbon content of the martensite. During this diffusion, part of the carbon precipitates as carbide and adheres to the martensite boundaries, while the other part dissolves in the austenite, which enriches the austenite in carbon and improves its stability. The XRD results in Fig. 4 show that the retained austenite content is 4.3% in the Q-P-T sample and 8.7% in the raw sample, while it is extremely low in the oil-quenched sample. This further indicates that a certain amount of retained austenite is an important reason for good toughness.
It can also be seen from the martensite diffraction peaks after the different heat treatments that the martensite content is up to 81.6% in the Q-P-T sample and about 20% in the oil-quenched sample. The martensite content is the main factor determining the strength of the steel. Analysing the martensite content thus reveals how the Q-P-T process improves the strength of the material.
Conclusion
This paper studies Q690 steel plate treated by different heat treatment processes; the main conclusions are as follows.
(1) Through comparison and analysis of the mechanical properties of Q690 steel plate treated by the different heat treatment processes, the product of strength and elongation of the Q-P-T steel plate is higher than that obtained with some traditional processes: the yield strength is 1112.5 MPa, the tensile strength is 1285.9 MPa and the elongation is 8.23%. This shows the superiority of the Q-P-T process in improving the comprehensive mechanical properties of the material.
(2) Through analysis of the microstructure and XRD results after the different heat treatment processes, it can be seen that the austenite content in the Q-P-T sample is significantly higher than in the oil-quenched sample. The main reason is the partitioning of carbon from martensite to retained austenite during the carbon partitioning step, which increases the carbon content of the retained austenite and thus improves its stability. Since martensite determines the strength of the steel and retained austenite determines its toughness, the microstructure of martensite plus retained austenite in Q-P-T steel reveals the mechanism by which the microstructure controls the mechanical properties. | 2019-04-30T13:08:03.329Z | 2018-08-07T00:00:00.000 | {
"year": 2018,
"sha1": "27cb20622faa02eeedfaf92302374ac29c39046f",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/394/2/022017",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "4a15a27bbf0e850b1a79028689195b346414748f",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
248392396 | pes2o/s2orc | v3-fos-license | RAPQ: Rescuing Accuracy for Power-of-Two Low-bit Post-training Quantization
We introduce a Power-of-Two low-bit post-training quantization(PTQ) method for deep neural network that meets hardware requirements and does not call for long-time retraining. Power-of-Two quantization can convert the multiplication introduced by quantization and dequantization to bit-shift that is adopted by many efficient accelerators. However, the Power-of-Two scale factors have fewer candidate values, which leads to more rounding or clipping errors. We propose a novel Power-of-Two PTQ framework, dubbed RAPQ, which dynamically adjusts the Power-of-Two scales of the whole network instead of statically determining them layer by layer. It can theoretically trade off the rounding error and clipping error of the whole network. Meanwhile, the reconstruction method in RAPQ is based on the BN information of every unit. Extensive experiments on ImageNet prove the excellent performance of our proposed method. Without bells and whistles, RAPQ can reach accuracy of 65% and 48% on ResNet-18 and MobileNetV2 respectively with weight INT2 activation INT4. We are the first to propose the more constrained but hardware-friendly Power-of-Two quantization scheme for low-bit PTQ specially and prove that it can achieve nearly the same accuracy as SOTA PTQ method. The code was released.
Introduction
In recent years, convolutional neural networks (CNNs) have been widely used in computer vision tasks. The improvement of hardware computation has considerably accelerated model evolution, producing deeper and more complex CNN models in pursuit of even higher accuracy. However, deep CNN models are difficult to deploy on resource-limited edge devices. How to reduce the model scale while maintaining the model accuracy is a trending topic in current research. In this paper, we study quantization, which aims to reduce the bit-width of weights and activations to enable fixed-point computation and a smaller memory footprint.
Based on data usage, model quantization can be divided into three categories: (1) quantization-aware training (QAT), (2) post-training quantization (PTQ) and (3) data-free quantization (DFQ). QAT requires fine-tuning the model on the whole dataset, which inevitably incurs a large amount of GPU resources and time cost. In contrast, PTQ demands only a small set of readily available calibration data. Although DFQ achieves quantization without any dataset, its accuracy has not reached the desired level, and it is difficult to apply in industrial scenarios. Therefore, this paper focuses on improving PTQ performance.
Moreover, scale factors that are constrained to the form of Power-of-Two reduce quantization and dequantization to simple bit-shifts. However, compared with a float scale factor, a Power-of-Two value is essentially a discrete approximation of the float value, which causes more rounding error or more clipping error. This can significantly reduce the performance of the quantized model.
We are the first to implement hardware-friendly Power-of-Two low-bit PTQ, and we surprisingly observe that this constrained Power-of-Two PTQ can achieve nearly the same accuracy as the SOTA PTQ method.
Related Work

Network Quantization

QAT [Krishnamoorthi, 2018] mainly adopts the STE for gradient approximation to solve the non-differentiable rounding problem. [Gong et al., 2019] uses a differentiable quantizer to gradually approach the round function. However, QAT usually depends on the whole dataset and GPU resources to train the model. Without high cost, most models can be safely quantized to 8-bit or even lower bit-widths by PTQ. AdaRound [Nagel et al., 2020] proposed to learn the rounding direction by layer reconstruction, which brings much improvement in accuracy. BRECQ focused more on block reconstruction, with better accuracy under 2-bit weight quantization than AdaRound.

The Power-of-Two scale factor [Miyashita et al., 2016] has the advantage of reducing computation complexity, but it meanwhile produces accuracy loss. To solve this problem, an efficient method, APoT, was proposed for the weights and activations with bell-shaped and long-tailed distributions in neural networks. But APoT is a non-uniform quantization QAT scheme.

Figure 1: Visual explanation of the asymmetric uniform affine quantization grids for a bit-width of 4.
Motivation
For simplicity, we omit the analysis of the bias, as it can be merged into the activation. In this way, the forward propagation of a CNN can be expressed by Equation (1):

y^{(k)} = w^{(k)} * x^{(k)} ,  x^{(k+1)} = R(y^{(k)}) ,  (1)
where * represents convolution, R(·) is the activation function, x^{(k)} is the input of the k-th layer, x^{(k+1)} is the output activation of the k-th layer, and y^{(k)} is the convolution result of the k-th layer.
The essence of quantization is to map floating-point numbers to low-bit fixed-point numbers. The quantization and dequantization process of non-uniform quantization often brings a huge computing burden, so in this paper we focus on uniform affine quantization. To quantize a vector x, with s denoting the scale at which floating-point numbers are mapped to fixed-point numbers, and z denoting the zero-point that shifts the numbers into the specified range, the quantization of the input x can be described by Equation (2):

x̂ = s · [ clip( ⌊x/s⌉ + z, n, p ) − z ] ,  (2)
where ⌊·⌉ represents rounding to the nearest integer. The quantized variables are marked by ˆ, and clip(·) means that the input is clipped into [n, p]; in the case of asymmetric quantization, n = 0 and p = 2^b − 1, where b is the bit-width. If we use a grid diagram to represent the range of fixed-point numbers, we can interpret s as the step length between two grids, while z determines the utilization interval of the fixed-point numbers [Nagel et al., 2021], as shown in Figure 1.
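The following minimal NumPy sketch (ours; the min-max calibration is only illustrative) implements the quantize-dequantize mapping of Equation (2):

```python
import numpy as np

def quant_dequant(x, s, z, b):
    """Asymmetric uniform affine quantization, Eq. (2)."""
    n, p = 0, 2 ** b - 1
    q = np.clip(np.round(x / s) + z, n, p)    # b-bit fixed-point code
    return s * (q - z)                        # dequantized value

x = np.random.randn(1000).astype(np.float32)
s = (x.max() - x.min()) / (2 ** 4 - 1)        # min-max float scale, b = 4
z = np.round(-x.min() / s)                    # x_min maps to grid 0
x_hat = quant_dequant(x, s, z, b=4)
print("MSE:", float(np.mean((x - x_hat) ** 2)))
```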
Relationship Between Scale and Rounding Error & Clipping Error
So far, there is no Power-of-Two quantization framework specifically for low-bit PTQ, but the hardware requirement for Power-of-Two scale factors frequently occurs. The naive method is to first quantize the model with float scale factors, and then replace them with the closest Power-of-Two scale factors. Expressed in Equation (3), this is

x̂ = s_pow2 · [ clip( ⌊x/s_pow2⌉ + z_pow2, n, p ) − z_pow2 ] ,  (3)

where x_min is the smallest floating-point number mapped in the vector x. Equation (4) gives the Power-of-Two scale obtained by the naive method, and Equation (5) gives the zero-point updated after the scale change:

s_pow2 = 2^{⌊log_2 s⌉} ,  (4)
z_pow2 = ⌊ −x_min / s_pow2 ⌉ .  (5)
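A sketch of this naive replacement (ours), which also reports the fraction of values clipped, so that the rounding/clipping trade-off discussed next can be observed directly:

```python
import numpy as np

def naive_pow2(x, b):
    """Naive Power-of-Two scale, Eqs. (4)-(5): round log2 of the float
    min-max scale to the nearest integer and recompute the zero-point."""
    s = (x.max() - x.min()) / (2 ** b - 1)
    s_pow2 = 2.0 ** np.round(np.log2(s))
    z = np.round(-x.min() / s_pow2)
    return s_pow2, z

x = np.random.randn(10000).astype(np.float32)
b = 4
s_pow2, z = naive_pow2(x, b)
code = np.round(x / s_pow2) + z
clipped = np.mean((code < 0) | (code > 2 ** b - 1))
print("pow2 scale:", s_pow2, "fraction clipped:", float(clipped))
```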
The discussion of the Power-of-Two scale can be divided into the following two cases:
• When s_pow2 < s, log_2 s is rounded down to ⌊log_2 s⌋ and the grid step length decreases, resulting in more clipping error, i.e., more data are clipped at the maximum fixed-point number, and the outlier values all become the same value after dequantization.
• When s_pow2 > s, log_2 s is rounded up to ⌈log_2 s⌉ and the grid step length increases, resulting in more rounding error, i.e., more data lie on each grid point, and they all become the same number after dequantization.
To visualize the clipping and rounding errors caused by the Power-of-Two scale factor, we illustrate with specific quantized data distributions. We use 6-bit symmetric uniform affine quantization for the pre-trained weights of DarkNet53 [Redmon and Farhadi, 2018] on the Hand dataset [Mittal et al., 2011]. Figure 2 shows the histogram of the data distribution of the quantized weights at the 73rd layer. The orange mask is the distribution of the data quantized with the normal scale, and the blue histogram is the distribution of the data quantized with the Power-of-Two scale. Figure 2(a) shows the data distribution using the 2^{⌊log_2 s⌋} quantization, and Figure 2(b) shows the data distribution using the 2^{⌈log_2 s⌉} quantization. Clipping error and rounding error always trade off against each other.
The selection of the scale in Power-of-Two quantization is essentially a trade-off between rounding error and clipping error. The naive method simply finds the Power-of-Two scale with the smallest numerical difference from the original scale, which has no direct connection to model accuracy. Reducing the numerical difference between the Power-of-Two scale and the original scale in every layer does not necessarily decrease the task loss of the model. Besides, even using a greedy strategy to directly select the best Power-of-Two scale for a single layer often yields only a locally optimal scale factor. We therefore argue for a more solid method, which adopts the task loss as the criterion for choosing Power-of-Two scales, theoretically trades off the clipping error and rounding error, and eventually obtains the optimal solution for the whole network.
Regression Loss Function for Reconstruction
The recent PTQ work AdaRound [Nagel et al., 2020] performed a second-order Taylor expansion of the difference between the task losses before and after quantization under two strong assumptions, finally converting the difference into a layer-wise L-2 loss between the feature maps before and after quantization. BRECQ changed the assumptions of [Nagel et al., 2020] and applied this theory to block reconstruction, eventually converting the task loss into an L-2 loss minimization between the feature maps before and after quantization of each block. These crude assumptions bring a crude conclusion: the reconstructed regression loss functions all become L-2 losses. For L-2 loss minimization, it is easy to show that the regression value is the mean of the array. The mean is sensitive to outliers, which means that using L-2 loss minimization will produce more rounding error than clipping error. If other reconstruction schemes are used instead of L-2 loss minimization, this comes back to the problem of trading off rounding and clipping errors mentioned in Sec 3.1.
Method
The two challenges mentioned in Sec 3.1 and Sec 3.2 have led to a collapse in model accuracy for Power-of-Two low-bit PTQ, a phenomenon that is more pronounced in light-weight networks like MobileNetV2. In this section, we propose two methods to rescue Power-of-Two PTQ from this accuracy collapse. Both methods are theoretically well-founded and show significant performance improvements in practice.
Power-of-Two Scale Group
To address the first problem mentioned in Sec 3.1, we abandon the naive method of determining the Power-of-Two scale layer by layer, and instead look for the Power-of-Two scale group of the entire network or block.
The goal of trading off rounding error and clipping error is to make the model more accurate, i.e., to give the model a lower task loss. So we directly use the task loss as the metric to evaluate quantization performance:

arg min_{ŵ ∈ D_Q} E[ L(x, tgt; ŵ) ] ,  (6)
where L(·) is the loss function of the model, ŵ represents the quantized model weights, and D_Q is the discrete space of fixed-point numbers. In QAT, this process is easy to optimize by stochastic gradient descent, updating the quantization parameters and weights. But in the case of PTQ, we can only calibrate the model with a small portion of the dataset. To solve this problem, [Nagel et al., 2020] degenerated the loss difference, using a second-order Taylor expansion, into Equation (7):

E[ L(x, tgt; w + ∆w) − L(x, tgt; w) ] ≈ ∆w^T ḡ^{(w)} + (1/2) ∆w^T H̄^{(w)} ∆w ,  (7)

where x is the input to the model, tgt is the ground truth, w is the original weight of the model, and ∆w is the perturbation brought to the model weights by quantization.
However, the calculation of the Hessian matrix is too complex, and solving this problem is not affordable with the computing resources of PTQ. Thus, they assumed that the layers are mutually independent and that the second-order derivatives of the pre-activations form a constant diagonal matrix. In this way, they transformed the problem into an L-2 loss minimization with layer-by-layer feature map reconstruction. Solving this problem only requires focusing on the current layer and solving each subproblem, as shown in Equation (8):
arg min_{Ŵ^{(k)}} E[ || W^{(k)} x^{(k)} − Ŵ^{(k)} x^{(k)} ||²_F ] .  (8)

[Li et al., 2020] generalized this work. They ignored inter-block dependencies and used the diagonal Fisher information matrix (FIM) instead of the pre-activation Hessian matrix [LeCun et al., 2012]. Our optimization objective can then be converted into a block-by-block feature map reconstruction problem, as shown in Equation (9):

arg min E[ ∆y^{(k)T} diag( (∂L/∂y^{(k)}_1)², ..., (∂L/∂y^{(k)}_n)² ) ∆y^{(k)} ] ,  (9)

where ∆y^{(k)} is the change of the output caused by quantization, and the middle term is the diagonal Fisher information matrix.
According to Equation (9), if we want to calculate the optimal Power-of-Two scale and weights for the whole block, we no longer need to calculate the extremely complicated Hessian matrix, but we still need to solve an NP-hard discrete optimization problem, because the calculation of ∆y^{(k)} requires ŷ^{(k)} from the discrete quantization space. Therefore, we relax Equation (9) to the continuous optimization problem of Equation (10), which is based on soft quantization variables, in order to solve it using the back-propagation algorithm:
arg min_{U,V} || y^{(k)} − ỹ^{(k)} ||²_F + λ f_reg(U) + μ f_reg(V) ,  (10)

where ||·||²_F denotes the L-2 loss and ỹ^{(k)} is the result of computing with the soft-quantized weights W̃ and the input. So-called soft quantization first replaces the discrete quantized variables with continuous float variables in order to back-propagate and help the model converge. With the help of the differentiable regularizers λf_reg(U) and μf_reg(V), the variables will eventually converge or be clipped to the truly quantized fixed-point discrete space. Expanding W̃, we have Equation (11):
W̃ = s̃_pow2 · clip( ⌊W/s̃_pow2⌋ + h_2(V), n, p ) ,  s̃_pow2 = 2^{⌊log_2 s⌋ + h_1(U)} ,  (11)

where ⌊·⌋ is the downward rounding operation, W̃ is the soft-quantized weight, and s̃_pow2 is the soft Power-of-Two scale. h_1(·) and h_2(·) are differentiable functions that take values in [0, 1]; they process the trainable tensors U, V in the soft quantization and eventually map them to 0 or 1.
However, we cannot guarantee that the trainable variable U of the scale and the variable V of the weights in Equation (10) converge at the same time. If V converges before U, the converged Power-of-Two scale will not match the converged weights. Therefore, we convert the nonlinear programming problem into two binary constrained optimization problems, solved in two steps.
1. Look for the optimal solution for the Power-of-Two scale group by Equation (12):

arg min_U || y^{(k)} − ỹ^{(k)} ||²_F + λ f_reg(U) .  (12)

Freeze the Power-of-Two scale after the variable U converges.

2. Look for the optimal solution for the quantized weights Ŵ by Equation (13):

arg min_V || y^{(k)} − ỹ^{(k)} ||²_F + μ f_reg(V) .  (13)

After convergence of the variable V, the quantized weights Ŵ are stored.
Thus problem (10) is transformed into two binary constrained optimization problems. Both are large-scale combinatorial problems, and referring to the work of [Nagel et al., 2020], we solve them using an efficient approximation algorithm, the Hopfield method [Hopfield and Tank, 1985]. For the training functions h_1(·), h_2(·) we adopt the rectified sigmoid function, as shown in Equation (14), which was proposed in [Louizos et al., 2018]:

h(x) = clip( σ(x)(ξ − γ) + γ, 0, 1 ) ,  (14)
where ξ and γ are the stretching parameters, fixed at 1.1 and -0.1, respectively. They help the rectified sigmoid function to converge more easily to the extremities 0 and 1, and are not as prone to gradient vanishing as sigmoid function. For regularizer we choose: Because in this regularizer, we can achieve annealing by controlling the β. h(x) can easily converge to 0 or 1 when the value of β drops to a low value. The activations cannot be quantized using adaptive rounding because they vary with different input. Thus, we can only adjust its zero-point and Pow-of-Two scale. Referring to back propagation of TQT [Jain et al., 2019], when calculating the gradient, we approximate by taking s ≈ 2 log 2 s and x s + z ≈ x s + z. Then Power-of-Two scale s pow−2 can be back-propagated by Equation (16).
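The rectified sigmoid and the β-annealed regularizer can be written down directly; a minimal PyTorch sketch, following Equations (14) and (15) as reconstructed above:

```python
import torch

def rectified_sigmoid(v, xi=1.1, gamma=-0.1):
    # Equation (14): stretched sigmoid, clipped to [0, 1]
    return torch.clamp(torch.sigmoid(v) * (xi - gamma) + gamma, 0.0, 1.0)

def f_reg(v, beta):
    # Equation (15): annealing regularizer; as beta is lowered, h(v)
    # is pushed towards the binary extremes 0 and 1.
    h = rectified_sigmoid(v)
    return (1.0 - (2.0 * h - 1.0).abs().pow(beta)).sum()
```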
BN-based L-P Loss
To address the second problem mentioned in Sec 3.2, we introduce the minimum L-P loss problem, Equation (17), to measure the difference between the feature maps before and after quantization:

$$\arg\min_{\hat w} \; \left\| y^{(k)} - \hat y^{(k)} \right\|_P, \tag{17}$$
where P ∈ [1, +∞). The following conclusions hold:

• L-1 loss is Median regression. It is not sensitive to outliers, since the Median simply balances the number of positive and negative deviations.

• L-2 loss is Mean regression, which is more sensitive to outliers, since its positive and negative deviations sum to 0.

• L-∞ loss is Midrange regression, which is highly sensitive to outliers: the maximum positive deviation and the minimum negative deviation sum to 0.

From these three special cases it is easy to see the well-established result that the sensitivity of the L-P regression value to outliers increases as P increases.
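A tiny numerical illustration of the three special cases, using the closed-form minimizers (median for L-1, mean for L-2, midrange for L-∞):

```python
import numpy as np

data = np.array([1.0, 2.0, 3.0, 4.0, 100.0])     # one large outlier

l1_fit = np.median(data)                  # minimizes sum |x - c|   -> 3.0
l2_fit = np.mean(data)                    # minimizes sum (x - c)^2 -> 22.0
linf_fit = (data.min() + data.max()) / 2  # minimizes max |x - c|   -> 50.5

print(l1_fit, l2_fit, linf_fit)
```

The outlier barely moves the L-1 fit, drags the L-2 fit strongly, and dominates the L-∞ fit entirely.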
It is not reasonable to use a single L-P loss for all layers, because the activation distributions differ greatly from layer to layer. For example, when quantizing the last layer we should preserve its outliers, because they become the most important scores for predicting the classification; when quantizing a middle layer, a regression that is insensitive to outliers lets the data distributions before and after quantization match more closely.
The batch-normalization (BN) transform is given by Equation (18):

$$y^{(k)} = \gamma \cdot \frac{\tilde y^{(k)} - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}} + \beta_{bn}, \tag{18}$$

where ỹ^(k) is the original activation of the layer, y^(k) is the activation after BN, and β_bn is the BN shift (not to be confused with the adjustable β of Equation (19)). µ_B, σ²_B are the Mean and Variance of the mini-batch during training. When the model stops training, µ_B and σ²_B are replaced by µ_running and σ²_running, the running Mean and running Variance of the statistics collected in training. γ and σ²_running reflect, to some extent, the statistical Variance of the data on which the model was previously trained. The Variance measures the degree of deviation between a random variable and its mathematical expectation, and can therefore reflect the degree of dispersion of the data.
γ reflects the Variance information of the data after BN, while σ²_running reflects the Variance information of the data before BN. In practical hardware inference, the BN layer is fused into the convolutional layer, so we need the Variance information after BN. We take each block as a unit; layers that do not belong to any block are treated as single-layer units. We take the BN parameter γ of the last layer of each unit, and use the average γ̄ over all channels as the deviation-degree flag of that layer. We then define the formula for calculating the P-value of the k-th layer's BN-based L-P loss, Equation (19), where α, β are two adjustable parameters, α ∈ (0, 1], β ∈ ℝ.

In order to satisfy the L-P norm definition, P ≥ 1 is necessary, but the model is difficult to converge and prone to large clipping errors when P > 2. Therefore, we introduce a differentiable function whose value domain is [1, 2]. The larger the P-value, the harder the model converges; but when the number of iterations is sufficient, we can increase the disparity of P-values by turning up α. When the model has extreme γ values, β is needed to reduce the gradient-vanishing problem of the sigmoid function. For ease of calculation, we usually keep the P-value to 1-2 decimal places.
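A minimal sketch of a BN-based L-P loss is shown below. Note that the exact form of Equation (19) is not reproduced here: the sigmoid mapping of the channel-averaged γ into [1, 2] is an assumption, chosen only to be consistent with the constraints stated in the text (value domain [1, 2], α controlling the disparity, β shifting against gradient vanishing):

```python
import torch

def p_value(bn_gamma, alpha=0.9, beta=1.0):
    # Hypothetical instantiation of Equation (19): map the channel-averaged
    # |gamma| through a sigmoid so that P lands in the interval [1, 2].
    g_bar = bn_gamma.abs().mean()
    p = 1.0 + torch.sigmoid(alpha * g_bar + beta)
    return torch.round(p * 100) / 100      # keep 1-2 decimal places

def lp_loss(y_fp, y_q, p):
    # Feature-map regression loss with layer-dependent exponent P.
    return (y_fp - y_q).abs().pow(p).mean()
```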
Experiments
It is widely recognized by peers that for CNN-CV tasks, strong results of a quantization method on classification models are the basis for strong results on other kinds of task models. So we have conducted extensive experiments on the ImageNet [Russakovsky et al., 2015] dataset to demonstrate the superiority of our method. We randomly pick a total of 1024 images for PTQ calibration in each experiment.

[Algorithm 1 (fragment, recovered from the page layout): for each unit i = 1, 2, ..., N, run I_a iterations to find the s_pow−2 group of the activations; return the Power-of-Two quantized model.]

To be fair, we optimized each model with 80,000 weight iterations and 5,000 activation iterations in order to reach full convergence. Here the parameter α of Equation (19) is set to 0.9 and β is set to 1. Although it is not listed in the tables, we have confirmed that we can achieve better results than the reported numbers by fine-tuning α, β in Equation (19) for each model. The optimization of the Power-of-Two scale group is done during the weight warm-up process.

This section is divided into four parts. The first part is an ablation study demonstrating the effectiveness of our two methods. The second part is a comparison with Power-of-Two quantization work, which shows that our work achieves SOTA. The third part is a comparison with SOTA PTQ work, which shows that we can still achieve performance close to that of unconstrained quantization work while satisfying the hardware-friendliness constraint. The fourth part targets time-limited scenarios: to obtain good quantization results quickly, the L-P loss and iteration settings of the fourth part can be used.
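For reference, the 1024-image calibration subset described above can be drawn as follows (a minimal torchvision sketch; the dataset path is a placeholder):

```python
import torch
from torchvision import datasets, transforms

tf = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
# The dataset path is a placeholder.
imagenet = datasets.ImageFolder("/path/to/imagenet/train", tf)
idx = torch.randperm(len(imagenet))[:1024].tolist()   # 1024 random images
calib_loader = torch.utils.data.DataLoader(
    torch.utils.data.Subset(imagenet, idx), batch_size=32, shuffle=False)
```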
Ablation Study
It is generally accepted that MobileNetV2 [Sandler et al., 2018] is one of the most difficult lightweight networks to quantize, because its weights are extremely susceptible to perturbations. To show the superiority of our method, we conduct ImageNet experiments using MobileNetV2. As shown in Table 3, we perform an ablation study with INT2 weights and INT4 activations (W2/A4). It is easy to see that the Power-of-Two scale group (Po2 SG) method rescues the accuracy of MobileNetV2, and the BN-based L-P loss further improves it.
Comparison with SOTA Power-of-Two
As shown in Table 1, we compare our experimental results with TQT and APoT. TQT uses a uniform affine QAT scheme and APoT uses a non-uniform affine QAT scheme, while we use a uniform affine PTQ scheme. Both TQT and APoT need the whole dataset, while we need only 1024 images of it, and they spend more than 10 times as long as we do to quantize a model. APoT [Li et al., 2021] achieves better W4/A4 results than we do. We are the first Power-of-Two quantization work to quantize MobileNetV2 at W2/A4.
Comparison with SOTA PTQ
As Table 2 shows, we compare our hardware-constrained PTQ work with PTQ work that has no such constraint. Our experimental results on RegNet [Radosavovic et al., 2020] are better than those of SOTA PTQ without hardware constraints, thanks mainly to the method of Sec 4.2, since a Power-of-Two scale by itself performs worse than an ordinary scale.
Quick Mode
Many application scenarios do not have extreme accuracy requirements and instead value shorter quantization time.

For such scenarios, we introduce a Quick Mode with only 20,000 weight iterations and 1,000 activation iterations. Correspondingly, the parameter α in Equation (19) is set to 0.1 and β is set to 1. This scheme takes only 10 minutes to quantize ResNet-18 on an Intel i9-10980XE + Nvidia RTX 3090. As shown in Table 4, despite the very short runtime, it still achieves notable accuracy.
Conclusion
In this paper, we propose RAPQ, a Power-of-Two low-bit post-training quantization framework. First, we analyze the reasons for the accuracy collapse of Power-of-Two PTQ: the failure to theoretically trade off rounding error against clipping error, and the rough setting of the regression loss during reconstruction. For the first reason we propose a method that finds the Power-of-Two scale group of a CNN model; for the second we propose a method that formulates the regression loss based on the BN information of each unit. The experiments show that our work not only reaches SOTA in the field of Power-of-Two quantization, but also does not fall short of other unconstrained quantization methods. When quantizing MobileNetV2 at W2/A4, our work achieves an accuracy of 48%, which no previous work on Power-of-Two quantization (including QAT) had achieved. | 2022-04-27T06:47:52.907Z | 2022-04-26T00:00:00.000 | {
"year": 2022,
"sha1": "6d7d91738e3ca04b9bcf7edcfc82ebe5bd063a44",
"oa_license": null,
"oa_url": "https://www.ijcai.org/proceedings/2022/0219.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "6d7d91738e3ca04b9bcf7edcfc82ebe5bd063a44",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
213181193 | pes2o/s2orc | v3-fos-license | Rapid Discovery of Aspartyl Protease Inhibitors Using an Anchoring Approach
Abstract Pharmacophore searches that include anchors, fragments contributing above average to receptor binding, combined with one‐step syntheses are a powerful approach for the fast discovery of novel bioactive molecules. Here, we are presenting a pipeline for the rapid and efficient discovery of aspartyl protease inhibitors. First, we hypothesized that hydrazine could be a multi‐valent warhead to interact with the active site Asp carboxylic acids. We incorporated the hydrazine anchor in a multicomponent reaction and created a large virtual library of hydrazine derivatives synthetically accessible in one‐step. Next, we performed anchor‐based pharmacophore screening of the libraries and resynthesized top‐ranked compounds. The inhibitory potency of the molecules was finally assessed by an enzyme activity assay and the binding mode confirmed by several soaked crystal structures supporting the validity of the hypothesis and approach. The herein reported pipeline of tools will be of general value for the rapid generation of receptor binders beyond Asp proteases.
The discovery and development of novel drugs is a highly time-, resource- and investment-intensive undertaking with a very low success rate compared with other industrial development processes. Often it starts with a high-throughput screening campaign, but the final discovery of a bioactive lead involves many different disciplines, including biochemistry, cell biology, pharmacology, structural biology and computational chemistry. Bottlenecks of early-stage discovery are often the time-consuming and expensive high-throughput screening and the subsequent delineation and expansion of hits. We recently introduced a specialized pharmacophore search technology, AnchorQuery, that brings interactive virtual screening for novel protein-protein interaction inhibitors to the desktop. [1,2] The technology is based upon a database of >30 million virtual compounds. Every library compound is accessible through one-step multi-component reaction (MCR) chemistry and contains an anchor motif that is bioisosteric to an amino-acid residue. An anchor is defined as an amino-acid side chain in the interface of a protein-protein interaction that contributes above average to its energetics, for example a side chain that buries a large fraction of surface area at the core of the binding interface. [3] Anchors are usually part of energetic hot spots. [4] The value of AnchorQuery has been proven by the discovery of multiple novel and bioactive MCR scaffolds as direct or allosteric modulators of p53/MDM2 [5] or PDK1. [6] The current limitation of AnchorQuery is that it was designed for small molecules mimicking amino acid side chains. However, the concept of an anchor combined with one-pot MCR chemistry can be useful not only for protein-protein interactions but, as demonstrated in this report, also in other contexts such as fragment-based drug discovery. Thus, we provide here a generalized AnchorQuery pipeline of tools implemented for the discovery of novel Asp protease inhibitors (Figure 1).
We chose endothiapepsin as an archetypical Asp protease which, although not a drug target per se, has received considerable attention as a relevant surrogate in drug discovery programs. Moreover, the enzyme can be easily obtained in large amounts and remains stable and active even after 20 days at room temperature. [7] The ease of crystallization, together with the considerable sequence similarity and folding architecture shared with related drug targets, explains its use in a hit-to-lead project for β-secretase inhibitors. [8] Interestingly, renin inhibitors could also be co-crystallized with endothiapepsin, providing valuable information on the binding mode of the compounds. [9] Endothiapepsin is a monomer with two structurally similar domains. Each domain contributes one aspartic acid to the catalytic dyad: D35 and D219 (Figure 2A). In the first step of the catalytic mechanism, D35 is believed to be deprotonated, whereas D219 is protonated. [10] Typical warheads for Asp proteases include primary and secondary amines, guanidines, amidines, hydrazides, carboxylic acids, alcohols, imidazoles and pyrazoles. [11] Surprisingly, however, there is no established warhead that interacts equally with the two oxygens of an aspartic acid residue. The simplest structure in organic chemistry able to interact with two carboxylic acids bears two nitrogens, thus constituting a hydrazine moiety (Figure 2B). While endothiapepsin is active at acidic pH, the hydrazine moiety has the advantage of being protonated under these conditions, thus forming ionic interactions with the carboxylic acids. NMR studies and quantum chemical calculations for alkyl- and arylhydrazines indicate that protonation is possible at either the exo- or the endo-nitrogen, providing a diverse arrangement of possible interactions (Figure 2B). [12] Hydrazine has unique attributes not present in common warheads, with the potential for combined ionic and hydrogen bonds toward all four oxygen atoms of the catalytic dyad. Thus, we chose hydrazine as our warhead moiety.
We designed a scaffold that could be easily accessed by multi-component reaction (MCR) chemistry, incorporating hydrazine as the warhead motif (Figure 3A). [13,14] Hydrazine is used as the amine component in an Ugi-tetrazole reaction, which was chosen due to the shape complementarity of the scaffold with the target protein. [15] Synthetically, the scaffold is accessed in a two-step synthesis, starting from a 4-component Ugi-tetrazole reaction, followed by Boc-deprotection. [16] Diversity can easily be achieved through the oxo-component (aldehydes and ketones) and the isocyanides. The target compounds are isolated as HCl salts, owing to the activity of the enzyme under acidic conditions. Initially, we screened a small library of 17 derivatives, of which five showed inhibitory activity (Figure 3B). For the biochemical evaluation, we employed a fluorescence-based assay adapted from an established HIV-protease assay. [17] Five compounds of the first set showed low to moderate inhibitory activity. In order to gain structural insight, a crystal structure of compound 3a was obtained by soaking (Figure 5A, SI Figure S2). In this case, only the exo-nitrogen of the hydrazine warhead interacts with the catalytic dyad. Interestingly, the tetrazole ring forms a hydrogen bond with Gly80.
Next, we aimed to optimize the scaffold using the hydrazine moiety as an anchoring fragment. We therefore developed a protocol for tailor-made virtual library screening. The workflow of this protocol has not been automated; but, in contrast to AnchorQuery, there is no limitation on the design of the library, as long as the chemistry is deterministic (detailed protocol described in the SI). Moreover, in contrast to public compound databases, a particular scaffold of interest can be optimized by including commercially available starting materials.
The first step of the protocol is the enumeration of a virtual library, starting from commercially available starting materials (in this case: aldehydes and ketones). Isocyanides accessible from primary amines or oxo components were included: starting from amines via the Ugi procedure [18], from aldehydes/ketones via the Leuckart-Wallach procedure [19], or from the reaction of the glycine isocyanide (methyl 2-isocyanoacetate) with primary amines towards extended isocyanoacetamides. [20] The virtual libraries were created using the Reactor software [21], including the Boc-deprotection post-modification. In our library design, we included ~150 aldehydes/ketones and 120 isocyanides, representing a chemical space of 18,000 possible combinations, not including stereoisomers. The Reactor-generated molecules were converted into 3D conformers using the Moloc software. For the 3D anchoring of the hydrazine fragments, different protonation states and orientations between the catalytic aspartic acid residues (D35, D219) were considered and used to position ('fix') the library against the fragments within the catalytic site. The Pharmit software was used to remove clashes occurring during positioning of the library. [22] Moreover, at this stage geometrical cut-off criteria were applied, discarding molecules that clashed with the receptor. Lipinski's rule of five was applied to further filter putative candidates. A final energy minimization was performed with Moloc. [23] Twelve optimized hits were selected, first by visually inspecting the poses and then by using the Scorpion software to quantitatively score the interactions. [24] In the end, the predicted compounds were synthesized and tested in the fluorescence-based assay, and for the most active compounds the IC50 values were determined (Figure 4).
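A minimal sketch of such a library enumeration with a Lipinski filter is shown below, using the open-source RDKit toolkit rather than the Reactor software cited in the text. The reaction SMARTS is a simplified, hypothetical one-step coupling used purely for illustration — it is not the actual Ugi-tetrazole transformation — and the input SMILES are arbitrary examples:

```python
from rdkit import Chem
from rdkit.Chem import AllChem, Descriptors

# Hypothetical one-step coupling written as reaction SMARTS, purely for
# illustration -- NOT the actual Ugi-tetrazole transformation.
rxn = AllChem.ReactionFromSmarts("[CX3:1]=[O:2].[NX3;H2:3]>>[C:1][N:3]")

oxo_components = [Chem.MolFromSmiles(s) for s in ("CC=O", "O=Cc1ccccc1")]
amines = [Chem.MolFromSmiles(s) for s in ("NCC", "Nc1ccccc1")]

def passes_lipinski(mol):
    # Lipinski's rule-of-five filter, as applied to the virtual library.
    return (Descriptors.MolWt(mol) <= 500
            and Descriptors.MolLogP(mol) <= 5
            and Descriptors.NumHDonors(mol) <= 5
            and Descriptors.NumHAcceptors(mol) <= 10)

library = []
for oxo in oxo_components:
    for amine in amines:
        for (product,) in rxn.RunReactants((oxo, amine)):
            Chem.SanitizeMol(product)
            if passes_lipinski(product):
                library.append(Chem.MolToSmiles(product))
print(len(library), "library members")
```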
3D structural geometries are key to understanding the binding mode of the active compounds and to validating our approach regarding the docking workflow and the correlation between docking poses and crystal structures. We were able to obtain a crystal structure by soaking for the most active compound of the 2nd set, compound 8b (Figure 5B, SI Figure S3). In this case, compound 8b interacts with the catalytic dyad through both the exo- and endo-nitrogens of the hydrazine warhead. As in the case of compound 3a, the tetrazole ring is involved in a hydrogen bond with the backbone NH of Gly80. Moreover, the benzodioxole motif is involved in a hydrogen bond with the OH group of Tyr226, and the molecule engages in multiple hydrophobic interactions. One more crystal structure was obtained for compound 3b from the 2nd set (SI Figure S4). This smaller and more hydrophobic compound, although still able to interact with the catalytic dyad, lacks the hydrogen bond with Gly80. In the fluorescence-based assay, compound 3b showed very low inhibitory activity. The data from the crystal structures, together with the fluorescence-based assay results, gave valuable insight into the binding mode of the compounds and the structural features required for inhibition.
Since our aim is to evaluate the accuracy of the predictions of the docking workflow, we compared the obtained crystal structures with the docking poses of the compounds. In virtual screening, 10 conformers were generated for each compound (Figure 5C,D). A comparison of the crystal structures with the different docking poses showed that the overlap of the warhead was almost perfect; differences were mainly observed in the conformation of the terminal cyclohexyl ring. From the enumerated library, we immediately excluded compounds that clashed with the receptor and focused on compounds that had the right size and orientation to bind to the active site. Although very weak binders such as compound 3b could not be excluded at this stage of the docking selection, they still provide interesting structural information for further optimization of the scaffold. It should be noted that accurately correlating the binding poses with biological activity is not possible and lies beyond the aim of the developed workflow. However, this anchor-based approach shows how an anchor warhead can be incorporated into an MCR scaffold and optimized without major synthetic effort.
In summary, we have introduced a generalized protocol for the AnchorQuery approach which overcomes the current limitation to amino-acidogenic anchors. Anchors are fragments that contribute significantly to affinity in protein binding and, more generally, in receptor-ligand interactions. Thus, anchor fragments comprise valid starting points for growing leads, which can be validated rapidly when combined with a high-diversity convergent chemistry such as MCR.
Thus, we designed an MCR scaffold with a novel warhead for aspartic proteases. In this approach, the scaffold could be accessed by a simple two-step methodology. The biological evaluation of the hits, together with the determined crystal structures, indicates that the design and optimization of our libraries was successful. Although these are not yet highly potent inhibitors of this enzyme, we were able to analyze the interactions of our MCR scaffold and gained valuable insights into the adopted binding modes.
Moreover, the docking protocol for tailor-made virtual libraries can be applied to different chemical reactions and fragments, enabling the computational evolution of libraries that are not part of public databases. The choice of the fragment-anchor is the determining step in this protocol; it should comprise a sequence of atoms present as a common motif throughout the entire library. These atoms should contribute significantly to the binding interactions between the designed ligands and the protein. For instance, the anchor could be the motif binding in the enzyme's active site, whereas in protein-protein interactions it could be a moiety deeply buried in the interface.
To the best of our knowledge, currently available docking software cannot optimize a specific scaffold/chemistry of interest by focusing on the possible combinations of commercially available starting materials. The libraries in this approach are not limited to multi-component reaction (MCR) scaffolds; any sequence of organic reactions would work similarly, and broader chemistry schemes can be applied, including post-modifications. We envision future applications either in docking novel scaffolds towards biological targets or in optimizing a scaffold of interest. As shown in this case study, departing from commercially available starting materials, thousands of compounds can potentially be accessed. Our protocol can significantly support the decision-making process of prioritizing docking hits as candidates for chemical synthesis, requiring fewer resources and shorter times compared to strategies that still involve a significant serendipity and random-trial component.
Experimental Section
See the Supporting Information for experimental details. | 2020-03-20T13:05:27.346Z | 2020-03-18T00:00:00.000 | {
"year": 2020,
"sha1": "addce3cacfea390f19b7b67325e5526ee5b4fd7a",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1002/cmdc.202000024",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "d71cd6928e524b00fd0d2756ee0030a8c0323889",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
1097562 | pes2o/s2orc | v3-fos-license | No N=4 Strings on Wolf Spaces
We generalize the standard $N=2$ supersymmetric Kazama-Suzuki coset construction to the $N=4$ case by requiring the {\it non-linear} (Goddard-Schwimmer) $N=4~$ quasi-superconformal algebra to be realized on cosets. The constraints that we find allow very simple geometrical interpretation and have the Wolf spaces as their natural solutions. Our results obtained by using components-level superconformal field theory methods are fully consistent with standard results about $N=4$ supersymmetric two-dimensional non-linear sigma-models and $N=4$ WZNW models on Wolf spaces. We construct the actions for the latter and express the quaternionic structure, appearing in the $N=4$ coset solution, in terms of the symplectic structure associated with the underlying Freudenthal triple system. Next, we gauge the $N=4~$ QSCA and build a quantum BRST charge for the $N=4$ string propagating on a Wolf space. Surprisingly, the BRST charge nilpotency conditions rule out the non-trivial Wolf spaces as consistent string backgrounds.
Introduction
The critical (non-topological) N = 4 strings have been known since 1976 [1], but they received little attention in the literature because of their apparently 'negative' critical dimension. By the critical dimension one actually means the formal number of irreducible 2d scalar N = 4 multiplets whose contribution to the conformal anomaly cancels the contribution of the N = 4 ghosts that arise in gauge-fixing the N = 4 superconformal supergravity multiplet. A closer inspection of the argument reveals at least two relevant assumptions: (i) it is implicit that the N = 4 string constraints have to form the 'small' linear N = 4 superconformal algebra (SCA) having the su(2) affine Lie subalgebra, and (ii) the background space in which such N = 4 strings are supposed to propagate is flat.
In this paper, we are going to challenge both assumptions in an attempt to find new consistent N = 4 string theories. First of all, we replace the 'small' linear N = 4 SCA by the more general non-linear N = 4 quasi-superconformal algebra (QSCA) found by Goddard and Schwimmer [2], which is closely related to the 'large' linear N = 4 SCA having two affine su(2) subalgebras. Second, we choose a coset G/H as the embedding space. The embedding space should be general enough to accommodate as many representations of the underlying QSCA as possible, but not too general, in order to still allow an explicit treatment. Cosets perfectly satisfy both requirements, as is well known in (super)conformal field theory (SCFT). Requiring N = 4 supersymmetry severely constrains the cosets in question, and it is one of our main purposes to determine which cosets are compatible with the N = 4 non-linear QSCA.
We first generalize the standard N = 0, 1 Goddard-Kent-Olive (GKO) [3] and N = 2 Kazama-Suzuki (KS) [4] coset constructions to the N = 4 case (sects. 2 and 3). Next, we require N = 4 supersymmetry in the general 2d non-linear scalar field theory and in the Wess-Zumino-Witten-Novikov (WZNW) models (sect. 4), which complements the N = 4 SCFT construction of sect. 3. As far as the linear N = 4 SCA's are concerned, Sevrin and Theodoridis [5] found an N = 4 generalization of the GKO and KS coset constructions in SCFT by imposing the 'large' linear N = 4 SCA in N = 1 superspace. They found coset solutions of the type W ⊗ SU(2) ⊗ U(1), where W is a Wolf space. We take a different approach by requiring a coset to support the non-linear N = 4 QSCA, and by using components. Our constraints allow a very simple geometrical interpretation, and have just the Wolf spaces as their solutions. Our SCFT results are perfectly consistent with the standard results about the 2d non-linear sigma-models (NLSM's) with N-extended supersymmetry. To solve our N = 4 constraints completely, we provide their alternative derivation, by constructing the relevant N = 4 WZNW models on Wolf spaces. Based on the triple system construction of the N-extended SCA's developed by Günaydin [6], we express the quaternionic structure, appearing in the N = 4 coset solution, in terms of the symplectic structure associated with the underlying Freudenthal triple system (FTS). Next, we promote the symmetry realized by the N = 4 QSCA to the local level in order to get the corresponding N = 4 string, and build the string BRST charge. Requiring its nilpotency is shown to lead to severe constraints on the cosets in question. Finally, we briefly discuss a connection to the known results [7,8] about the on- and off-shell structure of matter couplings in extended supergravities in four and two dimensions (sect. 5). Our conclusion and outlook are summarized in sect. 6. The defining equations of the N = 4 QSCA are collected in the Appendix.
Supersymmetric Coset Constructions
In this section we review some well-known standard constructions in 2d SCFT, including the KS construction for N = 2. This gives the necessary pre-requisite for the N = 4 SCFT coset construction to be discussed in the next section, and introduces our notation.
Affine Lie algebras and Sugawara construction
Let G be the Lie algebra associated with a semi-simple Lie group G, and let f^{abc} and |G| be its structure constants and dimension, respectively, a, b = 1, 2, . . . , |G|. Given a non-trivial representation t^a_{(r)} of G, consider the trace defining the normalization metric g^{ab}_{(r)}; this metric can always be diagonalized in the representation,

$$\mathrm{tr}\, t^a_{(r)} t^b_{(r)} \equiv g^{ab}_{(r)} = l_r\, \delta^{ab}\,. \tag{2.1}$$

In particular, as far as the adjoint (A) representation is concerned, the metric g^{ab}_A is known as the Cartan-Killing metric, and its canonical form is given by eq. (2.2). The Casimir eigenvalue C_r associated with the representation t^a_{(r)} is defined by

$$t^a_{(r)}\, t^a_{(r)} = C_r\, \mathbf{1}_{d_r}\,. \tag{2.3}$$

Eqs. (2.1) and (2.3) imply the relation C_r d_r = l_r |G|, where d_r is the dimension of the representation (r), whose states are labelled by α, β = 1, 2, . . . , d_r. The normalization of the representation (r) is therefore fixed by the coefficient l_r alone. If the sum in eq. (2.1) were restricted to the Cartan subalgebra of G, we would instead obtain a sum over the weights µ of the representation (r), where r_G is the rank of the group G. In particular, for the adjoint representation we have d_A = |G|, and the corresponding sum runs over the roots α of G. Let ψ be the highest root. Then the normalization-independent quantity h̃_G, built from the numbers n_L and n_S of long and short roots, is known as the dual Coxeter number. The roots of the classical Lie algebras come in at most two lengths. The Dynkin diagrams having only single lines have roots all of the same length; they correspond to the so-called simply-laced Lie algebras.
Let J^a(z) be the generators of the affine Lie algebra Ĝ associated with G at level k_G. The Sugawara stress tensor is defined by

$$T_G(z) = \frac{1}{\psi^2\,(k_G + \tilde h_G)}\, :J^a J^a:(z)\,,$$

and it has central charge

$$c_G = \frac{k_G\, |G|}{k_G + \tilde h_G}\,.$$

One can think of this CFT construction as being realized by the 2d WZNW theory based on the group G (see sect. 4 for more). As is well known, the level k_G must be a positive integer for unitary affine representations, as well as for the WZNW action to be well-defined.
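As a quick consistency check of the central charge formula, consider the simplest case G = SU(2):

```latex
% Quick consistency check: for G = SU(2) one has |G| = 3 and \tilde h_G = 2, so
\[
  c_{SU(2)_k} \;=\; \frac{3k}{k+2}\,, \qquad c_{SU(2)_1} = 1\,.
\]
```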
where eq. (2.19) has been used as a guide. Note that the 'improved' current J̃^i, rather than the 'naive' bosonic current Ĵ^i, has been used in eq. (2.29); this is possible since J̃^i(z) commutes with j^i(z). Most importantly, eq. (2.26) yields the desired orthogonal decomposition, since the so-defined T_{G/H}(z) and G_{G/H}(z) commute with J^i(z), J̃^i(z) and j^i(z). Explicitly, they read as in eq. (2.30), with a central charge expressed through k = k_G + h̃_G as above. In particular, for a symmetric space G/H, where f_{āb̄c̄} = 0, one finds simpler expressions.
KS construction
Having obtained the N = 1 super-Virasoro algebra associated with the N = 1 super affine Lie algebra, it is natural to ask about the conditions on the coset G/H which would allow more supersymmetries, i.e. N > 1. The case of N = 2 was fully addressed by Kazama and Suzuki [4]. Since the N = 2 extended SCA has a second supercurrent and an abelian U(1) current beyond the content of the N = 1 SCA, the N = 2 conditions on the coset G/H originate just from requiring their existence. The most general ansatz for the second supercurrent takes the form of eq. (2.34) [4], where h_{āb̄} and S_{āb̄c̄} are constants. The supercurrents G^(1) ≡ G_{G/H} and G^(2) have to satisfy the basic N = 2 SCA OPE, in which the N = 2 SCA current J(z) appears. This results in the N = 2 conditions (2.36) [4], and it also determines the N = 2 SCA U(1) current. The conditions (2.36) have a simple geometrical interpretation, which allows their solutions to be described in full [4]. In particular, condition (i) just means that h_{āb̄} is an almost complex structure on a hermitian manifold. Condition (ii) implies that the almost complex structure is covariantly constant with respect to the connection with torsion defined by the structure constants, whereas condition (iii) means that the almost complex structure is integrable, i.e. it is a complex structure indeed (equation (iii) is equivalent to the vanishing of the so-called Nijenhuis tensor [9]). Condition (iv) is the defining equation for S_{āb̄c̄}. Conditions (ii), (iii) and (iv) are trivially satisfied for symmetric spaces, which have f_{āb̄c̄} = S_{āb̄c̄} = 0. The hermitian symmetric spaces therefore represent an important class of solutions of eq. (2.36), and they were extensively studied [4]. A different class of N = 2 supersymmetric solutions is given by the kählerian coset spaces, which are in fact the only solutions if rank G = rank H [4]. In general, when rank G − rank H = 2n, n = 0, 1, 2, . . ., the coset G/[H ⊗ U(1)^{2n}] must be kählerian [4]. Hence, a solution to the N = 2 conditions exists for any hermitian coset space. Given a Cartan-Weyl decomposition of G, the complex structure maps the Cartan subalgebra of G into itself, whereas the generators corresponding to positive (negative) roots are eigenvectors with eigenvalues +i (−i).
N = 4 SCFT coset models
The KS construction delivers a large class of N = 2 SCFT's by the coset space method. We now wish to identify those of them which actually possess N = 4 supersymmetry. Sevrin and Theodoridis [5] already generalized the KS construction to the N = 4 case by requiring the existence of the 'large' linear N = 4 SCA having D(2, 1; α) as projective subalgebra. The N = 4 generators are supposed to act on a coset G/H, i.e. they have to commute with the H generators. Our approach to constructing N = 4 SCFT's by the coset space method is, however, different from the one adopted in ref. [5]. We are going to impose the non-linear N = 4 supersymmetry, because it is more general than the linear one represented by the 'large' N = 4 SCA. The 'large' linear N = 4 SCA is actually not a symmetry algebra, since it has subcanonical charges represented by four free fermions and one boson. The proper N = 4 supersymmetric symmetry algebra having only canonical charges of dimension 2, 3/2 and 1 was constructed by Goddard and Schwimmer [2], and we are going to call it the D̃(2, 1; α) quasi-superconformal algebra (QSCA) [11]. The N = 4 QSCA D̃(2, 1; α) is quadratically non-linearly generated. Given a SCFT representing the 'large' linear N = 4 SCA, one can always realize there the D̃(2, 1; α) QSCA too, since the generators of the latter can be non-linearly constructed from the generators of the former (see Appendix). The reverse may not always be possible. We should therefore expect more solutions to exist when imposing the D̃(2, 1; α) QSCA instead of the 'large' linear N = 4 SCA. In addition, imposing the QSCA seems more satisfactory from the viewpoint of N = 4 string theory: the most general algebra to be gauged is not the 'large' linear N = 4 SCA but the non-linear D̃(2, 1; α) QSCA! (See Appendix for a review of both algebras.)

The D̃(2, 1; α) QSCA comprises the stress tensor T(z), four dimension-3/2 supercurrents G^µ(z), and six dimension-1 currents J^{µν}(z) in the adjoint of SO(4) ≅ SU(2)_+ ⊗ SU(2)_−. The only non-trivial OPE of this QSCA defines an N = 4 supersymmetry algebra in the form of eq. (3.1),
where k_+ and k_− are the levels of the affine Lie algebras associated with SU(2)_+ and SU(2)_−, respectively. The tensor J^{µν} comprises two (anti)self-dual SU(2) triplets (M = 1, 2, 3), eq. (3.2), where the antisymmetric 4 × 4 matrices t^M_± satisfy the relations of eq. (3.3). The only non-linear term, :JJ:(w), on the r.h.s. of eq. (3.1) can be interpreted as the Sugawara stress tensor for the SO(4) currents. It provides the N = 4 'improvement' of the 'naive' stress tensor T(z).
Requiring the N = 4 QSCA supersymmetry, we expect the KS conditions (2.36) to be satisfied for each supersymmetry separately. This happens to be true indeed (see below). On dimensional grounds, the general ansatz (2.34) is valid for any supersymmetry, eq. (3.4), where h^µ_{āb̄} and S^µ_{āb̄c̄} are constants, µ = 0, 1, 2, 3. The OPE for a product of the supercurrents (3.4) takes the form of eq. (3.5), which is to be compared with eq. (3.1). To get T = T_{G/H} of eq. (2.30) on the r.h.s. of eq. (3.5), let us first look at the coefficients of the terms (z − w)^{−1} ĴĴ. This gives the first necessary condition, eq. (3.6). The supercharge G^0 = G_{G/H} of the N = 1 subalgebra is defined according to the last line of eq. (2.30), which implies eq. (3.7). Substituting eq. (3.7) into eq. (3.6) at µ = M and ν = 0 yields eqs. (3.8) and (3.9), which mean that each h^M_{āb̄} represents an almost complex hermitian structure. Altogether, according to eq. (3.6), they represent an almost quaternionic tri-hermitian structure.
The terms of the form (z − w)^{−1} Ĵψψ in eq. (3.5) have to deliver, in particular, the remaining terms in the stress tensor T_{G/H} of eq. (2.30). We find that this necessarily implies the two conditions (3.10) and (3.11), and Equation (3.10) determines the tensor S^µ_{āb̄c̄}. So far, we only required the relevant stress tensor to appear on the r.h.s. of the supersymmetry algebra in eq. (3.5), which resulted in the necessary conditions (3.6) and (3.13) for the cosets in question. These equations are also contained in the set of N = 4 conditions found by Sevrin and Theodoridis in their work [5]. This is not surprising, since they are not sensitive to the differences between the 'large' linear N = 4 SCA and the non-linear QSCA. These conditions are therefore very general, and they also have a very clear geometrical interpretation [9]. Namely, according to eq. (3.6), there should be three independent almost complex hermitian structures satisfying the quaternionic algebra, thus defining an almost quaternionic tri-hermitian structure on G/H. Eqs. (3.11) and (3.13) guarantee the H-invariance and the covariant constancy of that structure, and imply the vanishing of the Nijenhuis tensor [9]. In other words, the almost quaternionic structure is actually integrable, and defines a quaternionic tri-hermitian structure. The latter appears to be the only condition to be satisfied in order that a coset G/H could support an N = 4 SCFT. All quaternionic manifolds are known to be Einstein spaces of constant non-vanishing scalar curvature. The only known compact cases are the Wolf spaces to be discussed below.
Looking at the double-pole terms in eq. (3.5) and comparing them with eq. (3.1), we find the SU(2)_± currents of the QSCA in the form of eqs. (3.14) and (3.15), which generalize the results of ref. [10] to non-symmetric spaces. Simultaneously, the levels of the affine Lie subalgebras SU(2)_{k_±}, eq. (3.16), and the N = 4 QSCA central charge are also fixed. All the generators and parameters of the non-linear algebra are now determined, and it is straightforward (although quite tedious) to verify the rest of the D̃(2, 1; α) QSCA. No additional consistency conditions arise.
As far as the symmetric quaternionic spaces are concerned, eqs. (3.4), (3.14), (3.15) and the defining OPE's of the D̃(2, 1; α) algebra in the Appendix lead to the very simple expressions (3.18) for the generators of this non-linear algebra on such spaces, where d_{āb̄c̄d̄} are certain linear combinations of the structure constants — see the l.h.s. of eq. (4.17) below.
Given a simple Lie group G, there is a unique quaternionic symmetric space associated with it, called the Wolf space. To introduce this space, let (E_{ψ±}, H_ψ) be the generators of the su(2)_ψ subalgebra of G associated with the highest root ψ, eq. (3.19). The associated Wolf space is the coset

$$W = \frac{G}{H^{\perp} \otimes SU(2)_{\psi}}\,, \tag{3.20}$$

where H^⊥ is the centralizer of SU(2)_ψ in G. The cosets (3.20) for various groups G are of dimension 4(h̃_G − 2), and they have all been classified [12,13]. The non-symmetric spaces (G/H^⊥) ⊗ U(1) of dimension 4(h̃_G − 1) are also quaternionic. Therefore, both of these two different sets of cosets, eq. (3.21), support the non-linear N = 4 QSCA, but only the second one also supports the 'large' linear N = 4 SCA [5]. The list of compact Wolf spaces and the QSCA central charges of the associated N = 4 SCFT's are collected in Table 1. The only known non-compact quaternionic spaces are just non-compact analogues of those listed in Table 1, as well as some additional non-symmetric spaces found by Alekseevskii [13].

Table 1. The Wolf spaces, and the (Virasoro) central charges of the associated N = 4 SCFT's, with respect to the N = 4 D̃(2, 1; α) QSCA. Here k_+ = k_G ≡ k, k_− = h̃_G − 2, and c_GS = 6(k + 1)(h̃_G − 1)/(k + h̃_G) − 3.

The 'small' linear N = 4 SCA can be formally obtained from the 'large' linear N = 4 SCA in the limit k_− → ∞ and k_+ → 0. We are, however, not in a position to get SCFT's based on the 'small' linear N = 4 SCA from our N = 4 coset construction, since k_+ is the only parameter at our disposal according to eq. (3.16), which is not enough. This simple observation already makes a difference between the 'old' N = 4 strings [1], based on the 'small' linear N = 4 SCA, and the 'new' N = 4 strings based on the non-linear N = 4 D̃(2, 1; α) QSCA [11].
The unitary highest-weight (positive energy) representations of the non-linear algebra were investigated by Günaydin, Petersen, Taormina and van Proeyen [14]. They showed that the central charge values leading to rational N = 4 SCFT's (with finite numbers of different unitary representations) arise when k_− = 0, for the so-called massless representations labeled by the integer k_G and the half-integral highest weight of the su(2) subalgebra [14]. This implies h̃_G = 2 in the coset approach above. According to Table 1, no such unitary (massless) rational N = 4 SCFT's can appear in our construction.

In the previous section, we constructed the N = 4 coset models by using the techniques of 2d CFT. A natural question arises whether our models can be identified with certain 2d non-linear sigma-models (NLSM's). The CFT construction applies to the holomorphic sector of a 2d field theory, which corresponds to its left-moving degrees of freedom after the (inverse) Wick rotation. Therefore, by N = 4 supersymmetry above we actually mean (4, 0) supersymmetry. In this section, we want to compare the N = 4 SCFT construction with the standard two-dimensional N = 4 NLSM construction known in the literature (see ref. [15] for a recent review), and build the relevant N = 4 WZNW actions on Wolf spaces.
(4, 0) NLSM from the viewpoint of (1, 0) superspace
Since an arbitrary bosonic NLSM can be made supersymmetric with respect to N = 1 or (1, 0) supersymmetry, it seems to be quite natural to require an explicit (1, 0) supersymmetry of the (4, 0) supersymmetric NLSM in question. By 'explicit' we mean 'off-shell', in order to use superspace. It should be noticed however that only on-shell supersymmetry is required in SCFT. Since our N = 4 supersymmetry is going to be non-linearly realized in general, the standard (or harmonic) N = 4 superspace cannot be applied, at least naively, because it implies a linearly realized N = 4 supersymmetry, which is too restrictive for our purposes, as we already know from the previous section. To make contact with the standard results, we start from the N = 1 or (1, 0) supersymmetric 2d NLSM.
The (1, 0) superspace action for the most general (1, 0) NLSM reads [15] as in eq. (4.1), in terms of the (1, 0) scalar superfields Φ^i(z^≠, z^=, θ^+) taking their values in a D-dimensional target manifold M, and the (1, 0) spinor superfields Ψ^a_−(z^≠, z^=, θ^+) in a vector bundle K over M. In eq. (4.1), b_{ij}(Φ) is an antisymmetric tensor on M, while h_{ab}(Φ) and Ω_i{}^a{}_b(Φ) are a metric and a connection on the fibre K, respectively. It is therefore assumed that M must be a Riemannian manifold. In components, the action takes the form of eq. (4.2), where | denotes the leading component of a superfield. In eq. (4.2) the target space connection (4.4) and the fibre-valued curvature (4.5) have been introduced. The scalars F^a are auxiliary, and they vanish on-shell.
The NLSM of eq. (4.1) has manifest off-shell (1, 0) supersymmetry. Requiring further (non-manifest) supersymmetries implies certain restrictions on the NLSM couplings [9]. The form of the additional supersymmetries is fixed by dimensional analysis, eq. (4.6), where some tensors h^{(M)i}{}_j(Φ) and h^{(M)a}{}_b(Φ) have been introduced, and M = 1, 2, 3 (cf. eq. (3.4)). It should be noticed that the second line of eq. (4.6) is irrelevant on-shell, where ∇_+Ψ^b_− = 0. The 'canonical' (1, 0) supersymmetry can also be represented in the form (4.6), with h^{(0)i}{}_j = δ^i_j and h^{(0)a}{}_b = δ^a_b, which again, as in the previous section, invites us to switch to the four-dimensional notation µ = (0, M).
Requiring the on-shell closure of the supersymmetry transformations (4.6) on the scalar superfields Φ^i alone results in the same conditions (3.6) and (3.13) that appeared in the previous section, namely, (i) the existence of three independent complex structures satisfying the quaternionic algebra, and (ii) the vanishing of the Nijenhuis tensor! The on-shell closure on the spinor superfields Ψ^a_− yields an additional condition. Generally speaking, the conditions above are not enough to ensure the invariance of the action (4.1) with respect to the transformations (4.6), so that a difference with the CFT approach could arise. As is well known [9], the action (4.1) is actually invariant provided that, in addition, all the complex structures are hermitian and covariantly constant with respect to the connection (4.4), eq. (4.8). Therefore, the most general N = 4 supersymmetry conditions for the 2d NLSM's and for the SCFT's defined on cosets are exactly the same! In geometrical terms, (2, 0) supersymmetry of the NLSM requires the holonomy of the connection (4.4) to be a subgroup of U(D/2), and the vector bundle K to be holomorphic [9,15]. (4, 0) supersymmetry requires the holonomy to be a subgroup of Sp(D/4) ⊗ Sp(1), and the bundle K to be holomorphic with respect to each complex structure. The latter is known to lead to hyper-kählerian (b = 0) or quaternionic (b ≠ 0) manifolds, whose dimension is always a multiple of four. The holonomy conditions just mentioned easily follow from the vanishing commutator of the derivatives ∇_i acting on the complex structures h^µ, because of eq. (4.8).
An N = 4 gauged WZNW action for a Wolf space
The NLSM construction in the previous subsection is not explicit enough to accommodate the group-theoretical structure of the (S)CFT coset models. It is the gauged (super) WZNW actions that actually represent the relevant 2d field theories [16]. In ref. [6], Günaydin constructed the gauged N = 4 supersymmetric WZNW theories invariant under the 'large' linear N = 4 SCA. These gauged super WZNW theories are defined over G ⊗ U(1), and have a gauged subgroup H such that G/[H ⊗ SU(2)] is a Wolf space [6]. In this subsection, we modify the construction of ref. [6] to get the gauged super WZNW theories over the Wolf spaces themselves. They are going to be invariant under the non-linear N = 4 QSCA D̃(2, 1; α).
The standard WZNW action at level k is given by kI(g), eq. (4.9), where ∂B = Σ, ∂ = ∂_z, ∂̄ = ∂_z̄, and the field g(z, z̄) takes its values in the group G.
The gauged WZNW action reads as in eq. (4.10), where the gauge fields (A_z, A_z̄), taking their values in the Lie algebra H of a diagonal subgroup H of the global G_L ⊗ G_R symmetry of the WZNW action (4.9), have been introduced.
The gauged (1, 0) supersymmetric WZNW action for a coset G/H takes the form of eq. (4.11) [16,17], where Du = ∂u − [A_z, u], D̄u = ∂̄u − [A_z̄, u], and u is the H-valued infinitesimal gauge parameter. The on-shell (1, 0) supersymmetry of the action (4.11) is given by eq. (4.13). The action (4.11) is a good starting point for examining further supersymmetries. In particular, as was shown by Witten [18], that action admits (2, 0) supersymmetry when the coset space is kählerian, the canonical example being provided by the grassmannian manifolds SU(n + m)/[SU(m) ⊗ SU(n) ⊗ U(1)] [19]. A quantization of the action for kählerian cosets results in a subclass of the KS models (subsect. 2.4), namely, those which have rank G = rank H. According to our discussion in subsect. 2.4, the rest of the non-kählerian but still N = 2 supersymmetric KS models corresponds to the cases where G/H = K ⊗ U(1)^{2n}, n = 1, 2, . . ., with K a kählerian coset. It is trivial to generalize Witten's construction of the N = 2 gauged WZNW actions to the other (non-kählerian) cases, since the factor U(1)^{2n} is abelian and, therefore, merely contributes a free supersymmetric action for n scalar (2, 0) supermultiplets. Without loss of generality, we can restrict ourselves to the case n = 0 in our construction of the N = 4 actions, modulo adding a free action for some number of chiral scalar (4, 0) supermultiplets. (Such free chiral scalar N = 4 supermultiplets are still relevant in N = 4 string theory, since they contribute to the conformal anomaly; they play a role similar to that of the free scalars appearing in the toroidally compactified four-dimensional superstrings.)

To this end, we are going to elaborate the structure of the gauged super-WZNW theories on the Wolf spaces (3.20), by using Günaydin's results on coset realizations of the N = 4 extended SCA's over the so-called Freudenthal triple systems (FTS's) [6]. A convenient (Kantor) decomposition of the Lie algebra G is given by its decomposition into the eigenspaces of the grading operator H_ψ [20],

$$G = G^{(-2)} \oplus G^{(-1)} \oplus G^{(0)} \oplus G^{(+1)} \oplus G^{(+2)}\,,$$

where the H_ψ-eigenvalues appear as superscripts (in brackets). (The elements of G^{(−1)} can be put in one-to-one correspondence with the FTS, the latter being usually represented by a division algebra [21].) The one-dimensional spaces G^{(−2)} and G^{(+2)} comprise just E_{ψ−} and E_{ψ+}, respectively, whereas G^{(0)} can be identified with H^⊥ ⊕ H_ψ, where H^⊥ is the Lie algebra of H^⊥. Let E_{ā±} be the generators of G^{(±1)}, and H^⊥_{āc̄} the generators of H^⊥ in the Cartan-Weyl-type basis. The non-trivial commutation relations of G are then given by eq. (4.15) (the signs are correlated!). Here f_{āb̄c̄d̄} are the structure constants of H^⊥ ⊕ H_ψ, whose (Cartan-Weyl) normalization is fixed by the conditions [6]

$$f_{\bar a\bar a\bar c\bar d} = (\tilde h_G - 2)\,\delta_{\bar c\bar d}\,, \qquad f_{\bar a\bar b\bar b\bar c} = (\tilde h_G - 1)\,\delta_{\bar a\bar c}\,, \tag{4.16}$$

and which satisfy the identity (4.17). The matrix Ω^±_{āc̄} introduced in eq. (4.15) represents a natural symplectic structure associated with a Wolf space [21],

$$(\Omega^{\pm})^T = -\Omega^{\pm}\,. \tag{4.18}$$
Finally, the QSCA stress tensor takes the form given in ref. [6]. It is instructive to compare the N = 4 QSCA generators obtained from the N = 4 SCFT coset approach, eq. (3.18), with the N = 4 WZNW generators given above. First, we immediately see that they actually coincide, after an appropriate identification of the generators and the use of the crucial identity (4.17). Second, after identifying the generators as above, we find the quaternionic structure {h^µ}, µ = 0, 1, 2, 3, on a Wolf space in terms of the symplectic structure of the associated FTS. The first complex structure takes, of course, the canonical form, eq. (4.21), as it should. As far as the other two complex structures are concerned, we find eqs. (4.22) and (4.23). Summarizing this section, the N = 4 field theory (WZNW) approach leads to the same results as the N = 4 SCFT approach, although in a more tedious way.
New N = 4 strings
We are now in a position to discuss N = 4 strings propagating on Wolf spaces. The coset realizations of the N = 4 QSCA considered above give the relevant constraints on the N = 4 string physical states in the form

$$\psi^{\bar a}_{\pm}\, E_{\bar a \pm}\, |{\rm phys}\rangle = \psi^{\bar a}_{\pm}\, \Omega^{\mp}_{\bar a \bar c}\, E_{\bar c \pm}\, |{\rm phys}\rangle = 0\,,$$
$$E_{\psi\pm}\, |{\rm phys}\rangle = H_{\psi}\, |{\rm phys}\rangle = 0\,,$$
$$\Omega^{\pm}_{\bar a \bar c}\, \psi^{\bar a}_{\mp} \psi^{\bar c}_{\mp}\, |{\rm phys}\rangle = \psi^{\bar a}_{+} \psi^{\bar a}_{-}\, |{\rm phys}\rangle = 0\,, \tag{5.1}$$

where eqs. (4.21), (4.22) and (4.23) have been used. It is obvious that these constraints are very different from the ones proposed in ref. [1], and, therefore, they define a new theory of N = 4 strings. Note, in particular, the presence of the quartic fermionic term in the second line of eq. (5.1). Although the string constraints (5.1) look very complicated, the N = 4 QSCA they satisfy actually allows us to extract information about their content from the corresponding N = 4 SCFT.
The full invariant 2d action for this N = 4 string theory is obtained by promoting the superconformal symmetries of the N = 4 gauged WZNW action to the local level. As is usual in string theory, the string constraints (5.1) are to be in one-to-one correspondence with the proper on-shell N = 4 supergravity fields. In our case, a new W-type N = 4 supergravity seems to be needed [11] (following ref. [11], we call it D̃_4 supergravity); its gauge fields are

$$\left( e^a_{\alpha}\,,\; \chi^{\mu}_{\alpha}\,,\; B^{I\pm}_{\alpha} \right), \tag{5.2}$$

where e^a_α is a zweibein, χ^µ_α are four 2d MW gravitinos, and B^{I±}_α are six SU(2) ⊗ SU(2) gauge fields. The full action is obtained by adding to the rigid N = 4 action (4.11) the Noether coupling for N = 4 supersymmetry, and minimally covariantizing the result with respect to all the gauge fields in eq. (5.2) [11]. No additional terms are needed in the action (of course, as is always the case in the Noether procedure, the transformation laws of the fields receive proper modifications). Like in the 'old' invariant N = 4 string action found by Pernici and van Nieuwenhuizen [8], the rigid and local N = 4 models have the same geometry for the internal NLSM manifold parametrized by the scalar fields (i.e. quaternionic), and no constraints on the Sp(1) curvature of the quaternionic manifold arise, unlike in four dimensions [7]. Instead of concentrating on the action and the transformation laws [22], we proceed with the BRST quantization.
The gauge field content of the D̃_4 conformal 2d supergravity is balanced by the gauge symmetries as usual, which implies no off-shell degrees of freedom (up to moduli). In quantum theory, some of the gauge symmetries may become anomalous, and thereby some of the gauge degrees of freedom may become physical.
The BRST ghosts appropriate for this case comprise one anticommuting pair for the stress tensor, four commuting pairs for the supercurrents, and six anticommuting pairs for the SO(4) currents (cf. the dimensions λ = 2, 3/2, 1 and multiplicities n_λ = 1, 4, 6 below).
The quantum BRST charge contains a term involving the constant 'non-linearity' tensor Λ, which can be easily read off from the last term on the r.h.s. of the supersymmetry algebra (A.1c), after rewriting it in terms of the self-dual currents defined by eq. (3.2).
The quantum BRST charge (5.9) is nilpotent if and only if the conditions of eq. (5.10) hold [11], which imply, in particular,

$$c_{tot} \equiv c_{matter} + c_{gh} = \frac{6(k_+ + 1)(k_- + 1)}{k_+ + k_- + 2} - 3 + 6 = 0\,.$$

In calculating the ghost contributions to the central charge, we used the standard formula of conformal field theory [23],

$$c_{gh} = -2 \sum_{\lambda} (-1)^{2\lambda}\, n_{\lambda} \left(6\lambda^2 - 6\lambda + 1\right),$$

where λ is the conformal dimension and n_λ is the number of conjugated ghost pairs: λ = 2, 3/2, 1 and n_λ = 1, 4, 6, respectively.
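Plugging the stated values of λ and n_λ into this formula gives the total ghost contribution explicitly:

```latex
% Ghost contributions from the formula
% c_gh = -2 \sum_\lambda (-1)^{2\lambda} n_\lambda (6\lambda^2 - 6\lambda + 1):
\[
  c_{gh} \;=\; \underbrace{-26}_{\lambda = 2,\ n_\lambda = 1}
        \;+\; \underbrace{+44}_{\lambda = 3/2,\ n_\lambda = 4}
        \;+\; \underbrace{-12}_{\lambda = 1,\ n_\lambda = 6}
        \;=\; +6\,.
\]
```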
To cancel the positive ghost contribution, we therefore need a negative central charge (−6) for the matter representation. According to Table 1, the level k_G is negative for a negative central charge. This simple observation already excludes unitary representations of the N = 4 QSCA, and hence the physical space defined by the constraints (5.1) has little chance of being positive definite. Moreover, comparing eqs. (3.16) and (5.10) in the case of a Wolf space used as the background for N = 4 string propagation, we conclude that h̃_G = 0. Therefore, the group G has to be abelian, which leaves only (locally flat) tori as consistent N = 4 string backgrounds.
Conclusion and Outlook
Our main results are given by the title and the abstract. Contrary to the conventional approach to N = 4 strings based on the 'small' N = 4 SCA, we used the non-linear N = 4 supersymmetric QSCA, which is more general. We generalized the supersymmetric coset construction to that N = 4 case, constructed the relevant N = 4 gauged WZNW actions, and defined the BRST-quantized theory of N = 4 strings propagating on the Wolf spaces. Due to the non-linearity of the underlying gauged algebra, it is not possible to build new representations by 'tensoring' the known ones, similarly to representations of W algebras. Still, even that rather general framework did not save us from the disaster: the Wolf spaces as N = 4 string backgrounds are forbidden by the quantum BRST charge nilpotency conditions, as we showed. The only spaces allowed are just tori, which are locally flat. The result is rather surprising, since the Wolf spaces naturally appear as solutions in the N = 4 coset construction. Consistent backgrounds for N = 4 string propagation may also exist outside cosets.
To this end, we would like to comment on the issue of off-shell extensions of the N = 4 gauged WZNW actions. All our considerations above were merely on-shell, which was important in our general analysis. In particular, the super WZNW theories on the Wolf spaces are only invariant under the on-shell N = 4 supersymmetry, which is given by the on-shell current algebra and which is non-linearly realised. In terms of the transformation laws for the super WZNW fields, the non-linearity implies a certain field dependence of the 'structure constants' in the commutator of two N = 4 supertransformations. In order to get an off-shell description, if any, the necessary first step is that the N = 4 supersymmetry should be linearized. It has been known for some time [2,5,10] that this is indeed possible, although not for the super WZNW theories on the Wolf spaces W, but for those on cosets of the type W ⊗ SU(2) ⊗ U(1) (cf. eq. (3.21)), where the additional fields belonging to the SU(2) ⊗ U(1) group factor serve as the 'auxiliaries' to linearize the on-shell current algebra. Given the linear N = 4 supersymmetry, the natural way towards an off-shell approach would be to use N = 4 superspace. However, it is not known how to formulate the N = 4 super WZNW theory on a non-trivial Wolf space in N = 4 superspace, even without coupling to any 2d supergravity theory [26]. A related problem recently discovered [27] is the variety of ways to define an on-shell N = 4 scalar supermultiplet, as well as its off-shell realizations, in two dimensions. The N = 4 superspace constraints for scalar supermultiplets are of most importance, since they simultaneously determine the kinematics of the propagating fields. Clearly, there are still some unsolved problems around [22].
Appendix: the D̂(1,2;α) QSCA
The D̂(1,2;α) QSCA central charge is

c = 6(k+ + 1)(k− + 1)/(k+ + k− + 2) − 3 .

We define the α-parameter of the D̂(1,2;α) QSCA as the ratio of its two affine 'levels', α ≡ k−/k+, which measures the relative asymmetry between the two su(2) affine Lie algebras. When α = 1, i.e. k− = k+ ≡ k, the D̂(1,2;1) QSCA coincides with the SO(4) Bershadsky-Knizhnik QSCA [24,25]. The 'levels' and the central charges of the QSCA and of the linear 'large' N = 4 SCA differ: k±_large = k± + 1 and c_large = c + 3. The exceptional 'small' N = 4 SCA with the su(2) affine Lie algebra component [1] follows from the 'large' N = 4 SCA in the limit α → ∞ or α → 0, where either k− → ∞ or k+ → ∞, respectively. Taking the limit results in the central charge c_small = 6k, where k is the arbitrary 'level' of the remaining su(2) component. | 2014-10-01T00:00:00.000Z | 1995-01-31T00:00:00.000 | {
"year": 1995,
"sha1": "1aaf5fce2f826bd8c0d8996d7ab4a1465b534c69",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-th/9501140",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "1aaf5fce2f826bd8c0d8996d7ab4a1465b534c69",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
14271540 | pes2o/s2orc | v3-fos-license | Research into the Predictive Effect of TEG in the Changes of Coagulation Functions of the Patients with Traumatic Brain Hemorrhage
To analyze the predictive effect of the thrombelastogram (TEG) on changes in the coagulation function of patients with traumatic brain hemorrhage, and to provide a practical basis for clinical guidance, 54 cases were observed from Aug. 2013 to Oct. 2014. All patients received a TEG test at 1 d, 3 d and 7 d after traumatic injury. On statistical analysis, the coagulation function parameters K, α and Ma all differed significantly among the groups of patients, and the comparison between different time points within the same group also showed significant differences. Compared with controls, R and K were lowest at 1 d and highest at 3 d after injury, with no significant difference between the two groups at 7 d; conversely, α and Ma were highest at 1 d and lowest at 3 d after traumatic injury, with no significant difference at 7 d. Coagulation function thus differed between the groups: it was more severely deranged after traumatic brain hemorrhage, and its changes followed a regular pattern, i.e., a hypercoagulable state at 1 d after injury, a hypocoagulable state at 3 d, and a return to normal by 7 d.
Introduction
After traumatic brain hemorrhage, the coagulation function of patients may become dysregulated: the coagulation system is abnormally activated into a hypercoagulable state [1], which in turn triggers fibrinolysis. When a hypercoagulable state persists, the blood becomes viscous and blood flow slows, easily resulting in thrombosis [2]. Clinically, a fast and efficient test with high specificity is needed to predict changes in the coagulation function of patients with traumatic hemorrhage and to guide clinical treatment [3]. TEG, a technology for recording the blood coagulation process, emerged to meet this need. It is applied mainly to the study of the coagulation and fibrinolysis process [4], but also to the assessment of platelet function [5]. By analyzing the predictive effect of the thrombelastogram (TEG) on changes in the coagulation function of patients with traumatic brain hemorrhage, this research provides a practical basis for clinical guidance. The research results are reported as follows.
2 Data and method
2.1 General data
54 patients with traumatic hemorrhage admitted to our hospital from Aug. 2013 to Oct. 2014 were selected as research subjects, including 42 patients with traumatic brain hemorrhage and 12 patients with other traumatic hemorrhage; 40 were male and 14 female, with an age range from 23 to 62 years (average 35.6 ± 5.8). The patients with traumatic brain hemorrhage were classified as Group A, comprising 29 males and 13 females aged 24 to 61 years (average 36.8 ± 5.3). The GCS score of all these patients was not higher than 8: 15 cases scored 3-5 points and 27 cases scored 6-8 points. The causes of injury were as follows: 21 car accidents, 7 crush injuries, 7 falls, and 5 other injuries. CT examination showed that all were simple brain traumas, comprising 7 cases of simple brain contusion and laceration, 17 cases of brain contusion and laceration combined with diffuse axonal injury, 11 combined with primary brain stem injury, 3 combined with right frontal lobe hematoma, and 4 cases of epidural hematoma. Prothrombin time (PT), activated partial thromboplastin time (APTT) and fibrinogen coagulation time (CT) were all normal. The patients with other traumatic hemorrhage were classified as Group B, comprising 8 males and 4 females aged 23 to 62 years (average 36.6 ± 5.6). The causes of injury were as follows: 6 car accidents, 3 crush injuries, 2 falls, and 1 other injury. CT examination showed 6 cases of limb fracture, 4 cases of rib fracture combined with pulmonary contusion, and 2 cases of crush injury of the bilateral lower extremities. PT, APTT and CT were all normal. 15 subjects with neither coagulation disorder nor blood disease, consisting of inpatients of our hospital (5 cases) and healthy volunteers (10 cases), were selected as the control group (Group C), including 8 males and 2 females aged 23 to 72 years (average 31.7 ± 6.14). On statistical analysis, the general characteristics of the three groups of subjects showed no significant difference (P > 0.05); the research is therefore comparable and feasible.
Ethical approval:
The research related to human use complied with all relevant national regulations and institutional policies, was conducted in accordance with the tenets of the Helsinki Declaration, and was approved by the authors' institutional review board or an equivalent committee. Informed consent: Informed consent was obtained from all individuals included in this study.
Inclusion criteria
Patients admitted to hospital within 24 h of trauma, aged from 18 to 75 years, who were diagnosed with simple brain trauma on cranial CT or spinal MRI examination, were included in the research. All patients suffered from closed brain trauma, their GCS score was not higher than 8, and they had no liver dysfunction or coagulation disorder [6].
Exclusion criteria
Patients with congenital coagulation disorders or liver dysfunction were excluded from this research. In addition, patients who had taken any anticoagulant within 6 months prior to onset or had taken aspirin long term, as well as patients receiving antithrombin, were also excluded [7].
Method
For all patients in Groups A and B, venous blood samples (2 ml each time) were collected after overnight fasting on the mornings of 1 d, 3 d and 7 d after injury. For the subjects in Group C, blood samples were collected only once [8]. For all blood samples, the whole-blood recalcification method was adopted to test changes in coagulation function [9], as follows: the blood sample is injected directly into a cylindrical cup; while the blood is liquid, the oscillation of the cup cannot drive the movement of the central pin, so the recorded trace is a straight line; as the blood gradually starts to coagulate, the forming viscous fibrin strands couple the oscillating cup to the pin, so the pin and its alloy wires move with the cup [10]. The resulting electrical signal, after amplification, is recorded as an oscillating curve, i.e., the TEG.
Monitoring indicators
R is the reaction time: the period from the time the blood is injected into the cup to the time coagulation begins, corresponding to the initial generation of fibrin. Its normal reference range is 2 to 8 min. K is the coagulation time: the period from the end point of R to the time the curve amplitude reaches 20 mm, corresponding to thrombin generation; at this amplitude, the clot elasticity is 25. Its normal reference range is 1 to 3 min. α is the coagulation angle, which represents the rate of fibrin generation.
Its normal reference range is 55° to 78°. Ma is the maximum amplitude of the trace, which represents the maximum clot firmness. Its normal reference range is 51 mm to 69 mm [11]. The TEG analyzer (5000 Series) used in this research was provided by Haemoscope Corporation, USA.
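For illustration only, the reference ranges above can be collected into a small program that flags a single TEG reading; the function name, the labels and the decision rule are ours (a minimal sketch based on the hyper/hypocoagulable patterns described in the Results), not part of the original study.

REFERENCE_RANGES = {
    "R": (2.0, 8.0),        # reaction time, min
    "K": (1.0, 3.0),        # coagulation time, min
    "alpha": (55.0, 78.0),  # coagulation angle, degrees
    "Ma": (51.0, 69.0),     # maximum amplitude, mm
}

def classify_teg(r, k, alpha, ma):
    # Illustrative rule: short R/K with raised alpha/Ma suggests a
    # hypercoagulable state; the mirror-image pattern suggests a
    # hypocoagulable state (cf. the 1 d and 3 d findings reported here).
    values = {"R": r, "K": k, "alpha": alpha, "Ma": ma}
    low = {p for p, v in values.items() if v < REFERENCE_RANGES[p][0]}
    high = {p for p, v in values.items() if v > REFERENCE_RANGES[p][1]}
    if {"R", "K"} & low and {"alpha", "Ma"} & high:
        return "hypercoagulable"
    if {"R", "K"} & high and {"alpha", "Ma"} & low:
        return "hypocoagulable"
    return "normal" if not (low or high) else "indeterminate"

print(classify_teg(1.5, 0.8, 80.0, 72.0))   # -> hypercoagulable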
Statistical method
The statistical software SPSS 17.0 was used for all analyses. Measurement data are presented as mean ± SE. One-way ANOVA was used for the age comparison among groups. For the comparison of coagulation function among the groups of subjects at the various time points, a two-way multilevel (repeated measures) ANOVA was adopted. The rank sum test was used for enumeration data. P < 0.05 was considered statistically significant.
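The group x time analysis can be sketched in Python (the study used SPSS; the third-party pingouin package's mixed_anova is a stand-in we choose here, and the toy long-format table below is invented purely for illustration):

import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per patient per time point.
df = pd.DataFrame({
    "patient": sum([[i] * 3 for i in range(1, 7)], []),
    "group":   ["A"] * 9 + ["B"] * 9,      # A: brain trauma, B: other trauma
    "time":    ["1d", "3d", "7d"] * 6,
    "K":       [0.8, 3.4, 1.9, 0.9, 3.1, 2.0, 1.1, 2.8, 1.8,
                1.0, 2.9, 2.1, 1.2, 2.7, 1.9, 0.9, 3.0, 2.0],
})

# Two-way mixed ANOVA on K: between factor = group, within factor = time.
aov = pg.mixed_anova(data=df, dv="K", within="time",
                     subject="patient", between="group")
print(aov[["Source", "F", "p-unc"]])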
Comparison of R value of the patients in the three groups
On statistical analysis, the R value of the patients in the three groups differed significantly across time points (P < 0.05), but the inter-group comparison showed no significant difference (P > 0.05). There was a group × time interaction (P < 0.05). See Table 1 and Figure 1 for details. Thus, the impact of different injuries on the initiation of intrinsic coagulation did not differ significantly.
Comparison of K value of the patients in three groups
On statistical analysis, the K value of the patients in the three groups differed significantly across time points (P < 0.05), but the inter-group comparison showed no significant difference (P > 0.05). There was a group × time interaction (P < 0.05). See Table 2 and Figure 2 for details. Thus, while the overall inter-group effect was not significant, the interaction indicates that the impact of different injuries on coagulation function differed from one time point to another as time went on.
Comparison of α value of the patients in the three groups
On statistical analysis, the α value of the patients in the three groups differed significantly across time points (P < 0.05), and the inter-group comparison was also significant (P < 0.05). There was a group × time interaction (P < 0.05). See Table 3 and Figure 3 for details. Thus, the rate of clot formation changes after brain trauma and after other injuries, and returns to normal after 7 days. Compared with the control group, the R and K values of the patients in Groups A and B were reduced 1 day after injury (P < 0.05), while the Ma and α values were increased (P < 0.05), indicating a hypercoagulable state; the R and K values rose significantly after 3 days (P < 0.05), and the Ma and α values decreased (P < 0.05), indicating a hypocoagulable state; all parameters returned to normal after 7 days (P > 0.05).
Comparison of Ma value of the patients in three groups
On statistical analysis, the Ma value of the patients in the three groups differed significantly across time points (P < 0.05), and the inter-group comparison was also significant (P < 0.05). There was a group × time interaction (P < 0.05). See Table 4 and Figure 4 for details. Thus, different injuries can cause coagulation dysfunction, and blood coagulation returned to normal after 7 days.
Discussion
After brain trauma, brain tissue releases large amounts of thromboplastin, which activates extrinsic coagulation; the coagulation dysfunction of the injured area is more pronounced [12]. Monitoring and analyzing the coagulation function of patients with traumatic brain hemorrhage at different time points with the various thrombelastogram parameters indicates that injury at any site may cause coagulation dysfunction, and that the coagulation dysfunction of patients with traumatic brain hemorrhage is more durable and more obvious [13]. A possible explanation is that, compared with other injuries, the arachidonic acid content of brain tissue decreases more markedly, which significantly reduces cyclooxygenase activity and platelet function, thereby prolonging the coagulation time of patients with traumatic brain hemorrhage and ultimately causing coagulation dysfunction [14]. In addition, the adenosine diphosphate content differs between brain trauma and other injuries, and returns to normal after 3 days. This study shows that the thrombelastogram parameters of patients with traumatic brain hemorrhage become abnormal 6-8 hours after injury, showing a hypercoagulable state, because the injured brain tissue or vascular endothelium and the disrupted blood-brain barrier cause brain tissue to secrete and release large amounts of tissue factor, activating the coagulation system [15].
The study results show that the blood of patients with traumatic brain hemorrhage is in a hypocoagulable state 3 days after injury: the Ma and α values on the thrombelastogram are below the normal reference ranges, indicating that platelet function and count are reduced and that the rate of fibrin generation and coagulation is clearly lower than in the control group. After injury, massive bleeding consumes large numbers of platelets; massive blood transfusion dilutes the blood, so the rate of systemic infection increases and megakaryocytes in the marrow cannot mature normally; moreover, drug effects also suppress normal platelet function [16]. The results indicate that the coagulation function of patients with traumatic brain hemorrhage changes from a hypercoagulable to a hypocoagulable state and then gradually normalizes; the clinical significance of the thrombelastogram is to monitor the coagulation function of patients, follow their condition and better prepare for the prognosis.
In this study, 26 patients had blood in a hypercoagulable state at the early stage of treatment; in 12 patients the blood shifted to a hypocoagulable state, with reduced platelet function and count. Therefore, antifibrinolytic hemostatics should be used with caution [17] in patients with traumatic brain hemorrhage at the early treatment stage, with real-time monitoring of each coagulation parameter and adjustment of dosage and timing according to changes in the indices. In general, the thrombelastogram is of great benefit for observing the coagulation dysfunction or fibrinolytic malfunction caused by traumatic brain hemorrhage; taking advantage of its diagnostic value allows earlier treatment, improved survival and a shorter duration of coagulation dysfunction. | 2018-04-03T04:04:10.405Z | 2015-12-17T00:00:00.000 | {
"year": 2015,
"sha1": "8270e8409e423cea3710167e1eba33a98ac13b87",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1515/med-2015-0069",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8270e8409e423cea3710167e1eba33a98ac13b87",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
268643194 | pes2o/s2orc | v3-fos-license | Role of Breast Cancer Risk Estimation Models to Identify Women Eligible for Genetic Testing and Risk-Reducing Surgery
Hereditary breast and ovarian cancer (HBOC) syndrome is responsible for approximately 10% of breast cancers (BCs). The HBOC gene panel includes both high-risk genes, i.e., genes conferring a four-times higher risk of BC (BRCA1, BRCA2, PALB2, CDH1, PTEN, STK11 and TP53), and moderate-risk genes, i.e., genes conferring a two- to four-times higher risk of BC (BARD1, CHEK2, RAD51C, RAD51D and ATM). Pathogenic germline variants (PGVs) in HBOC genes confer an absolute risk of BC that changes according to the gene considered. We illustrate and compare different BC risk estimation models, also describing their limitations. These models allow us to identify women eligible for genetic testing and possibly to offer surgical strategies for primary prevention, i.e., risk-reducing mastectomies and salpingo-oophorectomies.
Introduction
Approximately 10% of all women with breast cancer (BC) are clinically related to hereditary breast and ovarian cancer (HBOC) syndrome [1][2][3], defined by the identification of pathogenic germline variants (PGVs) in HBOC-related genes. Although a family history of first-degree BC is significantly associated with an increased risk of BC in patients with HBOC syndrome, a suggestive family history may not be present [4].
In a single-center retrospective analysis, after BRCA1 and BRCA2 PGVs, CHEK2 PGVs were by far the most prevalent, followed by ATM, PALB2, and TP53 PGVs [9]. However, it is known that the lifetime breast cancer risk for carriers of CHEK2 PGVs is only 25-30%, equal to that of ATM PGV carriers but lower than that of PALB2 (40-60%) and TP53 (40%) PGV carriers.
Patients with early-onset BC have the highest probability of being carriers of BRCA1 or BRCA2 PGVs. Conversely, patients with later-onset BC have the highest probability of being carriers of non-BRCA1/BRCA2 PGVs [10]. In the case of a positive oncological family history, the prevalence of BRCA1/BRCA2 PGVs is comparable to the prevalence of PGVs in HBOC-related genes other than BRCA1/BRCA2. On the other hand, in the absence of an oncological family history, PGVs in HBOC-related genes other than BRCA1/BRCA2 predominate and, being of low penetrance, cause oncological disease less frequently. Therefore, these women may never develop tumors during their lifetime [10].
Cancer Risks Associated with PGVs in HBOC Genes
The cancer risks associated with PGVs in HBOC genes are listed in Table 2. Carriers of BRCA1/BRCA2 PGV show more than a 50% risk of developing BC during their lifetime.
Carriers of BRCA1 PGVs show: (i) an absolute risk of BC higher than 60%; (ii) an absolute risk of male BC ranging from 0.2 to 1.2%; and (iii) an absolute risk of epithelial ovarian cancer ranging from 39 to 58%. On the other hand, carriers of BRCA2 PGVs display: (i) an absolute risk of BC higher than 60%; (ii) an absolute risk of male BC ranging from 1.8 to 7.1%; and (iii) an absolute risk of epithelial ovarian cancer ranging from 13 to 29% [5,11,12].
The cumulative risk of BC at 80 years of age is 72% for BRCA1 carriers and 69% for BRCA2 carriers. While the cumulative risks for BRCA1/BRCA2 carriers through age 80 are similar, the cumulative risks up to age 50 are higher for BRCA1 carriers.
For carriers of BRCA1 PGV, peak incidence occurred in the 41-50 year age group, while for carriers of BRCA2 PGV, peak incidence was in the later 51-60 year age group [13].
BRCA1-associated BCs frequently show a ductal histotype, with negative expression of receptors for estrogen (ER) and progesterone (PR) and absence of HER2/neu amplification ("triple negative" phenotype). Instead, BRCA2-associated BCs have a luminal-like profile, i.e., positivity for ER and/or PR and absence of HER2/neu amplification, as well as a slightly increased incidence of the lobular histotype [5,11].
BC patients carrying BRCA1/BRCA2 PGVs appear less likely to die than non-carrier BC patients. This advantage may be due to the increased sensitivity of BCs with BRCA1/BRCA2 PGVs to many chemotherapeutic agents, such as platinum-based drugs, or to their increased sensitivity to immune attack [14].
The absolute lifetime risk of BC in women with CHEK2 PGVs is 25-30%, and is higher in women with a stronger oncological family history than in those without [4,12]. Most individuals with CHEK2 PGVs develop luminal-like BCs [15].
For carriers of ATM PGVs, risk is described as: (i) an absolute risk of BC ranging from 20 to 40% and (ii) an absolute risk of epithelial ovarian cancer ranging from 2 to 3% [4,12,15]. The missense c.7271T>G pathogenic variant of ATM is associated with a significant (three- to four-fold) increase in BC risk [16,17]. Most patients with ATM PGVs develop luminal-like BCs, both HER2-negative and HER2-positive [15].
In carriers of PALB2 PGVs, risk is described as: (i) an absolute risk of BC ranging from 40 to 60%; (ii) a 10% absolute risk of contralateral BC; (iii) a 0.9% absolute risk of male BC; and (iv) an absolute risk of ovarian cancer ranging from 3 to 5%. One study showed a lifetime risk of more than 5% of developing ovarian cancer [4,12].
PALB2 closely interacts with BRCA1 and BRCA2 in the homologous recombination (HR) DNA repair pathway, where the recruiting sequence at the DNA level is: BRCA1, PALB2, and then BRCA2. This suggests that PALB2 and BRCA2 may be associated with similar carcinoma risks, because BRCA2 needs PALB2 to be recruited in HR repair. The age-specific incidence of BC follows a pattern similar to that observed in BRCA2-mutant patients, where incidence increases with age and rises steadily from age 50 onward [18].
For carriers of TP53 PGVs, an absolute risk of BC of more than 40% is estimated. TP53 PGVs appear to be associated with only about 1% of hereditary BC cases. Retrospective studies have shown that TP53 PGVs are significantly associated with HER2-positive BC, regardless of hormone receptor status (positive or negative).
For carriers of CDH1 PGVs, an absolute risk of BC ranging from 40 to 60% is estimated. CDH1 PGVs are primarily associated with lobular BC.
For carriers of PTEN PGVs, an absolute risk of BC ranging from 40 to 60% and an average age at diagnosis of 38-50 years are described.
For carriers of STK11 PGV, risk is reported as: (i) an absolute risk of BC ranging from 32 to 54% and (ii) an absolute risk of nonepithelial ovarian cancer (Sertoli-Leydig) from 10 to 20% [4,12].
For carriers of BARD1 PGV, an absolute risk of mostly triple-negative BC ranging from 20 to 40% is observed [4].
Besides the genes of the HBOC panel, other genes are correlated with BC. Lynch syndrome, attributed to PGVs in one of the four mismatch repair (MMR) genes (MLH1, MSH2, MSH6, and PMS2) or to EPCAM gene deletions, is also associated with an approximately 15% absolute risk of BC.
Moreover, NF1 PGV carriers show a 20-40% absolute risk of BC, and BRIP1 PGV carriers a 5-15% absolute risk of developing epithelial ovarian cancer. For the latter, data to define the absolute risk of BC are, to date, not sufficient [12].
Gail/BCRAT Model
The Gail model (later modified into the BC Risk Assessment Tool, BCRAT) provides a numerical estimate of a woman's risk of developing BC over 5 years and over her lifetime, compared with the average risk of a woman of the same age. A woman who has a 5-year relative risk ≥ 1.66 is considered worthy of preventive measures. The Gail model is based primarily on non-genetic risk factors, with limited information on family history. It takes into consideration age, age at menarche, age at first full-term birth, number of first-degree relatives diagnosed with BC, number of previous breast biopsies negative for BC, and race/ethnicity. The major limitation of the Gail model is the inclusion of first-degree relatives only, so the risk is underestimated in 50% of families with cancer in the paternal line [22,23]. The BCRAT (Figure S1) (https://bcrisktool.cancer.gov, accessed on 27 February 2024) is validated for patients aged 35 years and older in many different populations [24][25][26], but it is not very useful for women with a biopsy diagnosis of atypia, as it underestimates the overall risk [27]. The Gail model has been validated in multiple studies, undergoes periodic updates based on changes in BC incidence data, and considers competing risks of mortality other than BC. Limitations of the Gail model include: the inability to be used for individuals < 35 years old; limited use in individuals of non-European (non-white) ethnicity; inclusion of female first-degree relatives only (paternal family history excluded); and lack of inclusion of the age of relatives' BC diagnoses, of family history of cancer diagnoses other than BC, and of prior mantle radiation therapy. In addition, it underestimates the risk of developing BC in individuals with PGVs in HBOC-related genes, those with a strong family history of BC, those with a family history of ovarian cancer in the maternal or paternal lineage, and those with atypical hyperplasia [28].
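By way of illustration, the 1.66 decision threshold can be written as a one-line gate (our sketch; the 5-year relative risk itself must come from the BCRAT tool, whose model coefficients are not reproduced here):

GAIL_RR_THRESHOLD = 1.66   # 5-year relative risk cut-off cited above

def warrants_preventive_measures(five_year_rr: float, age: int) -> bool:
    # The BCRAT is validated only for ages 35 and older (see above).
    if age < 35:
        raise ValueError("BCRAT is not validated for age < 35")
    return five_year_rr >= GAIL_RR_THRESHOLD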
Claus Model
The Claus model only takes into consideration family history, i.e., history of BC in the maternal and paternal lines, first- and second-degree relatives, and age at diagnosis of BC. The major disadvantage of the Claus model is the lack of inclusion of non-hereditary risk factors. Furthermore, the Claus tables reflect the risk of the female American population in 1980, while the current risk in both the United States and Europe is higher. In conclusion, the Claus model underestimates the risk [29].
BRCAPro Model
The BRCAPro software package uses both the Claus and Ford models [30]. The latter is based on personal and family history of BC and ovarian cancer to identify the presence of BRCA1/BRCA2 PGVs [31].
BRCAPro is a Bayesian computer program, or statistical model, for calculating an individual's probability of being a carrier of BRCA1/BRCA2 PGVs, based on the type of cancer and the history of BC and/or ovarian cancer among first- and second-degree relatives. The BRCAPro model may also be used to evaluate the risk of BC over time [32]. However, no non-hereditary risk factors are included in the model [33]. Its limitations are the underestimation of carrier frequency in families with ovarian cancer and in families with prostate cancer, the non-applicability to ethnic minorities, the impossibility of incorporating third-degree relatives, and the non-inclusion of genes other than BRCA1/BRCA2 [28].
BCSC
The BCSC Risk Calculator is an interactive tool designed to estimate a woman's 5- and 10-year risk of developing invasive BC. Calculations of the absolute risk of BC are based on five factors: age, race/ethnicity, family history of BC in a first-degree relative (mother, sister, or daughter), history of a breast biopsy with diagnosed benign breast disease, and breast density in BI-RADS® (radiological assessment of breast tissue density). Unknown values for race/ethnicity and family history are allowed. The calculator is not applicable to women who meet any of the following criteria: age younger than 35 years or older than 74 years, prior diagnosis of BC, prior diagnosis of ductal carcinoma in situ (DCIS), prior augmentation mammaplasty, or prior mastectomy [34]; relatives beyond the first degree are not considered [28].
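The applicability rules of the BCSC calculator quoted above can be summarized in a short check (a sketch with field names of our choosing; the risk model itself is not reproduced):

def bcsc_applicable(age: int, prior_bc: bool, prior_dcis: bool,
                    prior_augmentation: bool, prior_mastectomy: bool) -> bool:
    # Valid age window per the calculator's criteria cited above.
    if age < 35 or age > 74:
        return False
    # Any prior BC, DCIS, augmentation mammaplasty or mastectomy
    # makes the calculator inapplicable.
    return not (prior_bc or prior_dcis or prior_augmentation
                or prior_mastectomy)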
Tyrer-Cuzick/IBIS Model
The Tyrer-Cuzick model (also known as the IBIS risk assessment tool) was developed to evaluate the individual risk of BC over time, but also provides an estimate of the probability of finding BRCA1/BRCA2 PGVs. This model was the first able to integrate family history with surrogate measures of exposure to oestrogen and with benign breast pathology (atypical hyperplasia). In various validation processes, this model is the only one that achieved the best prediction estimates. The Tyrer-Cuzick model considers body mass index (BMI), age at menarche, parity, age at menopause (if applicable), history of benign breast pathology associated with increased risk of BC (atypical hyperplasia, LCIS), history of ovarian cancer and male BC, use of hormone replacement therapy, and family history (including BC and ovarian cancer, Ashkenazi Jewish descent, and genetic testing results if performed) [35]. In the most recent version (version 8) of the IBIS risk assessment tool (http://www.ems-trials.org/riskevaluator, accessed on 27 February 2024), mammographic breast density is also included [36]. This computer model provides a personalized assessment of lifelong (up to 85 years) BC risk and 10-year risk estimates. The test can be used in individuals < 35 years old and calculates the risk of BRCA1/BRCA2 PGVs. The family history assessment includes first-, second-, and third-degree relatives (first cousins). It considers competing risks of mortality other than BC, but does not consider the risk from mantle radiotherapy. It overestimates the risk of developing BC in Hispanic individuals (because this model has been validated primarily in white individuals in the United Kingdom), in cases of atypical hyperplasia, in lobular carcinoma in situ, and in dense breasts [28]. The IBIS calculator, unlike the BCRAT, can be used to "qualify" women for additional BC screening with MRI. However, this model tends to overestimate the risk for women with a biopsy diagnosis of atypia and, therefore, should not be used in this population [37].
Many studies have shown that the IBIS model, compared to the Gail and Claus models, is able to identify the highest percentage of the population at high risk [38][39][40][41].
BOADICEA/CanRisk Model
The "Breast and Ovarian Analysis of Disease Incidence and Carrier Estimation Algorithm" (BOADICEA) is another model for calculating the probability of BRCA1/BRCA2 PGVs as well as the probability of BC occurrence. This model incorporates the assessment of family history of BC, ovarian cancer, prostate cancer, male BC, and pancreatic cancer with the following individual traits: sex; age at cancer diagnosis or age at death of all family members; genetic factors (BRCA1, BRCA2, PALB2, CHEK2, and ATM PGVs; polygenic risk score); height, body mass index, parity, age at first birth, age at menarche, age at menopause, use of oral contraceptives, use of hormone replacement therapy, alcohol intake; mammographic density (BI-RADS); histopathology of BC, i.e., ER, PgR, HER2/neu, CK14 and CK5/6 status; and demographic factors (country of origin, year of birth, ethnicity, such as Ashkenazi Jewish descent) [42].
The CanRisk Tool (BOADICEA v6) (https://canrisk.org, accessed on 27 February 2024) is a model for calculating BC and ovarian cancer risks based on family history and genotypes for PGVs in BRCA1/BRCA2, PALB2, CHEK2, ATM, BARD1, RAD51C, and RAD51D, and it incorporates the effects of common genetic variants (summarized as polygenic risk scores, PRS), lifestyle, hormonal and clinical characteristics, breast density, and disease histopathology. It is validated prospectively for predicting both carrier probabilities and subsequent cancer risk. It does not consider personal risk factors such as breastfeeding, previous breast biopsy and atypia, and does not include risks due to mantle radiotherapy [28]. It is the first freely accessible cancer risk prediction program to carry the European Community (EC) mark, indicating compliance with the applicable safety and performance requirements for use by healthcare professionals within the European Economic Area (EEA). BOADICEA is currently recommended by several national agencies and organizations to determine eligibility for high-risk BC screening and for screening of BRCA1/BRCA2 PGVs, and to inform BC risk management. These include the UK NICE Guidelines (https://www.nice.org.uk/guidance/cg164, accessed on 27 February 2024), the American Cancer Society, the Ontario Breast Screening Program (https://www.cancercareontario.ca/en/guidelines-advice/cancer-continuum/screening/breast-cancer-high-risk-women, accessed on 27 February 2024), and the eviQ Australian guidelines for healthcare professionals (https://www.eviq.org.au/cancer-genetics/adult/risk-management, accessed on 27 February 2024). BOADICEA's CanRisk tool has also been incorporated into the NCCN guidelines for familial breast/ovarian cancer [43].
Myriad Model
The BRCA Risk Calculator (https://webapps.myriad.com/brca-risk-calculator/calcembed.html, accessed on 27 February 2024) is based on periodically updated data representing the characterization of deleterious PGVs by Myriad Genetic Laboratories through a clinical testing service on approximately 10,000 women. Data obtained through tests performed as part of specific research protocols are not included. Data are obtained from a routine laboratory request form and have not been independently verified by Myriad Genetic Laboratories. The calculator asks for the woman's gender, whether the woman has Ashkenazi Jewish ancestry, whether she has been diagnosed with BC, whether anyone in her family has been diagnosed with BC under age 50, and whether someone in her family has been diagnosed with ovarian cancer [44].
The main models for estimating BC risk are shown in Table 3. Other risk models designed to predict the probability that an individual is a carrier of BRCA1/BRCA2 PGVs are the PENN II model, the Lambda model, and the Couch model. The PENN II risk model (https://pennmodel2.pmacs.upenn.edu/penn2/, accessed on 27 February 2024) can be used to predict the pre-test probability that an individual has inherited BRCA1/BRCA2 PGVs. This model does not predict BC risk. In general, individuals with at least a 5-10% chance of having a PGV in HBOC-related genes are considered good candidates for genetic testing. For the maternal and paternal sides, the model asks for the presence of Ashkenazi Jewish ancestry, the number of women in the family diagnosed with BC and synchronous ovarian cancer, the number of women in the family diagnosed with ovarian or tubal cancer in the absence of BC, the number of cases in the family diagnosed with BC before the age of 50, the age of the youngest case of BC, the presence of mothers and daughters diagnosed with BC, the number of women with bilateral BC, the number of male BC cases, the presence of pancreatic cancer cases in the family, the number of prostate cancer cases in the family, and the closest relative with BC or ovarian cancer [46]. The Lambda model estimates the probability that an Ashkenazi Jewish woman is a BRCA1/BRCA2 PGV carrier based on a point system, considering personal family history, first- or second-degree relatives with BC and ovarian cancer, age at diagnosis, and bilateral BC in the proband [47]. The Couch model was designed to provide probability estimates for the detection of BRCA1 PGVs in women with a family or personal history of BC, ovarian cancer, or both [48].
Surgical Strategies for Primary Prevention of BC
In 2013, in the weeks following the statements of the actress Angelina Jolie, there was exponential interest in the possibility, for carriers of BRCA1/BRCA2 PGVs, of undergoing prophylactic bilateral mastectomy [49].
Prophylactic bilateral mastectomy is associated with a substantial reduction in the incidence of BC in carriers of BRCA1 or BRCA2 PGVs [50][51][52]. Prophylactic bilateral mastectomy reduces the risk of BC by approximately 95% in women with previous or concomitant prophylactic bilateral oophorectomy, and by approximately 90% in women with intact ovaries [53].
The benefits of bilateral prophylactic mastectomy are probably greater if performed starting from the age of 30 (up to the age of 30, the cumulative risk of BC for BRCA1/BRCA2 PGVs is only 4%); however, over 55 years of age, the evidence of benefit is weak.
To date, the available data suggest that nipple-sparing mastectomy is the preferred surgical technique compared with total mastectomy or skin-sparing mastectomy, thanks to its cosmetic outcomes, despite the possibility of leaving residual breast tissue. It follows that this technique requires continuous surveillance with gadolinium-enhanced MRI [4,54,55]. Residual breast glandular tissue has been reported in up to 100% of patients and was found to be mainly associated with the surgeon's experience [56]. However, in a study of 575 women at moderate to high risk of developing BC treated with prophylactic nipple-sparing subcutaneous mastectomy, only six women developed BC on the chest wall, with only one tumor in the nipple [54,55].
Prophylactic bilateral mastectomy was associated with lower mortality than surveillance for carriers of BRCA1 PGVs, but for carriers of BRCA2 PGVs, prophylactic bilateral mastectomy may lead to BC-specific survival similar to that of surveillance [57].
The survival benefit was observed primarily in young (<40 years) women with primary BC characterized by differentiation grade 1/2 and/or without a triple-negative phenotype, and not treated with adjuvant chemotherapy. Contralateral risk-reducing mastectomy is associated with improved overall survival in carriers of BRCA1/BRCA2 PGVs with a history of primary BC [58].
Breast-conserving surgery is an option for carriers of BRCA1/BRCA2 PGVs who are willing to continue high-risk screening [60].
Prophylactic bilateral mastectomy is associated with frequent adverse effects, including decreased sensitivity to touch, pain, tingling, infection, oedema [61], decreased satisfaction with body image, and reduced sexual sensation. In seventeen case series reporting adverse events from prophylactic bilateral mastectomy with or without reconstruction, the reported rates of unanticipated reoperations ranged from 4% in those without reconstruction to 64% in participants with reconstruction [62].
It is necessary to discuss with women the degree of protection offered by prophylactic bilateral mastectomy, the reconstruction options, the risks, the residual risk of BC with age, and life expectancy, and to address the psychosocial and quality-of-life aspects. Although the timing of reconstruction in some patients with BC remains controversial, immediate reconstruction is appropriate for many patients undergoing prophylactic bilateral mastectomy. Occult carcinoma is found in less than 3% of women and is usually at an early stage, so postoperative therapy is rarely necessary. The benefits of an immediate rather than delayed approach to reconstruction are substantial. Thinking on the psychological impact of the timing of reconstruction has varied. It was initially thought to be advantageous for a woman to live with a mastectomy defect for several years so that she could better appreciate her reconstruction, even if the cosmetic result was not optimal. As reconstructive techniques improved, it was felt that the psychological benefit of emerging from a mastectomy with reconstruction outweighed the need for a waiting period. A study compared the preoperative psychological characteristics of women undergoing immediate versus delayed reconstruction. Those seeking immediate reconstruction had greater impairment of emotional well-being, higher levels of anxiety, and more general mental health complaints than those opting for delayed reconstruction. This suggests that the availability of immediate reconstruction is particularly important for women's mental health [54,55]. Mastectomy should therefore always be offered with immediate breast reconstruction, paying attention to the woman's psychological sphere [61,63].
Risk-reducing salpingo-oophorectomy is closely related to the BC risk reduction in carriers of BRCA1/BRCA2 PGVs, but the year of data publication is a critical interaction factor, and it should be noted that more recent studies have failed to find a significant reduction in BC risk associated with risk-reducing salpingo-oophorectomy [64]. The apparently smaller effect on mortality in carriers of BRCA2 PGVs compared with BRCA1 PGVs may be due to the lower risk of ovarian cancer in carriers of BRCA2 PGVs as well as to the more aggressive biological characteristics of BRCA1-associated BC [55,[65][66][67][68][69][70][71]. Kauff et al. demonstrated that risk-reducing salpingo-oophorectomy is associated with an approximately 85% risk reduction of BRCA1-associated gynecologic cancer and a 72% risk reduction of BRCA2-associated BC. In contrast, the protection offered by risk-reducing salpingo-oophorectomy against BRCA1-associated BC and against BRCA2-associated gynecological cancer did not reach statistical significance [72]. All studies of risk-reducing salpingo-oophorectomy and BC risk are observational in nature and subject to various forms of bias and confounding, thus limiting the conclusions that can be drawn about causality. Early studies supported a statistically significant protective association of risk-reducing salpingo-oophorectomy with the risk of BC, which is reflected in several international guidelines that recommend considering risk-reducing salpingo-oophorectomy in premenopausal women to reduce the risk of BC. However, these landmark studies were hampered by several important biases, including informative censoring, which may have led to overestimation of any protective benefit. Contemporary studies, specifically designed to reduce some of these biases, have produced contradictory results. Taken together, there is no clear and consistent evidence for a role of premenopausal risk-reducing salpingo-oophorectomy in reducing the risk of BC in carriers of BRCA1/BRCA2 PGVs. More recent evidence does not support a role for risk-reducing salpingo-oophorectomy in decreasing BC risk for carriers of BRCA1/BRCA2 PGVs [73].
Preventive bilateral oophorectomy was also associated with an 80% risk reduction of ovarian, fallopian tube, or peritoneal cancer in carriers of BRCA1/BRCA2 PGVs and a 77% reduction in all-cause mortality [68,74].
The link between BRCA1/BRCA2 PGVs and uterine cancer is unclear. Therefore, although risk-reducing salpingo-oophorectomy is the standard treatment for women with BRCA1/BRCA2 PGVs, the role of concomitant hysterectomy is controversial. This risk should be considered when discussing the benefits and risks of hysterectomy at the time of risk-reducing salpingo-oophorectomy in carriers of BRCA1 PGVs [76].
Therefore, risk-reducing salpingo-oophorectomy is recommended once the desire for pregnancy is completed in women aged between 35 and 40 years with BRCA1 PGVs, and in women aged between 40 and 45 years with BRCA2 PGVs [4].
Risk-reducing salpingectomy with delayed oophorectomy has gained interest for women at high risk for tubo-ovarian cancer, as there is compelling evidence that especially high-grade serous carcinoma originates in the fallopian tubes [77], but it is not recommended outside of a clinical trial setting [4].
There are no data on the benefit of bilateral prophylactic mastectomy for carriers of CHEK2, ATM, TP53, NBN, PTEN, STK11, BARD1, MSH6, and PMS2 PGVs, but this procedure can be considered on a case-by-case basis according to family history. Prophylactic bilateral mastectomy can be considered for carriers of PALB2 PGVs, while it can be discussed in cases of CDH1 PGVs. Risk-reducing salpingo-oophorectomy can be considered in women who have completed childbearing and who are carriers of PALB2 PGVs at an age > 45 years, of RAD51C and RAD51D PGVs at the age of 45-50 years, of NF1 PGVs at an age > 45 years, and of BRIP1 PGVs from 45 to 50 years of age [12]. CHEK2, NBN, PTEN, MSH6, and PMS2 PGVs are not associated with a risk of ovarian cancer; therefore, bilateral prophylactic salpingectomy is not indicated [12,[77][78][79][80].
Large studies in women with ovarian cancer have shown that there can be a slightly increased risk of ovarian cancer in carriers of ATM PGVs, but there is currently insufficient evidence to recommend risk-reducing salpingo-oophorectomy.
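To keep the gene-specific recommendations above in one place, they can be collected into a simple lookup (our illustrative summary of the ages quoted in this section; (min, max) in years, None meaning no stated upper bound):

RRSO_AGE_WINDOW = {
    "BRCA1":  (35, 40),    # recommended once childbearing is completed
    "BRCA2":  (40, 45),    # recommended once childbearing is completed
    "PALB2":  (45, None),  # can be considered
    "RAD51C": (45, 50),    # can be considered
    "RAD51D": (45, 50),    # can be considered
    "NF1":    (45, None),  # can be considered
    "BRIP1":  (45, 50),    # can be considered
}

def rrso_window(gene: str):
    # Returns None for genes without an RRSO indication (e.g., CHEK2,
    # NBN, PTEN, MSH6, PMS2) or with insufficient evidence (e.g., ATM).
    return RRSO_AGE_WINDOW.get(gene.upper())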
Conclusions
Up to 20% of BCs arise in carriers of HBOC-related PGVs. The early identification of healthy carriers allows clinicians to activate primary prevention measures that can significantly reduce the incidence of and/or mortality from BC. In addition to individuals at hereditary risk on a monogenic basis, there are individuals at increased risk on a multifactorial family basis. For the most common neoplasms, in fact, sharing both constitutional and exogenous risk factors with affected family members can determine significantly higher risks compared with the general population. Screening programs designed for the standard-risk population may be insufficient, in terms of starting age, frequency, and type of tests, for early diagnosis in carriers at increased risk; it therefore appears necessary to personalize preventive actions by identifying those at high risk and by setting up intensified surveillance and specific prevention programs that complement screening. In Italy, the National Prevention Plan 2020-2025 promotes the adoption of organized pathways for the prevention of BC (and ovarian cancer) associated with BRCA1/BRCA2 PGVs, with the activation of a Diagnostic-Therapeutic-Assistance Path called "High Hereditary-Familial Risk for people carrying BRCA pathogenetic variants" [81,82].
Annual mammography screening appointments could represent an ideal opportunity to administer screening tools that improve referrals to genetic counseling. Breast imaging centers could therefore serve as strategic locations for identifying women at increased risk of BC, based on family cancer history, who would benefit from genetic counseling and genetic testing [2].
Women should be enrolled in regular screening programs involving the administration of questionnaires according to, for example, the Tyrer-Cuzick model, to identify those at highest risk to be referred for onco-genetic counseling and possible genetic testing. This strategy could be feasible and effective, provided its application can be guaranteed to the entire population of women undergoing screening.
Figure 1. Interactions between the main proteins encoded by the HBOC-related genes.
Table 1. Lifetime cancer risk in BC for HBOC.
Table 2. Cancer risks associated with PGVs in HBOC genes. | 2024-03-24T15:18:50.248Z | 2024-03-22T00:00:00.000 | {
"year": 2024,
"sha1": "f32cd08f478f452eec963eee6d3a72b98e173010",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9059/12/4/714/pdf?version=1711100717",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6d06765b7945a2a0f454d0edcd8a52abec13a825",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
3522773 | pes2o/s2orc | v3-fos-license | Statistical Inference for Partial Differential Equations
Many physical phenomena are modeled by parametrized PDEs. Poor knowledge of the involved parameters is often one of the numerous sources of uncertainty in these models. Some of these parameters can be estimated with the use of real-world data. The aim of this mini-symposium is to introduce some of the various tools from both the statistical and numerical communities to deal with this issue. Parametric and non-parametric approaches are developed in this paper. Some of the estimation procedures require many evaluations of the initial model. Some interpolation tools and some greedy algorithms for model reduction are therefore also presented, in order to reduce the time needed to run the model.
Introduction
Many physical phenomena are modeled by parametrized PDEs. The involved parameters are often unknown and have to be estimated. This mini-symposium focuses on this challenging issue. The first two sections are based on statistical estimation tools. Section 1 is concerned with transport-fragmentation equations, and the aim is a non-parametric estimation of the division rate of a given cell. Section 2 deals with an industrial application in thermal regulation of an aircraft cabin, where one is interested in estimating parameters appearing in the boundary conditions of the Navier-Stokes equations. As parameter estimation often requires many evaluations of the underlying model, one is also interested in speeding up the computations. This is the aim of Sections 3 and 4. Section 3 couples a SAEM algorithm with an interpolation approach to speed up the estimation of parameters involved in KPP equations used to model the evolution of a tumor extracted from MRI images. Section 4 presents greedy algorithms dedicated to solving high-dimensional PDEs. Parametric (see Sections 2, 3, 4) and non-parametric (see Section 1) approaches are presented.
Context
We consider (simple) particle systems that serve as toy models for the evolution of cells or bacteria: each particle grows by ingesting a common nutrient, and after some time each particle gives rise to two offspring by cell division. We structure the model by state variables like size, growth rate and so on. Deterministically, the density of the structured state variables evolves according to a transport-fragmentation PDE. Stochastically, the particles evolve according to a PDMP (piecewise deterministic Markov process) along a branching tree. Growth-fragmentation type equations provide a natural framework for the study of size-structured populations. Let n(t, x) denote the density of cells of size x at time t. The parameter of interest is the division rate B(x). At division, a cell of size x gives birth to two cells of size x/2. The growth of the cell size by nutrient uptake is given by a growth rate g(x) = τx (for simplicity). The temporal evolution of n is governed by the transport-fragmentation equation

∂_t n(t, x) + ∂_x (τ x n(t, x)) + B(x) n(t, x) = 4 B(2x) n(t, 2x),

with n(t, x = 0) = 0 for t > 0 and n(0, x) = n^(0)(x) for x ≥ 0. It is obtained from a mass conservation law: the left-hand side accounts for the density evolution, the growth by nutrient uptake and the division of cells of size x, while the right-hand side accounts for the division of cells of size 2x.
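As a purely numerical illustration, the equation can be integrated with a simple explicit upwind scheme (our sketch; the division rate B(x) = x, the initial density and all discretization parameters are illustrative choices, not taken from the model calibration):

import numpy as np

tau = 1.0
L, J = 8.0, 400                    # size interval [0, L], number of grid points
x = np.linspace(0.0, L, J)
dx = x[1] - x[0]
dt = 0.5 * dx / (tau * L)          # CFL-type restriction for the transport term
B = lambda s: s                    # illustrative division rate

n = np.exp(-(x - 1.0) ** 2 / 0.1)  # illustrative initial density n(0, x)
n[0] = 0.0                         # boundary condition n(t, 0) = 0

def density_at(arr, s):
    # Linear interpolation of the density at sizes s (zero outside the grid).
    return np.interp(s, x, arr, left=0.0, right=0.0)

for _ in range(2000):
    flux = tau * x * n                        # transport flux tau * x * n
    dflux = np.zeros_like(n)
    dflux[1:] = (flux[1:] - flux[:-1]) / dx   # upwind difference (speed >= 0)
    n = n + dt * (-dflux - B(x) * n + 4.0 * B(2.0 * x) * density_at(n, 2.0 * x))
    n[0] = 0.0

print(n.sum() * dx)   # approximate total mass of the final density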
Objectives
Our main goal is to estimate B(x) non-parametrically from genealogical data of a cell population of size N living on a binary tree. We also want to avoid solving an inverse problem, as is the case for alternative approaches for estimating B(x) (see e.g. [11,12]), thanks to the richer data set provided by genealogical data (i.e. data observed along a genealogical tree). Finally, we wish to reconcile the deterministic approach with a rigorous statistical analysis (relaxing the implicit steady-state approximation of the deterministic approaches).
Our strategy to reach this goal is: 1) construct a stochastic model, accounting for the stochastic dependence structure on a tree, for which the (mean) empirical measure of N particles solves the fragmentation-transport equation (in a weak sense); 2) develop appropriate statistical tools to estimate B(x); 3) incorporate the additional difficulty of growth variability: each cell has a stochastic growth rate inherited from its parent.
Results
We construct a Markov process (X_t, V_t) on a binary tree, where X_t denotes the size and V_t the growth rate of the living cells at time t, the growth rate being inherited from the parent according to a kernel ρ.
Result 1: We prove in [10] that the function n(t, ·), defined through the mean empirical measure of the particle system, is a weak solution of an extension of the transport-fragmentation equation. The initial framework g(x) = τx is retrieved as soon as ρ(v', dv) = δ_τ(dv).
Result 2:
We assume that we are given genealogical data of the form (ξ_u, τ_u)_{u∈U_N}, where U_N is a (connected) subset of size N of the binary tree U = ∪_{k≥0} {0, 1}^k. This means that we observe the size ξ_u and the variability τ_u of the cell for each node u ∈ U_N: during its lifetime, at time t, the cell u has size ξ_u exp(τ_u t), and the variability τ_u thus denotes the growth rate attached to each cell, which may vary from one individual to another. This is a reasonable assumption that we have been able to implement in practice. We can construct an estimator (B̂_N(x), x > 0) of the s-regular division rate B(x) achieving (up to constants) the rate N^{-s/(2s+1)}.
[Figure: reconstruction of B. The reconstruction is fairly good in the region where the density ν_B (in solid black) is not too small, and it consistently deteriorates beyond x ≈ 3, where almost no data of size x ≥ 3 were observed. Note however the dramatic effect of ignoring variability (the Monte-Carlo estimators in red) beyond x ≈ 2.5.]
Construction of B̂: Let ν_B(y) denote the (asymptotic or invariant) density distribution of the size of a cell at division. The construction of B̂ is based on a key representation formula, labelled (1), proved in [10]. Introduce a kernel function K : [0, ∞) → R with ∫_{[0,∞)} K(y) dy = 1, and set K_h(y) = h^{-1} K(h^{-1} y) for y ∈ [0, ∞) and h > 0. We construct an estimator B̂_N based on an empirical regularisation of (1), specified by the kernel K, the bandwidth h and a positive threshold (which guarantees that B̂_N is well defined). This approach avoids the implementation of an inverse problem, where only the estimation rate N^{-s/(2s+3)} is achievable ([11], [12]). It also matches the deterministic and stochastic approaches rigorously.
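The kernel-smoothing ingredient can be illustrated in a few lines of Python (our sketch: a Gaussian K, as one admissible instance of the kernels above, applied to simulated sizes at division; the full estimator of B additionally uses the representation formula (1), which is not reproduced here):

import numpy as np

def K_h(y, h):
    # K_h(y) = K(y/h)/h with a Gaussian kernel K (one admissible choice).
    u = y / h
    return np.exp(-u ** 2 / 2.0) / (np.sqrt(2.0 * np.pi) * h)

def nu_hat(grid, sizes, h):
    # Kernel estimate of the division-size density nu_B on a grid.
    return K_h(grid[:, None] - sizes[None, :], h).mean(axis=1)

rng = np.random.default_rng(0)
sizes = rng.gamma(shape=4.0, scale=0.5, size=1000)   # simulated sizes at division
grid = np.linspace(0.1, 5.0, 50)
print(nu_hat(grid, sizes, h=0.2).max())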
Context
Thermal energy management onboard modern commercial aircraft has become an important challenge for aircraft manufacturers so as to propose competitive solutions to new market demands. Modern aircraft use more and more new, highly dissipative heat sources (electrical packs, power electronics, etc.), and thus the integration of such equipment needs a careful understanding of the thermal behaviour of this new environment. Therefore, new requirements have to be satisfied in order to improve passenger thermal comfort and to ensure the thermal control of equipment in the avionics bay.
This work aims at presenting the thermal regulation problem inside a commercial aircraft and some scientific challenges inherent to this study.
The thermal exchanges in the cabin and bay of an aircraft are modelled by the Navier-Stokes equations, which are implemented in a software package. The resolution of such equations very often involves parameters whose exact values are unknown. There are two kinds of unknown parameters: first, those subject to lack of knowledge or variability (e.g., turbulence rate, equipment temperature, etc.), and second, those that have to be estimated (e.g., thermal contact resistance, thermal conductivity, heat dissipation, etc.). The latter parameters require additional information, which in practice comes from datasets from former aircraft, real experiments, trials, etc.
Problem
To illustrate our purpose, let us consider a simplified thermal exchange model given by the Navier-Stokes equations with boundary conditions, where ρ = air density, u = air speed, k = air conductivity, T = temperature, µ = viscosity, and τ = turbulence rate.
In this application, we consider that the turbulence rate τ and the air speed boundary condition u_0(M) are uncertain (for simplicity, for x ∈ M write u_0(x) = u_0^*(x) + ε(x), where u_0^*(x) is the deterministic part of the boundary condition and ε(x) is a random variable which models the uncertainty). The heat dissipation coefficient φ is then to be estimated. The observable of interest we consider is the temperature around the equipment, obtained by post-processing the solution of the system above. Thus, a simulated temperature can be seen as a real-valued function (x, θ) → h(x, θ), where x = (τ, ε) with ε = (ε(x_1), ..., ε(x_M)), and θ = φ. As the variables x = (τ, ε) are uncertain, we can choose to model this uncertainty by a random vector X = (τ, ε), where τ and ε are seen as random variables drawn from given distributions. Besides this modelling, in order to characterize the parameter θ = φ, one needs a database of temperatures of the equipment, which can be obtained from sensors placed on it. Let us denote this database by Y_1, ..., Y_n.
Estimation method
The method we present is taken from N. Rachdi et al. [21]. The principle consists in estimating a parameter $\theta \in \Theta \subset \mathbb{R}^k$ which minimizes "a distance" between the empirical distribution of the $Y_i$'s (measurements) and the simulated distribution of the random variable $h(X, \theta)$, based on a sample $h(X_1, \theta), \ldots, h(X_m, \theta)$ obtained from numerical simulations, where $X_1, \ldots, X_m$ are $m$ simulations of the random variable $X \in (\mathcal{X}, P_X)$. Indeed, assume that the random simulation outputs $\{h(X, \theta),\ \theta \in \Theta\}$ induce a (Lebesgue) density family $\{f_\theta,\ \theta \in \Theta\}$, where $f_\theta$ is the density of $h(X, \theta)$. Hence, a maximum-likelihood based method would provide the estimator
$$\hat{\theta} = \operatorname*{argmax}_{\theta \in \Theta} \sum_{i=1}^{n} \log f_\theta(Y_i). \qquad (3)$$
But in our framework we do not know the density functions $f_\theta$ explicitly, as they result from complex simulations. Unlike classical maximum-likelihood methods, we do not posit a parametric "density model" for the measurements (Gaussian, Beta, etc.); the density model is provided by simulations of the random variable $h(X, \theta)$. In this case, $f_\theta$ in (3) is the density of $h(X, \theta)$, which does not necessarily have an analytical form. We then propose to replace $f_\theta$ by a kernel estimator (among others)
$$f_\theta^m(y) = \frac{1}{m} \sum_{j=1}^{m} K_b\big(y - h(X_j, \theta)\big),$$
where for instance $K_b(y) = \frac{1}{\sqrt{2\pi}\, b}\, e^{-y^2 / 2b^2}$. Then, replacing $f_\theta$ by $f_\theta^m$ in (3) provides a computable estimator (4). Under mild conditions, Theorem 3.1 in [21] proves consistency in a general case, when considering contrast functions other than the log, and Theorem 6.2 in [20] shows consistency in the special case of the log-contrast function. Such an estimation procedure requires many system runs providing the desired observable $h$, which can be expensive in CPU time. To avoid this difficulty, surrogate model techniques may be considered, which aim at replacing the costly model $h$ in (4) by a mathematical approximation $\tilde{h}$ that is very cheap to evaluate.
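As a concrete illustration, here is a minimal Python sketch of the computable estimator (4). The names h_model and x_samples, and the grid-search maximization in the comment, are our own stand-ins; in practice one would use a proper optimizer, and possibly a surrogate for h.

```python
import numpy as np

def kde_log_likelihood(theta, y_data, h_model, x_samples, b):
    """Log-likelihood of measurements Y_1..Y_n under the simulated density
    f_theta^m, a Gaussian-kernel estimate built from m runs h(X_1,theta)..h(X_m,theta)."""
    sims = np.array([h_model(x, theta) for x in x_samples])
    # K_b(y) = exp(-y^2 / (2 b^2)) / (sqrt(2 pi) b)
    kernels = np.exp(-(np.asarray(y_data)[:, None] - sims[None, :])**2 / (2 * b**2))
    f_m = kernels.mean(axis=1) / (np.sqrt(2 * np.pi) * b)   # f_theta^m(Y_i)
    return np.sum(np.log(np.maximum(f_m, 1e-300)))          # guard against log(0)

# The estimator (4) maximizes this over candidate values of theta, e.g.
# theta_hat = max(theta_grid, key=lambda t: kde_log_likelihood(t, Y, h, Xs, b))
```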
Context
A crucial step in the validation of a model is of course to compare it with real-world data. This is usually done using nonlinear regression techniques in the case of individual data, or statistical approaches in the case of population data (for instance the SAEM algorithm [14], [7], a maximum-likelihood estimation method). In both cases, this requires a large number of evaluations of the model, for a large number of different sets of parameters. The evaluation time for a single set of parameters may be very long, in particular when partial differential equations are involved (it may go up to a few minutes, a few hours, or even days). In this case, nonlinear regression algorithms or population approaches cannot be completed within a reasonable time. It is therefore crucial to find methods to accelerate them.
The application we have in mind is the study of the evolution of the volume of a tumour extracted from MRI images. In that case, the PDE model can be the classical KPP equation, and the tumour volume is the integral of the tumoral concentration.
SAEM coupled with model precomputation
Our strategy is to speed up the evaluation of the scalar time series derived from the solution of the full PDE. We couple the SAEM algorithm with evaluation of the model through interpolation on a precomputed mesh of the parameter domain.
The idea is the following: to compute a function quickly, we interpolate it from precomputed values on a grid. The main issue is to construct the grid in an efficient way:
- Interpolation should be easy on the mesh. Here we choose a mesh composed of cubes (a tree of cubes) to ensure construction simplicity and high interpolation speed.
- The mesh should be refined in areas where the function changes rapidly (the speed of variation may be measured in various ways, see below).
Let us describe the algorithm in dimension $N$. We consider $J$ fixed probabilities $0 < q_j < 1$ with $\sum_{j=1}^{J} q_j = 1$, and $J$ positive functions $\psi_j(x)$ (required precisions; as a simple example, take $\psi_j(x) = 1$ for every $x$). We start with a cube (or more precisely a hyper-rectangle) $C_{\mathrm{init}} = \prod_{i=1}^{N} [x_{\min,i}, x_{\max,i}]$ to prescribe the area of search. The algorithm is iterative. At step $n$, we have $1 + 2^N n$ cubes $C_i$ with $1 \le i \le 1 + 2^N n$, organized in a tree. To each cube we attach $J$ different weights $\omega_i^j$ (where $1 \le j \le J$; see below for examples of weights), and the $2^N$ values of the function on its $2^N$ summits.
- First we choose $j$ between 1 and $J$ with probability $q_j$.
- Then we choose, amongst the leaves of the tree, the smallest index $i$ such that $\omega_i^j / \sup_{x \in C_i} \psi_j(x)$ is maximum.
- We then split the cube $C_i$ into $2^N$ small cubes of equal size, which become $2^N$ new leaves of our tree, the original $C_i$ becoming a node. To each new cube we attach $J$ weights $\omega_i^j$.
We iterate the procedure as long as convenient, stopping the algorithm when a criterion is satisfied or after a fixed number of iterations. We then have a decomposition of the initial cube into a finite number of cubes, organized in a tree (each node having exactly $2^N$ children), with the values of $f$ on each summit. It is interesting to notice that this approach can easily be parallelized to ensure an optimal use of the processors.
If we want to evaluate $f$ at some point $x$, as during a SAEM computation, we first look for the leaf cube $C_i$ in which $x$ lies, and then approximate $f$ by the interpolation $f_{\mathrm{inter}}$ of the values on the summits of $C_i$. This procedure is very fast since, by construction, the cubes form a tree, each node having $2^N$ children. The identification of the cube in which $x$ lies is simply a walk down this tree: at each node we compare the coordinates of $x$ with the centre of the "node" cube, which immediately tells us in which "son" $x$ lies. The interpolation step (approximating $f(x)$ from the values of $f$ on the summits of the cube) is also classical and fast (linear in the dimension $N$); a sketch is given below.
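The following Python sketch illustrates the lookup-and-interpolate step. The class and function names are ours, and we use multilinear interpolation over all $2^N$ summits for simplicity (a cost of $O(2^N)$ per evaluation, whereas the authors report an interpolation linear in $N$, e.g. simplex-based); it is an illustration, not the authors' implementation.

```python
import numpy as np
from itertools import product

class Cube:
    """Node of the tree of hyper-rectangles. Leaves store the values of f
    at their 2^N summits, keyed by 0/1 corner tuples."""
    def __init__(self, lo, hi, corner_values):
        self.lo, self.hi = np.asarray(lo, float), np.asarray(hi, float)
        self.corner_values = corner_values   # {corner tuple -> f value}
        self.children = None                 # {corner tuple -> Cube} after a split

def find_leaf(node, x):
    """Walk down the tree: at each node, compare x with the cube centre."""
    while node.children is not None:
        centre = (node.lo + node.hi) / 2
        node = node.children[tuple(int(xi > ci) for xi, ci in zip(x, centre))]
    return node

def interpolate(node, x):
    """Multilinear interpolation of f(x) from the 2^N corner values of the leaf."""
    t = (np.asarray(x, float) - node.lo) / (node.hi - node.lo)  # local coords in [0,1]^N
    value = 0.0
    for corner in product((0, 1), repeat=len(t)):
        weight = np.prod([ti if c else 1.0 - ti for ti, c in zip(t, corner)])
        value += weight * node.corner_values[corner]
    return value

def evaluate(root, x):
    return interpolate(find_leaf(root, x), x)
```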
Application: parametrization of a KPP model
We want to illustrate the previous methodology in the context of the estimation of the parameters of the so-called KPP equation,
$$\partial_t u = \nabla \cdot (D \nabla u) + R\, u (1 - u),$$
where $u$ is the unknown concentration (assumed to be initially a compactly supported function, for instance), $D$ the diffusion coefficient and $R$ the reaction rate. These equations are posed in a domain $\Delta$ with Neumann boundary conditions. Note that the geometry of the domain $\Delta$ can be rather complex (e.g., when $u$ is the density of tumor cells in the brain). Initially the support of $u$ is very small and located at some point $x_0 \in \Delta$, which we may assume holds at some time $T_0$ in the past. We generate a virtual population of solutions of the KPP equation, assuming Gaussian distributions on its parameters, and adding noise. We then try to recover the distributions of the parameters by a SAEM approach (using the Monolix software [22]). For this we first precompute solutions of the KPP equation on a regular or non-regular mesh (see Figure 2), and then run the SAEM algorithm using interpolations of the precomputed values of the KPP equation (instead of the genuine KPP).
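To make the precomputation concrete, here is a minimal one-dimensional Python stand-in for a single KPP solve; the discretization, domain, initial bump, and all names are our assumptions (the actual study solves the equation on complex geometries). Tabulating kpp_volume_series over a grid of parameter values, such as $(w, x_0)$ with $w = D/R$ as in Figure 2, yields the precomputed mesh used during SAEM.

```python
import numpy as np

def kpp_volume_series(D, R, L=10.0, x0=5.0, n_x=200, dt=1e-3, n_t=5000):
    """Hypothetical 1D stand-in: solve u_t = D u_xx + R u (1 - u) with
    homogeneous Neumann boundary conditions by explicit finite differences,
    and return the 'tumour volume' (integral of u) at each time step.
    Stability of the explicit scheme requires dt <= dx**2 / (2 * D)."""
    dx = L / (n_x - 1)
    x = np.linspace(0.0, L, n_x)
    u = np.exp(-((x - x0) ** 2) / 0.01)       # small, essentially compactly supported bump
    volumes = np.empty(n_t)
    for k in range(n_t):
        lap = np.empty_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        lap[0], lap[-1] = lap[1], lap[-2]     # mirror neighbours: Neumann boundaries
        u = u + dt * (D * lap + R * u * (1.0 - u))
        volumes[k] = np.trapz(u, x)           # volume = integral of the concentration
    return volumes
```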
As illustrated in Table 1, the results are of good quality. Adding some noise deteriorates the accuracy, but the results remain reasonable for practical applications. The main interest of this methodology is to tackle the problem of parameter identification in complex PDE systems. The computational cost of the whole algorithm, that is, the generation of the mesh and the SAEM computation, can be divided into two distinct parts: an offline time corresponding to the computation of the mesh, which can be done once and for all, and an online time corresponding to the estimation of the parameters for a given population.
In the previous example, the gain is of order 725 for the homogeneous grid and 1200 for the heterogeneous grid, compared to a full computation in which the PDE is solved at every evaluation during the SAEM algorithm.
Conclusions
In this work we present a new method combining the SAEM algorithm with a precomputation step. This method can help reduce the overall computation time when the model is expensive to compute, for instance when it is based on partial differential equations.
In future work we intend to study in detail a method which performs the precomputation of the parameter space and the SAEM algorithm simultaneously.
Greedy algorithms and model reduction
In this section, we briefly present a general method to approximate high-dimensional functions. This technique can be used in particular to solve high-dimensional partial differential equations. This section is related to a series of recent works [4-6, 16]. We also refer to the contribution of Virginie Ehrlacher to this volume for an application of this technique to eigenvalue problems.
An introductory example
To fix ideas, let us consider the parametric problem: find $u(\theta, x)$ a real-valued function solution, for all $\theta \in \mathcal{T}$, to
$$-\mathrm{div}_x\big(a(\theta, x)\, \nabla_x u(\theta, x)\big) = f(\theta, x) \ \text{ in } \mathcal{X}, \qquad u(\theta, \cdot) = 0 \ \text{ on } \partial\mathcal{X}. \qquad (7)$$
Here, $x$ varies in a subdomain $\mathcal{X}$ of $\mathbb{R}^d$, and $\theta$ is a parameter which lives in $\mathcal{T}$, a subset of $\mathbb{R}^p$. We assume in the following that $(\theta, x) \mapsto a(\theta, x)$ and $(\theta, x) \mapsto f(\theta, x)$ are two real-valued functions such that for almost all values of the parameter $\theta \in \mathcal{T}$, the problem (7) is well posed. For example, $f$ is in $L^2(\mathcal{T} \times \mathcal{X})$, $a$ is in $L^\infty(\mathcal{T} \times \mathcal{X})$ and is bounded from below by a positive constant, so that there exists a unique solution $u \in L^2(\mathcal{T}, H^2(\mathcal{X}) \cap H^1_0(\mathcal{X}))$. This is the setting we consider in the following. The spaces $H^1_0(\mathcal{X})$ and $H^2(\mathcal{X})$ are the classical Sobolev spaces, $H^1_0(\mathcal{X}) = \{v \in L^2(\mathcal{X}) : \nabla v \in L^2(\mathcal{X}),\ v|_{\partial\mathcal{X}} = 0\}$ and $H^2(\mathcal{X}) = \{v \in L^2(\mathcal{X}) : \nabla v,\ \nabla^2 v \in L^2(\mathcal{X})\}$. A parameter estimation problem typically reads as follows: given some observations of the function $u(\theta, x)$, how can we estimate the value of $\theta \in \mathcal{T}$? Many approaches have been proposed to solve this inverse problem, and it is not the aim of this section to discuss them. We would rather like to explain a method to approximate the high-dimensional function $(\theta, x) \mapsto u(\theta, x)$. This approximation can then be used to solve the inverse problem, for example to provide a first guess for a deterministic optimization approach, or to build a variance reduction technique in a Bayesian approach. Such an approximation is sometimes called a response surface, or a reduced-order model.
The difficulty is of course that the function $u$ depends on the variable $(\theta, x)$, which has dimension $d + p$. Standard approximation techniques, based for example on tensorization of one-dimensional grids, lead to a huge number of degrees of freedom, since the complexity is exponential in the dimension. This is the so-called curse of dimensionality. Various approaches have been proposed to tackle this difficulty, such as sparse grid techniques [3,24] or reduced basis methods [17,19]. We here focus on an algorithm introduced by Ladevèze [15], Ammar [1] and Nouy [18] that we call below the greedy algorithm. This algorithm is also sometimes called Proper Generalized Decomposition.
The greedy algorithm
Let us now present the greedy algorithm we are interested in. The bottom line is to approximate the function $u(\theta, x)$ as a sum of tensor products,
$$u(\theta, x) \approx \sum_{k \ge 1} r_k(\theta)\, s_k(x),$$
and to compute each of the terms in this sum iteratively, as the best next tensor-product approximation. Depending on the problem under consideration, this best approximation is defined in various ways. This algorithm is greedy in the sense that the terms are computed iteratively, and once one of them is computed, it is not modified in the following iterations. For simplicity, we consider the tensor product of only two functions ($r_k(\theta)$ and $s_k(x)$), but the algorithm equally applies to a tensor product of more than two functions. For example, if the parametric space is high-dimensional (namely if $p$ is large), one could think of using a decomposition of the form $u(\theta, x) = \sum_{k \ge 1} r_k^1(\theta_1) \cdots r_k^p(\theta_p)\, s_k(x)$.
In the specific example (7) above, the algorithm writes as follows: iterate on $K \ge 0$,
$$(r_{K+1}, s_{K+1}) \in \operatorname*{argmin}_{r \in L^2(\mathcal{T}),\, s \in H^1_0(\mathcal{X})} \mathcal{E}\Big(\sum_{k=1}^{K} r_k s_k + r s\Big), \qquad (8)$$
where
$$\mathcal{E}(v) = \int_{\mathcal{T}} \int_{\mathcal{X}} \Big(\tfrac{1}{2}\, a(\theta, x)\, |\nabla_x v(\theta, x)|^2 - f(\theta, x)\, v(\theta, x)\Big)\, dx\, d\theta$$
is the energy functional associated to (7), defined for $v \in L^2(\mathcal{T}, H^1_0(\mathcal{X}))$. The energy functional $\mathcal{E}$ has a unique minimum, which is characterized by the Euler equations (7). The idea underlying the algorithm (8) is that at each iteration, the best tensor product minimizing $\mathcal{E}$ is chosen.
In practice, to solve (8), one actually considers the Euler-Lagrange equations associated to (8), which are: iterate on $K \ge 0$, find $r_{K+1} \in L^2(\mathcal{T})$ and $s_{K+1} \in H^1_0(\mathcal{X})$ such that the first variation of $\mathcal{E}$ vanishes for all $\delta r \in L^2(\mathcal{T})$ and $\delta s \in H^1_0(\mathcal{X})$ (9), where, for ease of notation, we introduced the partial sum $u_K = \sum_{k=1}^{K} r_k s_k$ (10). The problem (9) is the weak form of a coupled system (11), in which the first equation is an elliptic problem on $s_{K+1}$ (for a fixed function $r_{K+1}$) with homogeneous Dirichlet boundary conditions, and the second equation gives $r_{K+1}$ (for a fixed function $s_{K+1}$). Two remarks are in order. First, it is obvious from the formulation (11) that the problem defining the couple $(r_{K+1}, s_{K+1})$ is nonlinear: starting from the linear problem (7), we end up with the nonlinear problem (11). This is because the space of tensor products is not a linear space. Second, if we assume that the data $a$ and $f$ admit separated representations of the form $a(\theta, x) = \sum_{k \ge 1} r_k^a(\theta)\, s_k^a(x)$ and $f(\theta, x) = \sum_{k \ge 1} r_k^f(\theta)\, s_k^f(x)$, then all the integrals involved in (11) are either integrals over $\mathcal{T}$ or over $\mathcal{X}$, using Fubini's theorem: there is no integral over the product space $\mathcal{T} \times \mathcal{X}$. In practice, (11) is typically solved by a fixed-point algorithm.
Notice that, compared to the original problem, which has complexity $N^{d+p}$ (if $N$ denotes the number of degrees of freedom per dimension), the new formulation is a sequence of problems with a much smaller complexity, namely $N^d + N^p$. This comes at a price: the nonlinearity of (11).
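Since the greedy-plus-fixed-point structure is easiest to see in finite dimensions, here is a small Python sketch on a discrete analogue: greedily approximating a matrix $U$ (think of $u$ sampled on a $\theta$-grid times an $x$-grid) as a sum of rank-one terms, each obtained by alternating updates and then frozen. For this quadratic least-squares analogue the greedy algorithm amounts to extracting successive singular directions; the names and the least-squares setting are ours, not the PDE formulation (11).

```python
import numpy as np

def greedy_rank_one(U, n_terms=5, n_fixed_point=50):
    """Greedily approximate U as sum_k outer(r_k, s_k); each rank-one term is
    computed by an alternating (fixed-point) iteration and never revisited."""
    rng = np.random.default_rng(0)
    residual = U.astype(float).copy()
    terms = []
    for _ in range(n_terms):
        s = rng.standard_normal(U.shape[1])     # initial guess for the x-factor
        for _ in range(n_fixed_point):
            r = residual @ s / (s @ s)          # optimal r for fixed s (least squares)
            s = residual.T @ r / (r @ r)        # optimal s for fixed r
        terms.append((r, s))
        residual -= np.outer(r, s)              # greedy step: freeze the computed term
    return terms
```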
Convergence
The algorithm (8) is at the interface between two approximation techniques: (i) the techniques developed in order to get the best rank n approximations of tensors, using appropriate formats and associated approximation algorithms (which are not necessarily greedy algorithms), see [9,13]; (ii) the greedy algorithms developed in the field of nonlinear approximation, to approximate a function as a sum of elements of a dictionary (which is not necessarily the set of tensor products), see [23].In the algorithm (8), we both use tensor products to approximate the solution (as in item (i) above) and a greedy technique to compute the terms of the sum (as in item (ii) above).
The convergence of the algorithm can actually be deduced from general results on the convergence of greedy algorithms [8,16].
Theorem 1. Consider the algorithm (8) and the function $u_K$ defined by (10). The following convergence result holds: $\lim_{K \to \infty} \|u_K - u\|_{L^2(\mathcal{T}, H^1_0(\mathcal{X}))} = 0$. Moreover, if $u \in \mathcal{L}^1$, where the Banach space $\mathcal{L}^1 \subset L^2(\mathcal{T}, H^1_0(\mathcal{X}))$ is defined as the set of functions with finite projective norm, then there exists $C > 0$ (which depends on $u$) such that, for all positive $K$, $\|u_K - u\|_{L^2(\mathcal{T}, H^1_0(\mathcal{X}))} \le C K^{-1/6}$. The convergence rate can be improved to $-1/2$ using an orthogonal version of the algorithm. The theorem also holds for tensor products of more than two functions.
This result essentially tells us that the algorithm is safe: it is converging.On the other hand, the convergence rate is rather slow.In practice, one typically observes that the convergence is exponential for small values of K, and then slows down.
Extensions and open questions
The prototypical example (7) enjoys two specific properties: it is linear and symmetric. In [4], we were able to generalize the convergence result to nonlinear problems which are still defined as the minimum of some functional; in this case, however, we have no convergence rates. In [6], we investigated various techniques to generalize the approach to linear but non-symmetric problems, which are thus not simply associated to an energy minimization problem: there is up to now no satisfactory technique to treat non-symmetric problems. We also have ongoing work on parametric eigenvalue problems.
Generally speaking, the main question, which seems difficult to attack, is the following: if the solution to the original problem admits a separated representation (a sum of tensor-product functions) with a small number of terms, will greedy algorithms be able to approximate this function efficiently? There have been very encouraging results in that direction for some greedy algorithms recently [2], but it is unclear whether they can be extended to our setting.
Figure 1. Simulated data, with $B(x) = x^2$ and a growth rate distribution uniform on $[0.5, 1.5]$ (plain tree): for $N = 2047$ we see the quality of $\hat{B}_N$ over 50 Monte-Carlo simulations. The true $B$ is in solid blue, and the Monte-Carlo reconstructions are in green. The reconstruction is fairly good in the region where the density $\nu_B$ (in solid black) is not too small, and it consistently deteriorates beyond $x \approx 3$, where almost no data of size $x \ge 3$ were observed. Note, however, the dramatic effect of ignoring variability (the Monte-Carlo estimators in red) beyond $x \approx 2.5$.
Figure 2. An example of an inhomogeneous mesh of the parameter space (with 500 points). The two parameters are $w = D/R$ and $x_0$.
Table 1. Results (from Monolix) and errors for the mean parameters of the population. Column E1 refers to a population without noise (see text). Column E2 (resp. E3) refers to a population with a 5% (resp. 10%) noise. Test with homogeneous grid.
Preventing slips, overruns, and cancellations: Application of system accident investigations and theory to the understanding and prevention of engineering project failures
Organizations that develop or operate complex engineering systems are plagued by systems engineering failures like schedule overruns, budget exceedances, and project cancellations. Unfortunately, there is not much actionable guidance on why these failures happen or how to prevent them. Our approach contains two novel aspects. First, we argue that system accidents and other failures in systems engineering are manifestations of similar underlying problems. Therefore, we can leverage the literature on accident causation and the many publicly available accident investigation reports to better understand how and why failures in systems engineering occur, and to identify ways of preventing them. Second, to address the lack of concrete guidance on identifying and preventing incipient failures, we provide specific examples of each type of failure cause and of the recommendations for preventing these causes. We analyzed a set of 30 accidents and 33 project failures, spanning a range of industries, and found 23 different failure causes, most of which appear in both accidents and other project failures, suggesting that accidents and project failures do happen in similar ways. We also identified 16 different recommended remedial actions. We link these causes and recommendations in a cause-recommendation network, and associate over 900 specific examples of how these causes manifested in failures, and over 600 specific examples of the associated recommended remedial actions, with each cause or recommendation.
Introduction
Few engineering projects are completed on time, within the proposed budget, and with the negotiated features and functions. In 2008, only eleven of 72 major United States defense programs were on schedule, on budget, and met performance criteria [1]. Since then, U.S. aerospace and defense programs have only worsened: total cost overruns "have risen from 28 percent to 48 percent, from 2007 through 2015" [2]. The U.S. Government Accountability Office (GAO) has documented similar overruns across major acquisition programs. We use lessons learned from accidents to help identify specific, pointed preventive measures for project failures. We use our findings to develop a cause-recommendation network that shows how causes tend to cluster, which recommendations are appropriate for which causes, and how the causes and recommendations are manifested in a range of industries. We begin with a brief review of the state-of-the-art research in accident causation and how it can be applied to project failures. Next, we describe our case selection dataset and how we extracted and analyzed findings and recommendations from failure reports. We then build networks of causes in accidents and project failures, as well as a network of causes linked to recommendations, and illustrate potential applications for this cause-recommendation network. We conclude the paper with ideas for future work.
A note on definitions: Both system accidents and project failures are "undesired and unplanned (but not necessarily unexpected) event[s] that result in (at least) a specified level of loss" [13]. We use "system accident" (which we shorten to "accident" for ease of reading) to refer to those events that directly result in loss of life, injury, or damage to property [14]. System accidents are a generalization of "process accidents" in the chemical process industry (cf. [15]). We do not consider here occupational safety accidents such as falls from ladders or mishandling of lathes. We use "project failures" for all other undesired project events, such as failure to achieve mission objectives, budget or schedule overruns, cancellations, and quality or performance issues [14]. In both cases, this paper focuses on systems that are technologically and organizationally complex, and usually expensive, both in terms of direct and indirect losses.
What have we learned from accident research?
A range of accident modeling techniques is available to help explain how accidents are caused. Accident investigation reports, and subsequent meta-analyses of these reports, have revealed that accidents across industries have similar causes despite occurring in different scenarios. This section provides a brief review of the literature; for a more extensive discussion see Saleh et al. [16].
Theories and models on accident causation have become increasingly sophisticated, beginning with considering accidents as simple chains of human errors and physical failures. Our current understanding is that accidents result from a complex web of interactions, many of which are, or at least appear to be, locally and temporally rational. Man-made disasters theory is an early and influential articulation of this perspective [17]. It posits that accidents are not the result of chance events, but rather occur as a result of a build-up of errors and hazards over time. Man-made disasters theory helps explain why accidents occur even at organizations that have safety programs in place and claim to value safety. When members of the organization collectively follow the safety rules and procedures less well and less frequently or commit other mundane day-to-day errors, accidents may arise.
Human factors (ergonomics) and organizational factors studies have provided understanding of why people make errors. For example, people routinely violate procedures-because doing so often allows them to perform tasks more quickly and efficiently, sometimes at the cost of safety. James Reason's work, of which the Swiss cheese model is one of the best-known aspects, is an influential successor to Turner's work [18]. The Swiss cheese model views safety as being maintained by layers of defense, which develop and close holes over time as for example procedure compliance decreases and increases. When there are sufficient holes, or when holes remain in place for long enough, accidents can shoot through the layers of defense. Reason also posited that accidents can be traced back to problems on four levels: specific acts, preconditions, supervision, and organizational influences. Each higher level drives the problems below it. Based on these layers, Shappell and Wiegmann [19] developed a taxonomy of accident causes and codified them in the Human Factors Analysis and Classification System (HFACS). The view that system safety is a control problem that requires a systems perspective has emerged as the current leading theory. The control-theoretic perspective on system safety grew out of general systems theory and sees accidents as resulting from the absence or breach of defenses, be they technical or organizational, or from the violation of technical or organizational safety constraints [20] [21] [22] [16]. Absences and breaches of defenses and safety constraint violations can occur at any level of an organization.
Progress in accident theory and modelling is both informed by and drives the growing recognition that accidents, though often differing in their details, share root causes, whether expressed as lurking pathogens in Swiss Cheese, layers or types of errors in HFACS, or control flaws in Rasmussen or Leveson's work [e.g., [23] [21] [10]]. For example, the technicians working on the NOAA N-Prime Satellite committed a skill-based memory lapse error when they failed to notice that bolts holding the spacecraft to a working surface were missing, despite wiping the surface and not detecting interference from the bolts, resulting in the spacecraft toppling when they attempted to move the working surface [24]. After a Boeing 747 operated by China Airlines experienced a tailstrike incident, personnel committed a rule-based mistake when they did not follow maintenance procedures requiring them to remove the entire potentially damaged portion of the tail. The material eventually fatigued to the point of failure on flight 611 [25]. Both of these failures had problems with their organizational climates and communication: the NOAA N-Prime crew had an atypical mix of authority on the morning of the incident, which was not conducive to open discussion and shared responsibility, and the Boeing repair procedures and customer communications channels did not instruct the China Airlines crew on how to perform tailstrike repair correctly.
Here, then, we posit that, just as accidents share many causes, project failures share causes with accidents in particular, and also with other project failures. We explore this idea in the next section.
Method
This section describes the dataset, the resulting set of accident and project failures causes, and the linked set of recommendations for preventing these failures. Our resulting data is hosted on the Purdue University Research Repository [26].
Dataset description
There are few detailed publicly-available reports on project failures. We identified 33 cases with systems engineering-related causes, with sufficient detail, that span a range of industries, and that occurred relatively recently (from 1979 to 2015). We also selected novel projects that involved state-of-the-art, advanced technology (e.g. Mars Polar Lander), as well as ongoing projects that make improvements to existing designs (e.g. Boeing 787 Dreamliner). In contrast, no industry is free of publicly available accident investigation reports, with the United States National Transportation Safety Board and the Chemical Safety Board being two examples of readily available accident report sources. We selected 30 accidents spanning a wide range of industries. For more information on the types of sources we used and the implications of those sources, see [27]. Table 1 shows our cases.
Cause extraction
Our approach consists of five steps: (1) identifying findings in reports, (2) seeding our coding process with summary statements for findings from a subset of our cases, (3) applying the findings to a modified STAMP model to identify where in the design process they fall, (4) iteratively developing a coding scheme for the findings, and (5) coding the remaining findings to remove extraneous detail. We illustrate this process with the Deepwater Horizon oil spill and the F-35 Lightning II schedule and budget exceedances.
We began by extracting findings on the Deepwater Horizon oil spill from the two available accident reports [28] [29]. Table 2 shows a subset of the 25 findings for the Deepwater Horizon oil spill. We extracted the findings of the F-35 Lightning II budget and schedule exceedances from four newspaper articles and a U.S. Department of Defense report [30] [31] [32] [33]. Table 3 shows a subset of the 25 finding extracts for the F-35 Lightning II. Second, we modified the STAMP model [22] to help us systematically identify where and when in the design process the finding occurred and used it as a framework for classifying the findings by organizational level. Fig 1 shows the model for the Deepwater Horizon case with the finding summaries 1 through 5 from Table 2 placed at appropriate locations on the model. For more information on how we modified and used the STAMP model, see [34].
Next, we reworded each finding to retain the defining information but discard extraneous details. For example, we reworded the first finding in Table 2 to discard "crew" and "site leader" and replaced them with the more general "personnel". The test the crew did indicated a potential problem, so they retested in a different way, rather than figuring out why they got unfavorable results in the first place. We summarized this finding as "insufficiently addressed questionable test results". Many reports refer to the same instance of a particular problem more than once, for example in a body chapter and also in the conclusion. Cases with more than one report (e.g., Deepwater Horizon) also resulted in more than one extract referring to the same instance of a particular problem, as indicated for example in rows 5 and 6 of Table 2. Reports may also refer to different instances of the same problem, as indicated for example in rows 3a and 3b of Table 2. In Table 2, rows 5 and 6 discuss two different regulator shortcomings. We therefore counted these excerpts as two findings. In contrast, rows 3a and 3b both refer to the same instance of the same problem; accordingly, we counted these excerpts as one finding.
The reports vary in how they specify the parties involved in a particular finding. Some reports contain extensive details, including names and roles (e.g., the Walkerton water contamination accident names particular people [35]). Some reports specify only the roles (e.g., the NTSB discusses the causes in terms of "pilot" or "co-pilot"). Johnson [36] describes the ambiguity that many accident reports contain because they use inconsistent natural language. When reports did not specify names or roles, we inferred the roles. For example, consider the third finding in Table 2, in which the oil rig crew was distracted by a VIP tour while conducting an important test in a small control room. We inferred from the report that the persons responsible for bringing the VIPs were in an operations management role. Some investigation bodies record accidents using a coding system, such as the NTSB's method for investigating aviation accidents [37]. This type of system allows the investigators to have a baseline from which to analyze multiple accidents at once. The NTSB coding system facilitates analysis of overall trends in accident causation.

Table 2 (continued). Report extracts and finding summaries for the Deepwater Horizon oil spill.
5. Finding summary: Regulatory body provided poor regulatory supervision of rig operations.
6. "The regulator does not require industry to identify and manage all safety critical elements and tasks through defined performance standards, nor does it require assurance and verification activities to ensure a safety critical element is appropriate, available, and effective throughout its life cycle." [29, p. 16] Finding summary: Regulatory body provided poor regulatory supervision of rig operations.

Table 3 (excerpt). Report extracts and finding summaries for the F-35 Lightning II.
1. "Our assessment documented several findings citing inadequacies in Lockheed Martin's oversight of its suppliers and management of subcontractor deliverables." [32] Finding summary: Development management poorly supervised suppliers.
2a. "Technological innovation, including heavy reliance on computer simulation, which could take the place of real-world testing, would keep costs down. [...] Building an airplane while it is still being designed and tested is referred to as concurrency. In effect, concurrency creates an expensive and frustrating non-decision loop: build a plane, fly a plane, find a flaw, design a fix, retrofit the plane, rinse, repeat." [31] Finding summary: Computer simulations were inadequate tests to identify design problems.
2b. "Pentagon officials accepted Lockheed's claim that computer simulations would be able to identify design problems, minimizing the need to make changes once the plane actually took to the sky. [...] [But] early tests uncovered flaws unnoticed by the computer simulations." [30] Finding summary: Computer simulations were inadequate tests to identify design problems.
3a. "The Air Force, Marines and Navy all sought additional modifications to meet their needs, reducing commonality among the three models. A bigger problem was the fundamental concept of building one plane, with stealth technology, that could fly as far and fast as the Air Force wanted while also being able to land on the Navy's carriers and take off vertically from Marine amphibious assault ships." [30] Finding summary: Development management tried to please too many customers in one limited design.

Here, we coded each statement into an "actor-causal action-object" structure, where the actor is the person (or group of people), the causal action is what they did, and the object provides detail about what the causal action was applied to. This coding structure allows us to compare failures with a baseline, like the NTSB's scheme. The "object" acts like a modifier to a "causal action" and makes it specific to a failure type. For more examples of how this coding scheme can be applied to different causes, refer to [38]. Fig 2 shows an example of two similar findings, from the Deepwater Horizon accident and the F-35 project failure. In both cases, testing was inadequate in some way, so we created a "subjected equipment to inadequate testing" causal action. In the Deepwater Horizon case, it was the personnel conducting the test who did not adequately investigate the questionable test results. Had they done so, they would likely have realized that they needed to redo the test. In contrast, on the F-35, development managers requested a form of testing (computer simulation) that was insufficient. Thus, we assigned responsibility to the development managers, rather than to the engineers conducting the simulations. The objects for each statement, "safety testing" and "development testing", identify the specific type of testing.
When a particular finding involved more than one actor, causal action, or object, we assigned additional unique actor-causal action-object codes to the finding to illustrate all facets of the finding. Fig 3 shows an example of a finding from the Westray Mine collapse to which we assigned two coded statements.
We identified a total of 966 findings, which we represent using a set of 23 causal actions, 9 actors, and 119 objects. Each causal action is associated with at least one object; for instance, "subjected to inadequate testing" has objects describing five types of testing: acceptance, development, quality, reliability, and safety testing. Other causal actions have more abstract objects. For example, "used inadequate justification", has objects like "acquisition" and "hiring".
We focus on the causal actions, as listed in Table 4. In the failures we studied, we found that different actors made similar mistakes (e.g., people at all levels of an organization keep poor records). People also made similar mistakes on different "objects" (e.g., poor records of different processes). Focusing on the "causal action" helps in identifying what went wrong rather than who to blame. In the remainder of this paper, we will simply refer to "causal actions" as "causes".
The accidents and other project failures in our data set share many causes. Which causes are most often reported in accidents and project failures? Are some causes reported more in accidents than in project failures, and vice versa? To answer these questions, we define a presence measure that answers the question: "How often does a particular cause appear across the failure samples?" The presence measure for cause $i$ is given by
$$\mathrm{presence}_i = \frac{1}{N} \sum_{k=1}^{N} \mathbf{1}\{\text{cause } i \text{ appears in failure}_k\} \times 100\%, \qquad (1)$$
where $\text{failure}_k$ is the $k$-th accident or project failure and $N$ is the number of accidents or project failures. For example, "failed to train" occurred at least once in 19 of the 30 accidents, so its accident presence is 63%. This cause occurred at least once in 4 of the 33 project failures, thus its presence in project failures is 12%. The presence measure is binary within failures, i.e., it does not assign greater weights to causes that appear multiple times within a particular failure. Thus, any double counting of causes within a failure (e.g., a cause that appeared in two different report sections) does not affect the presence. Table 4 shows the presences and definitions of the 23 causes, ordered from most to least similar frequencies.
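In code, the presence measure (1) reduces to a one-line count. The sketch below assumes each failure is represented as a set of cause labels, so repeats within one failure count only once; the variable names are illustrative.

```python
def presence(cause, failures):
    """Percentage of failures in which `cause` appears at least once.
    `failures` is a list of sets of cause labels, one set per failure."""
    failures = list(failures)
    return 100.0 * sum(cause in f for f in failures) / len(failures)

# e.g. presence("failed to train", accident_cause_sets) -> 63.3 when the cause
# appears in 19 of 30 accidents (accident_cause_sets is a hypothetical variable)
```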
Previously in this section we showed examples of causes that appear similar between project failures and accidents. Table 4 shows that many causes have similar presence between project failures and accidents, but others have quite different presences. In [38] we discussed in detail where and why causes are similar and different between accidents and project failures, and here we provide a brief summary of that discussion. The higher presence of some causes is likely an artifact of accident investigations generally being more detailed and thorough than project failure investigations. For example, we found far fewer instances of inadequate procedures in project failures than accidents. For procedures specifically, the systems that experienced accidents likely had procedures that were more clearly defined than those for project failures because procedures are explicitly required for system operation (not necessarily so for project development). In the Alaska Airlines flight 261 crash, when the horizontal stabilizer did not respond properly, the pilot attempted different control configurations until the faulty jackscrew completely gave way and the aircraft nose-dived into the ocean. The NTSB criticized the emergency procedures, stating: "Without clearer guidance to flight crews regarding which actions are appropriate and which are inappropriate in the event of an inoperative or malfunctioning flight control system, pilots may experiment with improvised troubleshooting measures that could inadvertently worsen the condition of a controllable airplane" [40, p. 140].
Some differences in cause presence may indicate actual differences between the types of failures. Notably, many of the project failures we studied occurred before the systems had matured through their design cycles and therefore had no opportunity to perform maintenance. Thus accidents had more instances of the cause conducted maintenance poorly. For instance, in the Three Mile Island nuclear accident, "[r]eview of equipment history for the 6 months prior to the accident showed that a number of equipment items that figured in the accident had had a poor maintenance history without adequate corrective action" [41, p. 47]. The single instance of this cause in project failures is in the Hubble spacecraft mirror flaw, in which the equipment used to manufacture the mirror (and responsible for the flaw) had been poorly maintained [42].
Since our sample is relatively small and was not selected randomly, we cannot definitively (and with statistical certainty) conclude that the exact presence differences (for example, for failed to train) generalize beyond our sample. We nevertheless suggest that an organization looking for potential problems start with the causes that have the highest frequencies in Table 4 for that failure type (e.g. look for weaknesses in supervision prior to looking for weaknesses in maintenance).

Table 4. Cause definitions and presences [14]. Each entry gives the cause, its definition, and its presence in project failures and in accidents.

Failed to supervise: Actor(s) in the organization failed to supervise people or a process properly. (Project failures 76%; accidents 77%)
Failed to provide resources: Actor(s) in the organization failed to provide adequate resources to a department; for instance, maintenance, marketing, or safety. (Project failures 45%; accidents 47%)
Failed to consider design aspect: Actor(s) in the organization failed to consider an aspect in the system design. In many cases, this causal action describes a design flaw, such as a single-point failure or component compatibility. (Project failures 85%; accidents 83%)
Lost tacit knowledge when employee departed: Personnel quit, were moved to a different project, or retired, and the organization failed to sustain the knowledge base without these persons. (Project failures 9%; accidents 7%)
Lacked experience: Actor(s)' lack of experience or knowledge led to the failure. For example, an inexperienced manager who was placed in charge of a large project. (Project failures 42%; accidents 40%)
Used inadequate justification: Actor(s) in the organization used inadequate justification for a decision. (Project failures 42%; accidents 47%)
Subjected to inadequate reviews: Actor(s) in the organization did not review documentation or other work sufficiently to capture errors and deficiencies. (Project failures 12%; accidents 17%)
Kept poor records: Actor(s) in the organization kept poor records of a process, such as maintenance. (Project failures 21%; accidents 27%)
Failed to form a contingency plan: Actor(s) in the organization failed to form a contingency plan to implement if an unplanned event occurred. (Project failures 27%; accidents 20%)
Inadequately communicated: Actor(s) in the organization failed to communicate with each other such that personnel were confused by the information they were given, had to "fill in the gaps" in the information they were given, or were not notified about important information at all. (Project failures 33%; accidents 23%)
Conducted poor requirements engineering: Actor(s) in the organization did not lay out the needs, attributes, capabilities, characteristics, or qualities of the system well. (Project failures 52%; accidents 40%)
Failed to consider human factor: Actor(s) in the organization failed to consider a human factor in system development. This causal action describes, for example, failing to consider human factors in specifying procedures or physical design. (Project failures 30%; accidents 47%)
Failed to inspect: Actor(s) in the organization failed to inspect a crucial component. (Project failures 15%; accidents 37%)
Violated regulations: Actor(s) in the organization violated a regulation pertaining to the system. (Project failures 6%; accidents 33%)
Managed risk poorly: Actor(s) in the organization failed to identify, assess, formulate, or implement a proper mitigation measure. (Project failures 48%; accidents 77%)
Subjected to inadequate testing: One or more actors in the organization subjected a component or subsystem to inadequate testing. This causal action captures inadequate tests as well as adequate tests performed inadequately. (Project failures 45%; accidents 17%)
Violated procedures: Actor(s) in the organization violated a procedure pertaining to the system, such as a maintenance or operation procedure. (Project failures 21%; accidents 53%)
Did not allow aspect to stabilize: Actor(s) in the organization did not allow a system aspect like personnel, design, or requirements to stabilize before moving forward with the project. (Project failures 45%; accidents 7%)
Did not learn from failure: Actor(s) in the organization did not take past failures into account and a similar problem occurred. (Project failures 9%; accidents 50%)
Conducted maintenance poorly: Actor(s) in the organization failed to perform maintenance on a component or subsystem.
Study bias investigation
To determine whether our study suffered from strong indicators of bias, we enlisted an associate to perform the same extraction process on a few of the project failures we studied, so we could calculate inter-rater agreement on the results. We determined the presence (see Eq 1) of each cause from the associate's process and compared this result to the presence we determined from our own coding process to calculate the percent agreement. Table 5 shows the results of our analysis. The average inter-rater agreement was 82%, which indicates "very good" inter-rater agreement [43] and is a good indication that our process is free from rater bias.
Recommendation extraction and analysis
Project failure reports rarely contain recommendations. Only one of the project failures we studied contained recommendations (the Drug Enforcement Administration (D.E.A.) plane [44]), and these recommendations do not address the underlying problems that led to the failed acquisition. In contrast, most large accident investigations include extensive recommendations on how to prevent future accidents. Since we have found that accidents and project failures share many causes, recommendations from accident investigations are potentially also applicable to project failure prevention. Fig 4 describes our approach to coding and analyzing the recommendations from accident reports, using excerpts from the Imperial Sugar Refinery Accident report [45]. First, we linked the accident report findings to the corresponding recommendations. Some accident reports explicitly link recommendations to specific findings (e.g., the Space Shuttle Columbia accident report [46]), but most of the reports do not. For example, NTSB reports have a section labeled "findings" followed by a section labeled "recommendations", but in general there is no explicit link to the recommendations from the findings. One of the reports did not make any recommendations at all (the Bhopal accident [47]) and others made only a few recommendations, often addressing only a subset of the findings.
We used a similar approach to the cause coding to code the recommendations. Some findings had multiple recommendations that spanned many ideas, so a single cause could have more than one recommendation, and hence potentially more than one recommendation code.
In Fig 4, we connected the finding to a single recommendation, which we described using two recommendation codes because it contains two distinct ideas. In total, we identified 16 recommendation codes, as shown in Table 6. Last, we linked the causes from the actor-causal action-object codes to the recommendation codes. We linked only those recommendations that we could reasonably infer corresponded to the causes we identified. Fig 5 displays the recommendation code distribution for managed risk poorly. Overall, we did not find recommendations for 30% of the accident causes.
This cause-recommendation linking effort has shown that the recommendations made in accident reports are not without flaws. First, the effort needed to link causes and recommendations (those that are linked by inference) highlights a lack of clarity in accident reports. Second, we were not able to link many causes to recommendations, which indicates that there are problems the investigators found that they (1) did not have the resources to make a recommendation for, (2) did not know how to solve, or (3) did not think was critical enough to improve upon. Nevertheless, the recommendations made in accident reports are likely more useful than those we found in the project failure literature because they provide more specific, actionable guidance.
Cause network
The cause network is based on the cause presence and the probabilities of finding pairs of causes in a given accident or project failure. Table 7 shows the intersectional probabilities $P(\text{cause}_i \cap \text{cause}_j)$ for "failed to consider human factor" ($\text{cause}_i$) and all the other causes, for both accidents and project failures. For example, failed to supervise occurred together with failed to consider human factor in 21% of project failures and 37% of accidents.
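The intersectional probabilities can be computed directly from the same set-based representation used for the presence measure. The sketch below is our illustration, not the authors' code.

```python
from itertools import combinations

def intersection_probabilities(failures, causes):
    """P(cause_i AND cause_j): fraction of failures containing both causes.
    `failures` is again a list of sets of cause labels."""
    n = len(failures)
    return {(ci, cj): sum((ci in f) and (cj in f) for f in failures) / n
            for ci, cj in combinations(causes, 2)}

# The resulting dictionary can feed a graph library (e.g. networkx), with causes
# as nodes sized by presence and edges weighted by these probabilities.
```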
We plotted the intersectional probabilities of causes for accidents and for project failures as undirected graphs, as shown in Figs 6 and 7. The nodes represent the causes, and the links represent the cause intersectional probabilities. Heavy links indicate high intersectional probabilities, thin links the opposite. Large nodes indicate a high cause presence, small nodes the opposite. Linked nodes appear closer to each other, and unlinked nodes appear further from each other. In project failures (Fig 6), the eight causes with low presence (<20%), such as enforced inadequate regulations, are all outlying nodes with thin connections. Similarly, the five causes with low presence in accidents (Fig 7), such as did not allow aspect to stabilize, are all outlying nodes with thin connections. The two causes with high presence (>70%) in project failures (failed to consider design aspect and failed to supervise) are both internal nodes with heavy connections.

Table 6. Recommendation code definitions [14].
Conduct random and independent evaluations: Perform an evaluation like an inspection or audit on a component, organization, or person, and do it randomly, often, and by an independent organization or party.
Develop a comprehensive and rigorous test: Develop a test that includes all possible regimes, equipment, and situations, and is stricter than what is minimally necessary (e.g. to a certain factor of safety).
Develop specialized training: Develop training to teach, reiterate, or reinforce a specific aspect related to the failure.
Establish a program or service: Establish a program to aid a process, such as a record-keeping program.
Establish an independent and transparent supervisory agency: Establish an agency that acts as a watchdog for an aspect of the failure.
Establish more checks in the system: Put more checks in the system, for example a supervisor's signoff, such that work cannot continue without conducting the check.
Give supervisor more capacity for oversight: Provide supervisors with the power to enforce the rules to which systems are required to adhere.
Identify weak areas: Assess what aspects of the system may be neglected.
Improve efficiency in critical tasks: Improve how a task is done, for example by eliminating steps in a procedure, providing better equipment, or making software assistance to operators more logical.
Increase resources: Provide more aspects like people, money, or equipment, to an aspect of the system.
Involve stakeholders in decision-making: Involve more stakeholders to provide additional points of view that were previously lacking.
Keep up with current technologies: Improve technological aspects of the system like outdated computer systems, or emergency systems.
Make instructions more clear: Improve instructional aspects of the system, such as procedures, job descriptions, employee roles, or any other type of instruction, to be clearer.
Make regulations more strict: Improve regulations to make the standards to which the system is held more stringent.
Review decision-making logic: Instead of incrementally making small changes to a system, change, for example, how aspects of the system are addressed, or review the system from a high-level perspective.
Track compliance to an objective standard: Hold system activities to applicable standards, such as ensuring drawings follow a template or having every employee complete the same training.
Cause-recommendation network
Next, we built a cause-recommendation network using the links we identified between the causes and the recommendation codes. In Fig 8, the black nodes are causes, and the gray nodes are recommendations. For clarity, we have omitted the cause-cause links. Like the cause networks, nodes with many connections repel nodes with few connections. Thin links indicate that the cause and recommendation were connected only one or two times; heavy links the opposite, with the thickest line indicating 49 connections between managed risk poorly and no recommendation (see Fig 5). Some causes only have a few recommendations; this situation occurs when causes are quite specific and also have quite specific recommendations. For example, a frequent recommendation for subjected to inadequate testing is develop a more comprehensive and rigorous test (that is, a frequently suggested solution to inadequate testing is adequate testing!). Other causes are more ambiguous and are thus covered by a wider range of recommendations. Such causes include failed to supervise, which is covered by recommendations like conduct random and independent evaluations and develop specialized training.
Application of the network
In the Introduction, we discussed how suggestions for improvement are often either so general they are essentially platitudes ("put your best people on the job"), or highly specific to particular contexts (e.g., "replace the faulty burst valve"). In contrast, our study straddles both of these approaches to not only provide practitioners general language to help them categorize their problems but also provide specific examples of each of these general problems in a wide variety of industries and contexts. Subsequently, many of these specific problems have expert-provided recommendations that practitioners may use as inspiration for solving their own problems. Here, we demonstrate two aspects of how the information in the cause-recommendation network can be used to identify useful and informative guidance.
Identifying and understanding potential causes
An organization that suspects it may have problems can use the network to identify the most frequent causes. Our analysis of project failure and accident causes showed where most of the problems are likely to be found for either type of event. In the cause extraction section, we suggested that an organization looking for what problems may lead to failures look for the causes with the highest frequencies from Table 4 for that failure type. The most frequent cause in both accidents and project failures is failed to consider design aspect (Table 4). To help illustrate this and the other causes, the network also provides over 900 "back stories" of how each cause has appeared in accidents and project failures. Table 8 shows examples of these back stories from both accidents and project failures for failed to consider design aspect.
These examples show the pitfalls of major design decisions, such as having two (formerly competing) contractors build separate ends of a large system while neglecting coordination effort or how delayed common parts in the development of a program can snowball to cause large-scale delays. A practitioner who is interested in the ramifications of issues like failing to consider certain aspects of design could peruse these examples.
Identifying and understanding potential recommendations
Practitioners may find it useful to see what general improvements accident investigators most often recommended to make cost-effective and efficient resourcing decisions on their project. Fig 9 shows the 16 recommendations, ranked by the percentage of accident causes connected to each one. The percentages do not add up to 100% because many causes are linked to more than one recommendation code (see Fig 4) and some causes are not linked to any recommendations. For example, make instructions more clear accompanied 17% of the causes in accidents that had recommendations. An organization seeking to make general improvements without prior knowledge of problems should start by following the recommendation codes with the highest percentages. These recommendations are most likely, based on our dataset, to be applicable in any given organization.
In Fig 9, it is not surprising that identify weak areas was most often recommended: it is hard to imagine a scenario in which identifying weak areas is not a good idea! Similarly, many of the other recommendations also appear self-evident, but may be hard to translate into concrete context-specific terms. To help address this problem, the cause-recommendation network provides 600 back stories of the recommendations and the problems that led to the recommendations. For example, Table 9 shows examples of why and how investigators made the recommendation identify weak areas, which appears in 25 out of 30 accident investigations and is linked to 29% of accident causes.
If an organization has identified a particular problem behavior, it can use the cause-recommendation network to identify the most appropriate recommendations for addressing that behavior. For example, suppose an organization discovers that it did not adequately supervise a project. Table 10 shows the associated recommendations for failed to supervise, as well as the relative ranking of each recommendation, based on how often we connected them to failed to supervise, described as a percentage as well as a raw count. Thus, for example, identify weak areas was recommended 16 times in response to failed to supervise, which we identified a total of 117 times in our accidents and project failures. Thus its percentage is 16/117 ≈ 14%.
The network also allows users to sort by other categories, such as industry type; a user could, for instance, see all causes related to government acquisitions or aircraft crashes.
Our work is currently available in an interactive web-based platform at https://engineering.purdue.edu/VRSS/research/force-graph/index_html, where the user is able to click on a certain cause, see what other causes are related to it, and then see recommendations related to that set of causes. For details on how we constructed this interactive version of the network and how we propose practitioners use the network to identify problems and potential solutions in their own organizations, see [14].
To see our preliminary results on using this network with novice and expert systems engineers to determine whether this tool is useful for forming remediation measures for problems on projects, refer to [38].
Table 8. Examples of back stories, from accidents and project failures, for the cause failed to consider design aspect.
Upper Big Branch Mine explosion [48] The Upper Big Branch Mine was a coal mine in West Virginia that suffered an explosion that killed 29 miners. Coal mines require constant "rock dusting" to keep coal dust levels down to prevent explosive atmospheres from forming within the mine. Among other causes, the mine was so large that workers conducting rock dusting had to make many trips to reload material to rock dust the entire mine.
Design aspect not considered: A chute-like delivery system to the center of the working area of the mine would have made rock dusting easier.
ValuJet flight 592 crash [49] Contractors working for ValuJet Airlines were refurbishing an aircraft and removed its expired chemical oxygen generators, used to supply oxygen to passengers in situations when a plane suffers a decompression during flight. The contractors improperly packaged and labeled the generators as empty rather than expired. Eventually the expired, but not empty, generators were shipped on flight 592. During takeoff, a fire started in the cargo hold, and would have burned itself out had the (now damaged) generators not supplied the fire with oxygen. The plane was eventually overwhelmed by the fire and crashed. The passengers and crew were all killed on impact. The NTSB report also noted that even if the aircraft had managed to land, the passengers might have been injured or killed by toxic air.
Design aspect not considered: The emergency oxygen masks deployed during in-flight emergencies do not separate cabin air, which could be toxic in the event of a fire, from the oxygen flow.
Westray Mine collapse [39] The Westray Mine was a coal mine in Nova Scotia that had a history of problems because the mine's management frequently took shortcuts to improve production at the cost of safety.
Design aspect not considered: the ventilation system in the mine was designed in a haphazard way; for example, the fans were placed in locations within the mine that were not conducive for the air flow. Thus, the ventilation system allowed methane gas and coal dust to build up, eventually causing the mine to collapse.
Iridium satellite phone cancellation [50] The Iridium satellite phone was a phone that could connect a call anywhere on Earth at a time when cell phone coverage was unreliable. The phone did not sell well, and the founding company declared bankruptcy, although the satellite system remains operational.
Design aspect not considered: Designers did not properly consider their customers' needs. The phone was extremely expensive, calls could only be made outside (within line-of-sight of the satellite network), the phone was difficult to use and required special training, a special cartridge was required to make conventional mobile network calls, and the phone itself was large, weighing over 1 lb.
Seawolf Navy Submarine delays and cost overrun [51] The Seawolf Navy submarine was delayed and over-budget.
Design aspect not considered: Two contractors who had originally competed to win the contract were commissioned to design and build the aft and forward sections of the submarine separately. This decision underestimated the immense coordination and cooperation that would be required between the contractors, as well as the extensive design and construction rework the program eventually needed.
F-35 Lightning II delays and cost overrun [31] The F-35 Lightning II is currently delayed and over-budget.
Design aspect not considered: The aircraft is intended to be one-size-fits-all for the United States Navy, Air Force, and Marines, which means that a single design, with slight modifications, is meant to meet the needs of all three customers. This common-platform decision did not fully consider the challenges and compromises involved in trying to meet divergent needs. In addition, development on all three variants is delayed whenever a common part fails.

The source data for this research is available on the Purdue University Research Repository: https://purr.purdue.edu/publications/2859.
Conclusion and future work
We identified a set of 30 accidents and 33 project failures, spanning a wide range of industries. Next, we modified Leveson's STAMP model and used it to methodically extract and analyze their causes. We found 23 different failure causes, most of which appear in both accidents and other project failures, suggesting that accidents and project failures happen in similar ways. We also identified 16 different recommended remedial actions. We link these causes and recommendations in a cause-recommendation network, and associate over 900 specific examples of how these causes manifested in failures, and over 600 specific examples of the associated recommended remedial actions, with each cause or recommendation.
The limitations of this study are as follows. The first lies in identifying project failures to study. As Jugdev & Müller [56] state in their paper on understanding project success: "Trying to pin down what success means in the project context is akin to gaining consensus from a group of people on the definition of 'good art'."

Table 9. Examples of source accidents for recommendation code identify weak areas.
Created inadequate procedures
Swissair Flight 111 Crash [52] A fire started on the aircraft while it was flying, and because it propagated in unoccupied parts of the aircraft, it went unnoticed and eventually brought the plane down. Investigators found that the in-flight entertainment system was improperly installed on the aircraft, and as a result wires from the system chafed against metal components in the attic area of the aircraft. To identify discrepancies for all aircraft of this model in service, the investigators recommended that the FAA require an inspection for wiring discrepancies, such as chafed or cracked wire insulation.
Subjected to inadequate testing
Deepwater Horizon oilrig blowout and fire [29] The Deepwater Horizon oilrig was plugging a new oil well in a standard procedure. The well's plug burst and the ensuing oil spurting from the well caught on fire and destroyed the rig. When the crew performed a test to determine the plug's integrity, the test results indicated that the plug was not secure, but instead of investigating the results, the crew performed a different test and mistakenly concluded the plug was secure. The investigators recommended that the regulator require oilrig operators seeking design approval to demonstrate that well components are equipped with sensors or other tools to obtain accurate diagnostic information on the status of the well.
Failed to consider design aspect Buncefield oil fire [53] The Buncefield oil storage depot was filling a storage tank with oil when a gauge designed to detect when the oil reached a high point failed. There was no alarm and the receiving site could not halt the flowing oil. The tank overfilled and a spark lit the spewing oil on fire, causing an explosion. Operators relied on a retaining wall around the tank as a backup system to ensure that liquids would not be released to the environment. However, this containment method failed and oil and firefighting liquids flowed off site and entered the groundwater. The investigators recommended that operators of oil storage sites evaluate the siting and protection capabilities of emergency response measures at their facilities for potential weaknesses.
Created inadequate procedures
Colgan 3407 crash [54] Colgan Air flight 3407 was on approach to Buffalo in icing conditions, and the pilot had the aircraft on autopilot, which made it more difficult for him or the co-pilot to realize that the wings were icing. Neither the pilot nor the co-pilot responded appropriately to stall warnings and were not able to recover the aircraft from the stall, leading to the aircraft crashing. Pilot fatigue played a role in the pilots responding inappropriately, and at the time Colgan did not provide any information to its pilots about fatigue prevention. The investigators recommended that airline operators address fatigue risks associated with pilot commutes, including identifying pilots who commute and providing guidance to mitigate fatigue risks.
Failed to consider human factor
Texas City refinery explosion [55] The Texas City Refinery was a plant that refined oil into unleaded petrol. As part of this process, the plant used a raffinate splitter that separated as much as 45,000 barrels of the fluid into lighter and heavier hydrocarbon components using a tall tower. A combination of factors led to the relief system for the tower overfilling and raffinate fluid spilling out, creating a flammable vapor cloud that was eventually ignited by a nearby pick-up truck engine a worker had left running. In particular, the control board display for the process did not provide adequate information to the operator, such as the imbalance of the flow of hydrocarbons. The CSB recommended that the plant evaluate its process units to ensure that critical process equipment is safely designed, such as by having effective instrumentation and control systems and by configuring control board displays to clearly indicate material balance.
Not only is project success difficult to define, but project failure is also not simply one minus the definition of project success. Readers may disagree with the way in which we defined project failures (e.g., we classified unmanned space mission failures as project failures, but we classified the Space Shuttle disasters in which the crews were killed as accidents), but this distinction has no material effect on our results, and our results are potentially useful for any project experiencing problems, whatever the distinction. Second, studying a set of previously reported project failures and accidents is inherently subject to bias from the investigators. These biases are inherent to any approach based on studying investigation reports; we discuss these potential biases at length in [34]. Third, the extraction and coding process is subject to bias by the coders. Different coders may identify more or fewer causes or recommendations in a given report, and different coders may assign a given finding or recommendation to different codes. Since we provide in the network both the original sources and the paraphrased "stories" behind each instance of each code, the impact of the code creation and allocation process is minimal.
In this paper, we focused on the causes. In future work, we will expand the network by incorporating other aspects from our analysis, for instance (1) the actors involved in each cause, (2) the types of objects involved in the causes and the difference between project failures and accidents (e.g., what types of testing were involved), or (3) when in the design cycle the cause occurred. Companies experiencing problems during project development may use the cause-recommendation network as a guide to analyze any issues they have found, identify other potential related issues, and then use the recommendation codes to reduce the likelihood of failure.
We developed a specialized coding scheme to compare the causes of systems-engineering-related accidents and project failures. There are also other coding schemes, both more general and more specific, such as the HFACS accident causation hierarchy. Part of our future work may include mapping our coding scheme to other methods to analyze the differences between the coding schemes and determine whether different patterns emerge.

Table 10 (continued). Recommendations associated with failed to supervise: percentage of the 117 identified instances, and raw count.
Establish an independent and transparent supervisory agency: 11% (13)
Establish a program or service: 10% (12)
Conduct random and independent evaluations: 9% (11)
Make regulations more strict: 8% (9)
Increase resources: 7% (8)
Review decision-making logic: 5% (6)
Track compliance to an objective standard: 5% (6)
Develop specialized training: 4% (5)
Make instructions more clear: 4% (5)
Give supervisor more capacity for oversight: 3% (4)
Involve stakeholders in decision-making: 2% (2)
Keep up with current technologies: 2% (2)
Establish more checks in the system: 1% (1)
Improve efficiency in critical tasks: 1% (1)
Develop a more comprehensive and rigorous test: 0% (0)

Adding findings to the network is easy, but extracting and coding them requires significant effort. Machine learning methods may provide an automated way of adding failures to our cause-recommendation network [57-59].
Finally, in related work we are using game theoretic approaches to explore the underlying reasons behind the causes we identified here [60]. | 2019-04-16T13:26:44.308Z | 2020-03-06T00:00:00.000 | {
"year": 2020,
"sha1": "980e2da8953256706c6c1ec9576b77d0f5f7ef24",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0229825&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a8531f831932a1d2a01bbddacc11b42e2d6b3d2d",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering",
"Medicine"
]
} |
238001375 | pes2o/s2orc | v3-fos-license | The use of accounting outsourcing in small agricultural enterprises
Abstract. Accounting and management accounting at an agricultural enterprise reflect the results of economic and financial activities. The success of the company's operation largely depends on the performance of its accounting functions. The article deals with the problems of setting up and maintaining management accounting at agricultural enterprises. The necessity of using accounting outsourcing in small agricultural enterprises is justified. Calculating the cost of production of an agricultural enterprise is an important problem on the way to reducing it. To calculate the cost of production effectively, an agricultural enterprise should combine the process-by-process and batch methods of cost accounting. This makes it possible to increase control over production costs, but requires accounting specialists with the appropriate competencies and skills. As a rule, small agricultural enterprises, and primarily farms, use a simplified accounting and reporting system. The simplified form of accounting and reporting, on the one hand, significantly simplifies the preparation and submission of tax reports for small agricultural enterprises, but at the same time does not allow for effective control of production costs. Accounting outsourcing makes it possible to reduce the company's spending on accounting services and at the same time increase their efficiency.
Introduction
Broad socio-economic transformations taking place in modern society and the development of agricultural technologies have complicated financial and economic operations, increased market competition, and changed the dynamics of the development of agricultural enterprises.
This, in turn, requires solving practical and theoretical problems related to the methodology and organization of accounting, as well as the cost control of agricultural enterprises (Figure 1). According to [1,2], accounting at an agricultural enterprise is designed to reflect financial and economic activities and the process of selling agricultural products, and to form an information base for enterprise management. The provision of food to the population largely depends on the performance of accounting functions.
According to [3], the entire enterprise management system, including the planning and organization of financial and economic activities, the motivation of personnel, and the control of income and expenses, depends on information support based on accounting data.
According to [4], the shortcomings of such information support have a negative impact on the efficiency of managerial decision-making.
The authors of [5] identify two reasons that distort the correctness of the operational determination of the cost of production at agricultural enterprises (Figure 2). According to [6], calculating the cost of production of an agricultural enterprise is an important problem on the way to reducing it. The amount of income tax, as well as the assessment of the profitability of production, depends on the cost of production.
Accounting and calculation of production costs provides for a complete, timely and reliable display of the costs incurred in the production and sale of products.
According to [7,8,9], production in an agricultural enterprise can be divided into main and auxiliary. The main type of production is the production of agricultural products, while the auxiliary type is all production that serves the main production: the repair of machinery and equipment, as well as the storage, transportation, and sale of finished products.
According to [10,11], it is advisable to use the batch (order-by-order) method of accounting for costs incurred in auxiliary production. The analysis of the costs corresponding to each completed order makes it possible to compare the profitability of different orders. In addition, the order-based method makes it possible to compare the costs of the same type of product incurred at different times.
According to [12], the batch (order-by-order) accounting method has a number of disadvantages: 1) the complexity of taking inventory of work in progress; 2) full information about the costs incurred is available only at the end of order execution, which makes it difficult to take effective operational measures to manage order execution.
According to [13], for the main production of an agricultural enterprise, it is advisable to use a process-by-process method of accounting for the costs incurred. The technological process of agricultural production is divided into separate technological processes, and agricultural products are created as a result of the interaction of these processes. In the process-based accounting method, costs are compiled not by order but by individual technological processes. Thus, in the process-based accounting method, as a rule, the separate structural divisions of an agricultural enterprise act as the objects of calculation.
In this view, the process-based method of cost accounting is less labor-intensive and more transparent in comparison with the order-based method.
According to [14], the process-based method of cost accounting allows you to control the deviation of the amount of costs incurred in relation to the standard indicators.
According to [15], the disadvantages of the process-based cost accounting method include the complexity of estimating work in progress.
Thus, based on the analysis, we came to the conclusion that, for an effective calculation of the cost of production, an agricultural enterprise should combine the process-by-process and batch methods of cost accounting. This will strengthen the control of production costs, but requires accounting specialists with the appropriate competencies and skills. As a rule, small agricultural enterprises, and primarily farms, use a simplified accounting and reporting system.
On the one hand, the simplified form of accounting and reporting makes it easier for agricultural enterprises to prepare and submit tax reports, and on the other hand, it does not make it possible to effectively control production costs.
The authors of [16] distinguish three types of accounting outsourcing: accounting consulting, partial outsourcing, and full outsourcing. In accounting consulting, outsourcing is usually limited to consulting in order to monitor the current activities of the accounting department, as well as to promptly inform it about the latest changes in tax legislation related to the activities of the enterprise. In selective (partial) outsourcing, a third-party accounting or audit firm performs only part of the accounting functions, usually those related to routine operations. This makes it possible to reduce the staff of the accounting department without reducing the efficiency of its work. Full outsourcing means delegating all accounting functions to a third-party enterprise.
Methods
During the implementation of this study, we applied an analytical method by which the problems under study were considered in their development and unity.
Taking into account the goals and objectives of this research work, a structural and functional method of carrying out scientific research was used.
This allowed the authors to consider a number of problems related to the use of accounting outsourcing in small agricultural enterprises.
Results
The prerequisite for writing this research paper was an appeal to us by the head of the small agricultural enterprise "Kolos". He was interested in what hidden reserves the company had for accelerating its development. At the beginning of the study, we found out that the company cultivates 96 hectares of land. Part of the land is owned by the company, and the rest is leased from local residents. The company uses a simplified accounting and reporting system, which allows it to keep a single accountant on staff, who also performs the duties of head of the personnel department. When making management decisions, the head of the agricultural enterprise relied on accounting data and on his personal experience of managerial activities. However, the data used for the management of the enterprise were taken, as a rule, from quarterly accounting statements. Because prices for agricultural products are highly variable, data from quarterly accounting reports did not allow optimal management decisions to be made.
We proposed that the head of the agricultural enterprise "Kolos" establish and maintain specialized management accounting at his enterprise. To support this process, it was proposed to contract for the services of an audit firm with experience in working with agricultural enterprises.
The disadvantages associated with the lack of effective management accounting make an agricultural enterprise, as a rule, less competitive in the agricultural market. Under these conditions, the largest multinational enterprises, with a higher level of accounting automation, have advantages over domestic agricultural enterprises. Effective implementation of management accounting allows for the collection of information for the operational and strategic management of the enterprise. Management accounting makes it possible to optimally allocate the resources that the enterprise has and thereby increase the efficiency of its functioning. In addition, management accounting makes it possible to: 1) Ensure operational cost control.
2) Plan the activities of the enterprise through budgeting.
3) Analyze the activities of the enterprise on the basis of its data, and make informed decisions based on this analysis. 4) Link the level of costs with the quality and quantity of products. The level of economic efficiency of agricultural production depends not only on the ratio of revenue received to costs incurred, but also on the quality of the management of the production processes occurring at the enterprise.
Management accounting makes it possible to build a cost management system at an agricultural enterprise and, on this basis, to identify low-profit and high-profit areas of activity of an agricultural enterprise.
To improve the efficiency of management accounting at an agricultural enterprise, in our opinion, it is necessary to: 1. Consider the main directions, as well as the problems, of the impact of the organizational and economic structure of the enterprise on the formation of accounting information flows.
2. Determine the main directions of management accounting, as well as ways to obtain the necessary data for its functioning on the financial and economic operations of the enterprise.
3. Consider the grouping of operating costs of individual structural divisions of the enterprise.
It is not always possible for a small agricultural enterprise to solve such problems independently. Therefore, to improve the management efficiency of small agricultural enterprises, we suggest using accounting outsourcing. It makes it possible to increase the efficiency of the functioning of the agricultural enterprise and to save labor and financial resources. Transferring management and financial accounting by outsourcing to an accounting or audit firm allows an agricultural enterprise to: 1. Solve the problem of finding a qualified specialist in the field of accounting. 2. Reduce payroll expenses, as well as the related taxes.
3. As a rule, a small company has one accountant on its staff, whereas the audit company has a staff of accountants and, if necessary, can replace the accountant serving the agricultural enterprise.
4. Most importantly, benefit from the competence of the audit firm's employees, including in complex and non-standard problems. This makes it possible to organize management accounting at the agricultural enterprise properly.
As a result of the introduction of management accounting at the small agricultural enterprise "Kolos", production costs at the enterprise were reduced by 6%. This was achieved by more effective control over the activities of the company's structural divisions. Based on the analysis of management accounting data, the company was able to obtain a loan for the purchase of new agricultural machinery from a commercial bank. For the credit department of the bank, the management accounting data served as an additional source of information about the production and economic activities of the small agricultural enterprise "Kolos".
Despite the additional costs associated with setting up and maintaining management records, the small agricultural enterprise "Kolos" received additional benefits that recouped these costs.
Discussion
Improving the quality and efficiency of the information provided by accounting to the management of the enterprise will increase the efficiency of the functioning of the agricultural enterprise. In addition, primary accounting documents are the evidence base for defending their point of view before various regulatory authorities. The proper execution of business transactions contributes to the effective preservation and use of the property owned by the owners of the enterprise.
The presence of management accounting allows you to have the most complete information picture about the conditions of economic activity of an agricultural enterprise, and this will assist in quickly responding to changes in the external and internal environment.
Management accounting makes it possible to form an optimal strategy for the development of the enterprise. At the same time, information about the assets of the enterprise in the management accounting is reflected not only in the qualitative but also in the quantitative dimension, this contributes to a better perception of the state of affairs at the enterprise.
The main difference between financial accounting and management accounting lies in the users of the information for which each is intended; these differences are reflected in the methods and tasks of management and financial accounting. In our opinion, the organization of the documentation system in an agricultural enterprise should include the following principles (Figure 3). We also believe that, in the process of designing a system of registers for accounting in an agricultural enterprise, one should strive to ensure that they reflect the content of financial and economic operations as fully as possible.
Mandatory management accounting is not regulated by law, but it is necessary for the management of the enterprise for the effective analysis of production activities, control and management of the enterprise. Financial accounting, being an integral part of management accounting, is regulated by state regulatory documents. It is understandable for an accountant and an auditor, but contains insufficient information necessary for the successful management of the enterprise. Unlike financial accounting, management accounting allows you to predict the production performance of an enterprise.
In modern economic conditions, it is of paramount importance to provide the management of an agricultural enterprise with operational information about its current economic activities. This requires strengthening the control, predictive, and informational functions of accounting, in accordance with the need to make economically sound management decisions.
Management accounting helps to optimize the management of production costs and the result of financial and economic activities of an agricultural enterprise.
Management accounting makes it possible, among other things, to form and process an array of information reflecting the financial and economic activities of the enterprise as a whole and of its structural divisions. Management accounting requires specialists with experience in this field: it involves developing planned targets, analyzing the actual performance indicators of the entire enterprise and its structural divisions, and preparing management decisions, including alternative ones. As a rule, the specialists required for this activity are absent in small agricultural enterprises, so, in our opinion, it makes sense to attract specialists from specialized audit or accounting firms through outsourcing.
At the same time, the organization of management accounting at the enterprise and its further maintenance requires additional material costs, so it is very important that these additional costs are minimal.
Conclusions
In our opinion, the use of various forms of outsourcing in agricultural enterprises makes it possible to increase the efficiency of their functioning. Outsourcing allows agricultural enterprises to gain access to the resources and technologies that outsourcers possess. We believe that accounting outsourcing services will be very effective for agricultural enterprises. At the same time, it is very important that outsourcers understand the specifics of the functioning of an agricultural enterprise, and that their employees have the necessary competencies to be able to provide such services. Management accounting at an agricultural enterprise, in our opinion, should have the following qualitative characteristics: 1. It is maximally focused on the requests of internal and external users. 2. It should contribute to the identification, systematization, and analysis of deviations from normative economic indicators.
3. Mandatory documentary substantiation of the facts of financial and economic operations.
4. Focus on solving specific management problems. 5. Maximum efficiency in determining not only actual but also planned and calculated indicators.
"year": 2021,
"sha1": "ea5540b415902fedf3a6a89ae0a01021ad1a37dd",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2021/61/e3sconf_abr2021_01002.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "dc391318484dd0f8f076b89ace52006a5669302a",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
45938866 | pes2o/s2orc | v3-fos-license | The full configuration interaction quantum Monte Carlo method in the lens of inexact power iteration
In this paper, we propose a general analysis framework for inexact power iteration, which can be used to efficiently solve high dimensional eigenvalue problems arising from quantum many-body problems. Under the proposed framework, we establish the convergence theorems for several recently proposed randomized algorithms, including the full configuration interaction quantum Monte Carlo (FCIQMC) and the fast randomized iteration (FRI). The analysis is consistent with numerical experiments for physical systems such as Hubbard model and small chemical molecules. We also compare the algorithms both in convergence analysis and numerical results.
Introduction
In recent years, following the work on full configuration interaction quantum Monte Carlo (FCIQMC) [4,7], the idea of using randomized or truncated power methods to solve the full configuration interaction (FCI) eigenvalue problem has become quite popular in the quantum chemistry literature. From a mathematical point of view, the FCI calculation essentially asks for the smallest eigenvalue of a real symmetric matrix (for ground state calculations) or a few low-lying eigenvalues (for low-lying excited state calculations). The computational challenge lies in the fact that the size of the matrix grows exponentially fast with respect to the number of orbitals / electrons in the chemical system, and thus a brute-force numerical diagonalization method (such as the power method or the Lanczos method) does not work except for very small molecules.
The goal of this work is twofold. On the one hand, we want to establish a general framework to understand these recently proposed randomized algorithms. As we shall see, from the angle of numerical linear algebra, these recent methods can be understood as generalizations of the conventional power method in which an inexact matrix-vector product is used. As a result, the convergence of these methods can be dealt with by a simple extension of the usual proof of convergence of the power method. A natural consequence of this understanding is that, to compare the various approaches, the crucial part is to understand the error caused by the different strategies of inexact matrix-vector multiplication. Using this insight, we will compare a few of the recently proposed randomized or truncated FCI methods, analytically and also numerically, using the Hubbard model and some small chemical molecules as toy examples.
While the motivation of the study is from FCI calculations in quantum chemistry, these methods can be understood in the general setting of numerical linear algebra, and hence, except in the numerical section, we will not restrict ourselves to the FCI Hamiltonian. For a given real symmetric positive definite matrix $A \in \mathbb{R}^{N \times N}$, we are interested in numerically obtaining the largest eigenvalue and the corresponding eigenvector. It is possible to extend the method to the leading $k$ eigenvalues, where $k$ is on the order of 1, based on the subspace iteration method, a generalization of the power method. In the sequel, we denote the eigenvalues of $A$ as $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_N \ge 0$, and the corresponding orthonormal eigenvectors as $u_1, u_2, \cdots, u_N$ (viewed as column vectors).
To obtain the largest eigenvalue and the corresponding eigenvector, one of the simplest algorithms is the standard power iteration, given by $y_{t+1} = A x_t$; $x_{t+1} = y_{t+1} / \|y_{t+1}\|_2$, with some initial guess $x_0$, iterated until convergence. The algorithm is simple to understand: the matrix multiplication $A x_t$ amplifies $x_t$ in the leading eigenspace. The convergence of the algorithm is also well known: as long as the initial vector satisfies $u_1^\top x_0 \neq 0$ and there exists an eigengap ($\lambda_1 > \lambda_2$), the subspace $\mathrm{span}\, x_t$ converges to the eigenspace $\mathrm{span}\, u_1$ linearly as $t \to \infty$, with rate proportional to $\lambda_2 / \lambda_1$.

This research is supported in part by the National Science Foundation under award DMS-1454939. We thank George Booth, Yingzhou Li, Jonathan Weare, Stephen Wright, and Lexing Ying for useful discussions during various stages of the work.
Since only the convergence of the subspace is of interest, the norm of the vector $x_t$ plays no role, and the normalization step of the power iteration may be omitted: $v_{t+1} = A v_t$. This is equivalent to the original power method. Of course, in practical computations, the normalization is important to avoid issues like arithmetic overflow.
Motivated by the recently proposed algorithms in the quantum chemistry literature, in this work we take the viewpoint that we cannot afford (or choose not to perform) the matrix-vector multiplication $y_{t+1} = A x_t$ exactly. Among other applications, such a scenario naturally arises when the dimension of the matrix $A$ is extremely large, so that even the storage of the vector $y_{t+1}$ (even in a sparse format) is too expensive. For example, this is a common situation for FCI calculations in quantum chemistry, since the dimension of the matrix $A$ grows exponentially with respect to the number of electrons in the chemical system.
Thus, in the power iteration, we replace the matrix multiplication step by a map
$$y_{t+1} = F_m(A, x_t). \qquad (1)$$
Here, given the matrix $A$ and the current iterate $x_t$, the map $F_m$, either deterministic or stochastic, outputs an approximation of the product $y_{t+1} \approx A x_t$. Different choices of $F_m$ correspond to various recently proposed algorithms, as will be discussed below. We have used the index $m$ to indicate the "complexity" (computational cost) of $F_m$; the specific meaning depends on the choice of the family of maps. Replacing the matrix-vector multiplication by (1), we get the Inexact Power Iteration Algorithm 1a and its unnormalized version 1b.

Algorithm 1a: Inexact Power Iteration
Initialization: Choose a normalized vector $x_0 \in \mathbb{R}^N$, $\|x_0\|_2 = 1$, $u_1^\top x_0 \neq 0$.
for $t = 0, 1, 2, \cdots$, while not converged do
    $y_{t+1} = F_m(A, x_t)$;
    $x_{t+1} = y_{t+1} / \|y_{t+1}\|_2$;
end

Algorithm 1b: Inexact Power Iteration without Normalization
Initialization: Choose a vector $v_0 \in \mathbb{R}^N$, $u_1^\top v_0 \neq 0$.
for $t = 0, 1, 2, \cdots$, while not converged do
    $v_{t+1} = F_m(A, v_t)$;
end

Notice that the two versions of the inexact power iteration, Algorithms 1a and 1b, are equivalent if the function $F_m(A, \cdot)$ is homogeneous; we will make this a standing assumption in our analysis.
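A minimal NumPy sketch of Algorithm 1a is given below; the exact product plays the role of $F_m$, and any of the approximate multiplication strategies discussed later can be plugged in instead. The test matrix here is an arbitrary illustrative choice.

```python
import numpy as np

def inexact_power_iteration(A, f_m, x0, num_iters=100):
    """Algorithm 1a: power iteration with a pluggable (possibly
    stochastic) matrix-vector product f_m(A, x) ~ A @ x."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(num_iters):
        y = f_m(A, x)              # inexact multiplication y ~ A x
        x = y / np.linalg.norm(y)  # normalization step
    return x

# Exact multiplication recovers the standard power method.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
A = A @ A.T                        # symmetric positive semidefinite
x = inexact_power_iteration(A, lambda A, v: A @ v, rng.standard_normal(50))
print("Rayleigh quotient:", x @ A @ x)   # ~ largest eigenvalue of A
```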
Various inexact matrix-vector multiplications have been proposed in the literature for configuration interaction calculations, either deterministic or stochastic; see, e.g., earlier attempts in [1,6,8,[11][12][13][15][16][17], the FCIQMC approach [3-5, 7, 27], the semi-stochastic approach [2,14,23,28], other stochastic approaches [10,19,30], and various deterministic strategies for compressed or truncated representation of the wave functions [9,18,20,21,[24][25][26][31][32][33][34]. In this work, we will focus on two such strategies: the full configuration-interaction quantum Monte Carlo (FCIQMC) [4,7] and the fast randomized iteration (FRI) [19]. In some sense, these methods represent two opposite ends of the spectrum of possibilities, so that the analysis of these can be easily extended to other methodologies. The FCIQMC uses interacting particles to represent the vector $v_t$ and a stochastic evolution of particles to represent the action of the matrix $A$ on the vector $v_t$. The FRI, on the other hand, is based on exact matrix-vector multiplication followed by stochastic schemes that compress the resulting vectors into sparse ones with a given number of nonzero entries. These algorithms will be discussed and analyzed in Section 3, following the general analysis framework we establish in Section 2.
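To illustrate the compression idea behind FRI, the following sketch implements one simple unbiased compression (i.i.d. sampling of entries with probability proportional to their magnitude). This particular scheme is our simplified stand-in; the actual FRI compressions of [19] are more elaborate, e.g., keeping large entries exactly and randomizing only the remainder.

```python
import numpy as np

def compress(v, m, rng):
    """Unbiased random compression of v to at most m nonzero entries:
    sample indices i.i.d. with probability |v_i| / ||v||_1 and return
    Phi_m(v) = (||v||_1 / m) * sum_j sign(v_{i_j}) e_{i_j},
    so that E[Phi_m(v)] = v."""
    norm1 = np.abs(v).sum()
    p = np.abs(v) / norm1
    idx = rng.choice(len(v), size=m, p=p)   # i.i.d. index samples
    out = np.zeros_like(v, dtype=float)
    np.add.at(out, idx, np.sign(v[idx]) * norm1 / m)
    return out

rng = np.random.default_rng(1)
v = rng.standard_normal(1000)
# Averaging many independent compressions recovers v (unbiasedness check).
avg = np.mean([compress(v, 200, rng) for _ in range(500)], axis=0)
print("relative bias:", np.linalg.norm(avg - v) / np.linalg.norm(v))
```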
The rest of the paper is organized as follows. In Section 2, we provide a convergence analysis for a generic class of inexact power iterations. In Section 3, we give more details of FCIQMC and FRI and analyze them following the convergence analysis established in Section 2. In Section 4, we perform numerical tests on the 2D Hubbard model and some chemical molecules to compare the various algorithms and to verify the analysis results.
General convergence analysis of inexact power iteration
An advantage of taking a unified view of the various algorithms is that their convergence can be understood in a fairly generic way, which also facilitates the comparison of the different proposed strategies. In this section, we establish a general convergence theorem for the inexact power iteration.
The convergence of the iteration to the desired eigenvector will be measured by the angle between the vectors. Recall that the angle between two vectors $v$ and $w$ is given by
$$\theta(v, w) = \arccos \frac{|v^\top w|}{\|v\|_2 \, \|w\|_2}.$$
From the definition, it is obvious that $\theta(v, w) = \theta(av, bw)$ for any vectors $v, w$ and nonzero real numbers $a, b$. In view of this insensitivity of the error measure to constant multiples of the vectors, if the inexact matrix-vector multiplication $F_m(A, v_t)$ satisfies the homogeneity assumption below, the two versions of the inexact power iteration, with or without normalization (Algorithms 1a and 1b), are equivalent.
Assumption 1 (Homogeneity). The map $F_m$ satisfies
$$F_m(A, cv) = c\, F_m(A, v)$$
for all vectors $v \in \mathbb{R}^N$ and real numbers $c \in \mathbb{R}$.
More precisely, if the initial vectors of the two algorithms are the same up to a constant, $x_0 = c_0 v_0$, then there exist numbers $c_t$ such that $v_t = c_t x_t$ for all $t$. Therefore, $\theta(u_1, v_t) = \theta(u_1, x_t)$. In the following, when we analyze the algorithms, we will always use $v_t$ for the unnormalized iterate and $x_t$ for the normalized version. To analyze the effect of the inexact matrix-vector multiplication, we write $F_m(A, v_{t-1})$ as the sum of the exact matrix-vector product and an error term,
$$F_m(A, v_{t-1}) = A v_{t-1} + \xi_t, \qquad (4)$$
where $\xi_t$ is the error of the inexact multiplication at step $t$, and the dependence on $m$ is suppressed to keep the notation simple. Note that $\xi_t$ can be either deterministic or stochastic depending on the choice of $F_m$. For example, $\xi_t$ is deterministic for the hard thresholding compression and stochastic for both the FCIQMC and FRI methods. While we will proceed viewing $v_t$ as a stochastic process, the results apply to the deterministic case as well. Denote by $\mathcal{F}_t = \sigma(v_1, v_2, \cdots, v_t)$ the $\sigma$-algebra generated by $v_1, v_2, \cdots, v_t$. We assume that the error $\xi_t$ satisfies the following properties. Note that this assumption holds for both the FCIQMC and FRI algorithms, as we will prove in Section 3.
Assumption 2. The error $\xi_t$ in the inexact matrix-vector product (4) satisfies
a) Martingale difference sequence condition:
$$\mathbb{E}(\xi_t \mid \mathcal{F}_{t-1}) = 0; \qquad (5)$$
b) Second moment bound:
$$\mathbb{E}\big(\|\xi_t\|_2^2 \mid \mathcal{F}_{t-1}\big) \le \frac{C_e}{m}\, \|A\|_2^2\, \|v_{t-1}\|_1^2, \qquad (6)$$
where $C_e$ is a constant that is scale invariant of $A$ (i.e., it does not depend on the norm of $A$);
c) Growth of expectation 1-norm bound:
$$\mathbb{E}\big(\|v_t\|_1 \mid \mathcal{F}_{t-1}\big) \le \|A\|_1\, \|v_{t-1}\|_1. \qquad (7)$$
A few remarks are in order to help appreciate Assumption 2. The martingale difference sequence property is assumed here just for convenience; in fact, the convergence result extends to the biased case, as we will see in Theorem 2. The other two assumptions are more essential. Assumption 2b indicates that the error of the inexact matrix-vector product $F_m(A, v_{t-1})$ can be controlled by the sparsity of $v_{t-1}$, as the 1-norm of $v_{t-1}$ is a sparsity measure. This is a natural assumption, considering that the compression of a vector is easier if the vector is more sparse. The bound is proportional to the inverse of $m$, so that one can control the error of the inexact matrix-vector multiplication at the price of increasing the complexity. Note that the $1/m$ dependence can be understood as the standard Monte Carlo error scaling. More detailed discussions can be found in Section 3, where the specific algorithms are analyzed. Assumption 2c then assumes that the sparsity is not destroyed by the error in the iteration, since otherwise we would lose control of the accuracy of the inexact matrix-vector multiplication.
We now state the convergence theorem for the inexact power iteration Algorithms 1a and 1b.

Theorem 1. For the inexact power iteration Algorithm 1b, under Assumption 2, for any precision $\varepsilon > 0$ and small probability $\delta \in (0, 1)$, there exist a time $t_0$ and a measure of complexity $m_0$, specified in (8) and (9), such that with probability at least $1 - 2\delta$, for any $m \ge m_0$, it holds that $\tan \theta(u_1, v_{t_0}) \le \varepsilon$. Moreover, if Assumption 1 is satisfied, the same result holds for Algorithm 1a.

Proof. Let us estimate the angle between $v_t$ and $u_1$, the eigenvector associated with the largest eigenvalue. By definition,
$$\tan \theta(u_1, v_t) = \frac{\big\|\big(I - u_1 u_1^\top\big) v_t\big\|_2}{|u_1^\top v_t|}. \qquad (16)$$
For the denominator, the expectation and the variance of $u_1^\top v_t$ are computed using Assumption 2, and the Chebyshev inequality implies that, with probability at least $1 - \delta$, the denominator remains comparable to $\lambda_1^t\, |u_1^\top v_0|$. For the numerator of (16), the expectation is bounded using the second moment bound (6), and the Markov inequality then controls the numerator, for any $\delta \in (0, 1)$, with probability at least $1 - \delta$. One can then explicitly check that, with the choices of $t_0$ and $m_0$ in (8) and (9), for $m \ge m_0$ we obtain the claim of the theorem.
As mentioned above, it is possible to drop the martingale difference sequence condition in Assumption 2 and obtain a similar result. The reason is that the second moment bound (6) can be used to control the bias of $\xi_t$. We state this as the following theorem.
Theorem 2. For the inexact power iteration Algorithm 1b, under Assumptions 2(b) and 2(c), for any precision $\varepsilon > 0$ and small probability $\delta \in (0, 1)$, there exist a time $t_0$ and a measure of complexity $m_0$ such that, with probability at least $1 - 2\delta$, for any $m \ge m_0$, it holds that $\tan \theta(u_1, v_{t_0}) \le \varepsilon$. Moreover, if Assumption 1 is satisfied, the same result holds for Algorithm 1a.
Proof. The second moment bound (6) also controls the bias of the error: by the Cauchy-Schwarz inequality,
$$\big\| \mathbb{E}(\xi_t \mid \mathcal{F}_{t-1}) \big\|_2 \le \Big( \mathbb{E}\big( \|\xi_t\|_2^2 \mid \mathcal{F}_{t-1} \big) \Big)^{1/2}.$$
Thus we can again use the Markov inequality to bound both the numerator and the denominator on the right-hand side of (16) and obtain the claimed result.
Algorithms
In this section, we review two stochastic power iteration methods recently proposed in the literature: the full configuration-interaction quantum Monte Carlo (FCIQMC) [4] and the fast randomized iteration (FRI) [19]. They can be analyzed in the same framework we established in the previous section. In particular, we prove the convergence of the two algorithms using Theorem 1. We focus on these two methods since, in some sense, they represent two opposite ends of the spectrum of strategies for inexact matrix-vector multiplication. It is possible to combine the ideas and obtain a zoo of different approaches, possibly with better results; our analysis can be extended to these as well. We will also comment on two variants, iFCIQMC and hard thresholding (HT), closely related to the FCIQMC and FRI approaches.
Without loss of generality, we will assume the matrix A is close to the identity matrix and thus the eigenvalues λ i are close to 1 (we can always scale and center the original matrix so that this is true).
Full configuration-interaction quantum Monte Carlo.
3.1.1. Algorithm Description. FCIQMC is an algorithm that originated in the quantum chemistry literature to calculate the ground-state energy of a many-body electron system by a Monte Carlo algorithm for the full configuration interaction of the many-body Hamiltonian [4].
Let the Hamiltonian be a real symmetric matrix $H \in \mathbb{R}^{N \times N}$ under the Slater determinant basis. To find the ground state (the eigenvector associated with the smallest eigenvalue) of $H$, we write $A = I - \delta H$ for $\delta > 0$ sufficiently small and hence focus on the largest eigenvalue of $A$; this can be viewed as a first-order truncation of the Taylor series of $e^{-\delta H}$. It is also possible to construct other variants of $A$ from $H$, which we will not go into here.
The FCIQMC can be viewed as a stochastic inexact power iteration for finding the leading eigenvector of $A$, which corresponds most naturally to the unnormalized version of the inexact power iteration (Algorithm 1b). In the algorithm, the vector $v_t$ is not stored as a dense vector, but represented as a collection of "signed particles" $\{\alpha_t^{(i)}\}_{i=1}^{M_t}$, where $M_t$ is the number of signed particles at iteration step $t$. Each signed particle $\alpha$ has two attributes: a location $l_\alpha \in \{1, 2, \cdots, N\}$ and a sign $s_\alpha \in \{1, -1\}$. Denote by $e_l \in \mathbb{R}^N$ the standard basis vector with value 1 at its $l$-th component and 0 at every other component. Then each signed particle $\alpha$ represents a signed unit vector $\alpha = s_\alpha e_{l_\alpha}$. The vector $v_t \in \mathbb{Z}^N$ is given by the sum of all signed particles at time $t$:
$$v_t = \sum_{i=1}^{M_t} \alpha_t^{(i)}. \qquad (21)$$
With some ambiguity of notation, we refer to both the set of particles and the corresponding vector as $v_t$, connected by (21). As we always assume that particles with opposite signs at the same location are annihilated (see the annihilation step in the algorithm description below), the vector $v_t$ uniquely determines the set of particles.
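In code, the annihilated particle representation of $v_t$ is naturally a sparse map from locations to signed integer counts; a minimal sketch (our own illustrative data structure, not prescribed by [4]):

```python
from collections import defaultdict

def particles_to_vector(particles):
    """Collapse a list of signed particles (location, sign) into the
    sparse integer vector v_t of (21); opposite signs at the same
    location cancel, mirroring the annihilation step."""
    v = defaultdict(int)
    for loc, sign in particles:
        v[loc] += sign
    return {loc: c for loc, c in v.items() if c != 0}

# Three particles, two of which annihilate at location 4:
print(particles_to_vector([(4, +1), (4, -1), (7, +1)]))  # {7: 1}
```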
In FCIQMC, the inexact matrix-vector multiplication $F_m(A, v_t)$ consists of three steps of particle evolution: spawning, diagonal death/cloning, and annihilation. Write $A = A_d + A_o$ with $A_d$ the diagonal part and $A_o$ the off-diagonal part. The spawning step approximates $A_o v_t$; the diagonal death/cloning step approximates $A_d v_t$; and the annihilation step sums up the results from the previous two steps, approximating $A v_t = A_d v_t + A_o v_t$. The three steps are described in more detail below.
Spawning. Each signed particle $\alpha$ (we suppress the index of $\alpha_t^{(i)}$ to simplify notation) is allowed to spawn a child particle at another location, corresponding to a nonzero component of $A_o \alpha = s_\alpha A_o(:, l_\alpha)$. The location of spawning is chosen at random, with probability $p_{\mathrm{loc}}(l \mid l_\alpha)$, which in the original FCIQMC algorithm is chosen to be uniform over all nonzero components of $A_o \alpha$ for some simple Hamiltonian $H$. In general, $p_{\mathrm{loc}}(\cdot \mid l_\alpha)$ can be more complicated; we refer readers to [4] for more details. In the rest of the paper, $p_{\mathrm{loc}}(\cdot \mid l_\alpha)$ is assumed to be the uniform distribution over all nonzero components of $A_o \alpha$, while our analysis can be extended to other choices of $p_{\mathrm{loc}}(\cdot \mid l_\alpha)$.
Once the location $l$ is chosen, $n$ (possibly 0) children particles are spawned with the same sign $s = \mathrm{sgn}(A_o(l, l_\alpha)\, s_\alpha)$, determined by the sign of the vector entry $(A_o \alpha)(l)$. The location $l$ and the number $n$ are stochastically chosen such that the overall step gives an unbiased estimate of $A_o \alpha$; please refer to Algorithm 2a for details.
Algorithm 2a: FCIQMC – Spawning
Input: set of particles $\{\alpha^{(i)}\}$. Set $M_{\mathrm{sp}} = 0$.
for each particle $\alpha$ do
    Select a spawning location $l$ with probability $p_{\mathrm{loc}}(l \mid l_\alpha)$;
    Determine the expected number of children $q = |A_o(l, l_\alpha)| / p_{\mathrm{loc}}(l \mid l_\alpha)$;
    Randomly choose the number of children: $n = \lfloor q \rfloor + 1$ with probability $q - \lfloor q \rfloor$, and $n = \lfloor q \rfloor$ otherwise;
    Assign the sign $s = \mathrm{sgn}(A_o(l, l_\alpha)\, s_\alpha)$;
    Increase the number of particles $M_{\mathrm{sp}} = M_{\mathrm{sp}} + n$;
    Add $n$ particles with location $l$ and sign $s$ into the spawning set $\{\alpha^{(j),\mathrm{sp}}\}$.
end

Diagonal cloning / death. This step represents $A_d v_t$ as a collection of particles in a way analogous to the spawning step. For every signed particle $\alpha$, we consider children particles at the location $l_\alpha$ (i.e., the location of the new particles is chosen to be $l_\alpha$) and obtain an unbiased representation of $A_d \alpha$. The details can be found in Algorithm 2b; the key steps are similar to Algorithm 2a.
Algorithm 2b: FCIQMC – Diagonal death / cloning
Input: set of particles $\{\alpha^{(i)}\}$. Set $M_{\mathrm{diag}} = 0$.
for each particle $\alpha$ do
    Determine the expected number of children at $l_\alpha$: $q = |A_d(l_\alpha, l_\alpha)|$;
    Randomly choose the number of children: $n = \lfloor q \rfloor + 1$ with probability $q - \lfloor q \rfloor$, and $n = \lfloor q \rfloor$ otherwise;
    Assign the sign of each child as $s = \mathrm{sgn}\big((A_d \alpha)(l_\alpha)\big)$;
    Increase the number of particles $M_{\mathrm{diag}} = M_{\mathrm{diag}} + n$;
    Add $n$ particles with location $l_\alpha$ and sign $s$ into the set $\{\alpha^{(j),\mathrm{diag}}\}$.
end
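Both Algorithms 2a and 2b rely on the same primitive: stochastically rounding an expected child count $q$ to an integer $n$ with $\mathbb{E}[n] = q$. A sketch of this primitive, with an empirical unbiasedness check:

```python
import numpy as np

def stochastic_round(q, rng):
    """Return n = floor(q) + 1 with probability q - floor(q),
    else n = floor(q), so that E[n] = q (used in Algorithms 2a/2b)."""
    base = int(np.floor(q))
    return base + (rng.random() < q - base)

rng = np.random.default_rng(2)
samples = [stochastic_round(1.3, rng) for _ in range(100000)]
print(np.mean(samples))  # ~ 1.3
```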
Annihilation. The annihilation step merges the children particles from the previous two steps and removes all pairs of particles with the same location and opposite signs. If we denote by $v^{\mathrm{sp}}$ and $v^{\mathrm{diag}}$ the vector representations of the particles from the spawning and diagonal cloning/death steps, the annihilation step creates a collection of particles representing the new vector $v = v^{\mathrm{sp}} + v^{\mathrm{diag}}$. Applying the three steps above to the particles representing $v_t$, we obtain the new set of particles $v_{t+1}$ at time $t+1$. By construction, we have on expectation
$$\mathbb{E}(v_{t+1} \mid v_t) = A v_t.$$
In terms of the notation used in the framework of inexact power iteration, $v_{t+1}$ represented using particles can be viewed as the approximate matrix-vector product $F(A, v_t)$:
$$v_{t+1} = F(A, v_t) = A v_t + \xi_{t+1}, \qquad (25)$$
where $\xi_{t+1}$ is introduced in the last equality to denote the error of the approximate matrix-vector multiplication through the stochastic particle representation. As we will show in the analysis below, the accuracy of the FCIQMC iteration is controlled by the number of particles $M_t$, which thus plays the role of the complexity parameter $m$ in our general framework. We will drop the subscript $m$ of $F_m$ in the sequel for FCIQMC, as the complexity parameter is implicit. Now that we have defined the inexact matrix-vector multiplication $F(A, v_t)$ in FCIQMC, we may apply it in the inexact power iteration as Algorithm 1b. However, this can be problematic in practice. Recall that $A = I - \delta H$ is assumed to be a perturbation of the identity, so its eigenvalues are around 1. If the largest eigenvalue of $A$ is strictly larger than 1, then once the signed particles become a good approximation of the leading eigenvector, the number of particles $M_t$ will grow exponentially with rate $\lambda_1$, which quickly increases the computational cost and memory requirement. It is also possible (though the probability is tiny) that the number of particles decreases to 0 due to the randomness.
In practice, it is desirable to control the number of particles to make the algorithm more stable. One such strategy is to introduce a shift $s_t \in \mathbb{R}$ and use the matrix $A + \delta s_t I$ instead of $A$ at the $t$-th step. Notice that $s_t$ only shifts the eigenvalues while leaving the eigenspaces unchanged. The shift $s_t$ is adjusted dynamically to control the number of particles. With such shifts, the full FCIQMC algorithm is presented in Algorithm 2.
Algorithm 2: FCIQMC
Initialization: $t = 0$; set initial particles $v_0$.
// Phase 1: FCIQMC with fixed shift $s_0$
while $M_t \le M_{\mathrm{target}}$, the target population, do
    Spawning step: use Algorithm 2a with $v_t$ and $A + \delta s_0 I$ to get the particle set $v^{\mathrm{sp}}$;
    Diagonal death / cloning step: use Algorithm 2b with $v_t$ and $A + \delta s_0 I$ to get the particle set $v^{\mathrm{diag}}$;
    Annihilation step: merge to get the particle set of the next time step, $v_{t+1} = v^{\mathrm{sp}} + v^{\mathrm{diag}}$;
    Update $M_t$ and set $t = t + 1$;
end
Set $s_t = s_0$; // initialize the dynamic shift
// Phase 2: FCIQMC with dynamic shift $s_t$
while $t < t_{\max}$ do
    Spawning step: use Algorithm 2a with $v_t$ and $A + \delta s_t I$ to get the particle set $v^{\mathrm{sp}}$;
    Diagonal death / cloning step: use Algorithm 2b with $v_t$ and $A + \delta s_t I$ to get the particle set $v^{\mathrm{diag}}$;
    Annihilation step: merge the two sets of particles, $v_{t+1} = v^{\mathrm{sp}} + v^{\mathrm{diag}}$;
    Update $M_t$, update the shift $s_t$, and set $t = t + 1$;
end

Algorithm 2 contains two phases with different strategies for choosing the shift and thus controlling the particle population. In Phase 1, the shift is fixed at $s_0$, chosen such that $|A(i, i) - s_0| \ge 1$ for all $i$, so that the particle number is most likely to grow exponentially until reaching the target population $M_{\mathrm{target}}$. In the second phase, the shift is dynamically adjusted to control the growth of the population through a negative feedback loop. The target population $M_{\mathrm{target}}$ is chosen to be sufficiently large that the variance is small enough to ensure convergence; it plays the role of the 'complexity' $m$ in Theorem 1. $\eta$ and $q$ are two parameters controlling the fluctuation of the number of particles. For details of the parameter choices, we refer the readers to the original paper on FCIQMC [4].
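For concreteness, here is a toy Python implementation of a single FCIQMC sweep with zero shift and the uniform spawning distribution assumed in the analysis below. This is a pedagogical sketch, not the production algorithm of [4]; the test Hamiltonian, matrix size, and all names are illustrative assumptions.

```python
import numpy as np
from collections import defaultdict

def fciqmc_step(A, v, rng):
    """One FCIQMC sweep: spawning (off-diagonal), diagonal
    death/cloning, annihilation. v is a dict {location: signed count};
    by construction E[v_{t+1} | v_t] = A v_t, cf. (25)."""
    new = defaultdict(int)
    for k, count in v.items():
        s = 1 if count > 0 else -1
        col = A[:, k].copy()
        col[k] = 0.0                          # off-diagonal column a_{o,k}
        nz = np.flatnonzero(col)
        for _ in range(abs(count)):
            if len(nz) > 0:                   # spawning step (Algorithm 2a)
                l = int(rng.choice(nz))       # uniform p_loc over nonzeros
                q = abs(col[l]) * len(nz)     # |A_o(l,k)| / p_loc(l|k)
                n = int(q) + (rng.random() < q - int(q))
                new[l] += n * s * (1 if col[l] > 0 else -1)
            qd = abs(A[k, k])                 # diagonal step (Algorithm 2b)
            nd = int(qd) + (rng.random() < qd - int(qd))
            new[k] += nd * s * (1 if A[k, k] > 0 else -1)
    return {l: c for l, c in new.items() if c != 0}   # annihilation

# Tiny demo on A = I - delta*H for an illustrative 3x3 "Hamiltonian".
rng = np.random.default_rng(0)
H = np.diag([0.0, 1.0, 2.0]) - 0.1 * (np.ones((3, 3)) - np.eye(3))
A = np.eye(3) - 0.1 * H
v = {0: 50}                                   # 50 particles on state 0
for _ in range(200):
    v = fciqmc_step(A, v, rng)
print("population:", sum(abs(c) for c in v.values()))
```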
Energy Estimator. Several estimators can be used to estimate the smallest eigenvalue of $H$ based on the FCIQMC Algorithm 2; this eigenvalue is just a linear transformation of the largest eigenvalue $\lambda_1$ of $A$. One estimator is simply the shift $s_t$: when the algorithm converges, $v_t$ is approximately proportional to the eigenvector $u_1$, and since $s_t$ is adjusted to keep the number of particles steady, the largest eigenvalue of $A + \delta s_t I$ is approximately 1, hence connecting $s_t$ with the desired eigenvalue estimate, cf. (26). The other estimator we will consider is the projected energy estimator
$$E_t = \frac{v_*^\top H v_t}{v_*^\top v_t}.$$
Here $v_*$ is some fixed vector, for example the Hartree-Fock state of the system. It is clear that when $v_t$ becomes a good approximation of the eigenvector $u_1$, $E_t$ gives a good estimate of the desired eigenvalue.
In the numerical examples, we will focus on the projected energy estimator, since it can be applied to all the algorithms we consider in this work (while the shift estimator is unique to FCIQMC; in practice, it gives similar results to the projected energy estimator).
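For dense NumPy vectors, the projected energy estimator is a one-liner; the function name below is ours, not from [4]:

```python
def projected_energy(H, v_star, v_t):
    """E_t = (v_*^T H v_t) / (v_*^T v_t): converges to the smallest
    eigenvalue of H once v_t aligns with the ground state."""
    return (v_star @ (H @ v_t)) / (v_star @ v_t)
```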
Convergence Analysis.
Since FCIQMC can be viewed as an inexact power iteration as in (25), we apply Theorem 1 to analyze the convergence of FCIQMC. For simplicity, we focus on the case where the shift is constantly 0, $s_t = 0$, since the shift does not affect the eigenvector, which is the main focus of Theorem 1. The probability distribution in the spawning step, $p_{\mathrm{loc}}(\cdot \mid l_\alpha)$, is assumed to be the uniform distribution over all the nonzero entries of $A_o(:, l_\alpha)$. To avoid degenerate cases, we assume that each diagonal entry of $A$ is nonzero and each column of $A$ has at least 2 nonzero entries (so there is at least one possible location for children particles in the spawning step). We now check the three conditions in Assumption 2. The unbiasedness is guaranteed by construction, as discussed above for the FCIQMC algorithm: we have
$$\mathbb{E}\big(F(A, v_t) \mid \mathcal{F}_t\big) = A v_t,$$
or equivalently, the error $\xi_t$ is a martingale difference sequence:
$$\mathbb{E}(\xi_{t+1} \mid \mathcal{F}_t) = 0.$$
The expectation 2-norm bound is established in the following proposition.
Proposition 2. For the inexact matrix-vector multiplication (25) in the FCIQMC Algorithm 2, the conditional second moment of the error, $\mathbb{E}(\|\xi_{t+1}\|_2^2 \mid \mathcal{F}_t)$, is bounded by a constant multiple of $\|v_t\|_1^2 / M_t$, with a prefactor determined by $\max_k \|a_{o,k}\|_0 \|a_{o,k}\|_2^2$; here $a_k = A(:,k)$ is the k-th column vector of A, and $a_{o,k}$ is the k-th column vector of $A_o$, so that $a_{o,k}$ equals $a_k$ except for the k-th entry, $a_{o,k}(k) = 0$.
Proof. Since each particle evolves independently, the total variance is the sum of the variances over particles; hence, it suffices to consider each particle individually. To simplify the notation, without loss of generality, let us consider a particle with $\alpha_t^{(i)} = e_k$ for some k. Since the spawning and diagonal cloning/death steps are independent and unbiased, the error decomposes into the contributions of the two steps. For the spawning step, since $A_o e_k = a_{o,k}$, there are $\|a_{o,k}\|_0$ locations to spawn. Recall that p_loc(· | k) is assumed to be the uniform distribution, so each location is chosen with probability $1/\|a_{o,k}\|_0$. The diagonal cloning/death step is bounded similarly. Summing up the contributions from the two steps, and using $M_t = \|v_t\|_1$, we arrive at the claimed estimate. Here we emphasize the important role of the annihilation step in FCIQMC reflected in the error analysis above. Only with the annihilation step is $M_t = \|v_t\|_1$ true, so that the growth of the error is controlled as in the last step of the proof. In general, without annihilation, the error will be exponentially larger, as $M_t / \|v_t\|_1$ grows exponentially even when $v_t$ is close to the eigenvector $u_1$. Suppose $v_t$ is approximately $u_1$. Then $v_{t+1} \approx \lambda_1 v_t$, and therefore $\|v_{t+1}\|_1 \approx \|A\|_2 \|v_t\|_1$. However, for the number of particles $M_t$ without annihilation, $M_{t+1} \approx \||A|\|_2 M_t$, where $|A|$ is the entry-wise absolute value of A. To see this, let us denote by $v_t^+$ the vector represented by all the particles with positive sign and by $-v_t^-$ the vector represented by all the particles with negative sign.
Then $M_t = \|\tilde v_t\|_1$ without annihilation, where $\tilde v_t = v_t^+ + v_t^-$. We can easily check that $\tilde v_t$ evolves according to $\tilde v_{t+1} = |A| \tilde v_t$. So eventually $\tilde v_t$ will converge to the leading eigenvector of $|A|$, and $M_{t+1} \approx \||A|\|_2 M_t$. Noticing that $\|A\|_2 \le \||A|\|_2 \le \|A\|_1$, we see that $M_t / \|v_t\|_1$ grows exponentially at rate $\||A|\|_2 / \|A\|_2$ after convergence. Therefore, if the number of particles $M_t$ has an upper bound, which is always true in practice due to computational resource constraints, $\|v_t\|_1$ will decay to zero exponentially, which means the algorithm will not converge to the correct eigenvector. We also note that if the spawning distribution $p_{loc}(\cdot \mid l_\alpha)$ is not exactly the uniform distribution, then $\mathbb{E}(\|F(A_o, e_k) - A_o e_k\|_2^2 \mid \mathcal{F}_t)$ will be bounded by another constant depending on $A_o$; therefore the bound on $\mathbb{E}(\|\xi_{t+1}\|_2^2 \mid \mathcal{F}_t)$ in the Proposition will only differ by a constant multiplier.
Compared with Assumption 2, we observe that the particle number $M_t$ plays the role of the "complexity" parameter: the more particles we have, the smaller the error is. We have the following corollary, assuming the particle number is bounded from below by m. Corollary 3. If the particle number satisfies $M_t \ge m$, then the expectation 2-norm bound of Assumption 2 holds with complexity m and a constant that is a scale-invariant parameter of A.
In summary, FCIQMC satisfies Assumption 2b, as long as the particle number is not too small. Note that in practice the particle number can be controlled by the dynamic shift s t to ensure that it does not drop below the required lower bound.
The Assumption 2c, the growth of the expectation 1-norm bound, can also be checked easily from the construction. In conclusion, we have verified the assumptions of Theorem 1, and thus it can be applied for the convergence and error analysis of FCIQMC.
Remarks on iFCIQMC. iFCIQMC (initiator FCIQMC) [7] is a modified version of FCIQMC. It can be viewed as a bias-variance tradeoff strategy to reduce the computational cost and error of the FCIQMC approach, by restricting the spawning step.
The N locations are divided into two sets: the initiators L_i and non-initiators L_n, with L_i ∩ L_n = ∅ and L_i ∪ L_n = {1, 2, · · · , N}. The rule of iFCIQMC is that any particle α at a non-initiator location l_α ∈ L_n is only allowed to spawn children particles at locations already occupied by some other particles; if α spawns particles to an unoccupied location, the children particles are discarded. An exception rule is that if at least two particles at non-initiator locations spawn children particles with the same sign at one unoccupied location, then the children particles are kept. There are no restrictions on the spawning steps for particles at initiator locations. In the case that all the locations are initiators, L_n = ∅, iFCIQMC reduces to FCIQMC.
The initiators L_i are chosen at the beginning according to some prior knowledge, and are then updated at each iteration. Suppose n_{i,thre} ∈ ℕ is a fixed threshold. As soon as the number of particles at a non-initiator location exceeds the threshold n_{i,thre}, the location becomes an initiator. Intuitively, initiators are the locations more important for the eigenvector, since they are occupied by many particles. The restrictions on the spawning ability of non-initiators reduce the computational cost and the variance of the inexact matrix-vector product while introducing only a small bias, since there are few particles at non-initiator locations. Therefore, iFCIQMC can be viewed as a variance control technique for FCIQMC.
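A minimal sketch of the initiator filter applied to a batch of proposed spawns; the data structures and names (including n_i_thre) are ours, and only the acceptance logic follows the rules described above.

```python
from collections import defaultdict

def filter_spawns(spawns, v, initiators, n_i_thre):
    """spawns: list of (parent, target, signed_weight) proposals; v: dict
    location -> signed walker count. Keep a spawn onto an unoccupied target
    only if the parent is an initiator, or if at least two non-initiator
    parents propose same-sign children there (the exception rule)."""
    by_target = defaultdict(list)
    for parent, target, w in spawns:
        by_target[target].append((parent, w))
    kept = []
    for target, props in by_target.items():
        occupied = v.get(target, 0) != 0
        for parent, w in props:
            ok = occupied or parent in initiators
            if not ok:
                same_sign = [p for p, u in props if u * w > 0 and p not in initiators]
                ok = len(same_sign) >= 2          # exception rule
            if ok:
                kept.append((target, w))
    for loc, n in v.items():                      # promote heavily occupied
        if abs(n) > n_i_thre:                     # non-initiators
            initiators.add(loc)
    return kept
```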
Fast Randomized Iteration.
In this section, we provide a numerical analysis, based on our general framework, of the convergence of the fast randomized iteration (FRI) recently proposed in the applied mathematics literature [19], inspired by FCIQMC-type algorithms. The basic idea of the FRI method is to first apply the matrix A to the current iterate, and then employ a stochastic compression algorithm to reduce the resulting vector to a sparse representation. The original convergence analysis [19] uses a norm motivated by viewing the vectors as random measures. In comparison, as we have seen in the proof of Theorem 1, our viewpoint and analysis are closer in spirit to numerical linear algebra, in particular the standard convergence analysis of the power method.
(c) Variance bound: this holds for some constant C_Φ independent of m and v. The compression function Φ_m introduced in [19] is as follows. For a given vector v ∈ R^N, first we sort the entries as |v(q_1)| ≥ |v(q_2)| ≥ · · · ≥ |v(q_N)|, where q : [N] → [N] is a permutation. The compression function consists of two parts. In the first part, large components of the vector are preserved exactly: with the convention max{∅} = 0, a cutoff index τ with 0 ≤ τ ≤ m is defined, and the compression function keeps the entries v(q_i) exactly for any i ≤ τ. In the second part of Algorithm 3, the set B = {q_{τ+1}, q_{τ+2}, · · · , q_N} consists of the indices of all 'small' components to be compressed. Note that for the integer random variable N_i, i ∈ B, only its expectation E N_i ∈ (0, 1) is specified, so there is still freedom in choosing the probability distribution of {N_i}_{i∈B}. Here we only discuss the independent Bernoulli approach (which is easy to understand) and systematic sampling (which we use in the numerical examples), while other choices are possible. Let us focus on the entries in B. For independent Bernoulli, N_i is independent for each i ∈ B and follows a Bernoulli distribution with the specified expectation; note that the probability is well defined due to the choice of τ. The number of nonzero components of the compressed vector is then random. Another choice is the systematic sampling of [19]: take a random variable U uniformly distributed in (0, 1), place the m − τ points U + k − 1, k = 1, 2, · · · , m − τ, into the intervals formed by the cumulative expectations over {q′_1, q′_2, · · · , q′_{N−τ}}, any permutation of the indices in B, and let N_i be the number of points landing in the i-th interval. Notice that by construction, the number of nonzero N_i's is exactly m − τ, and therefore ‖Φ_m(v)‖_0 = m. The N_i's generated by systematic sampling are obviously correlated, as only one random number U drives the generation. The two approaches will be analyzed in the next section in the framework of inexact power iteration.
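The following Python sketch implements one reading of Φ_m with systematic sampling. The exact-preservation rule for τ used here (keep a leading entry while it is at least as large as the average mass the remaining budget would assign to the tail) is our own choice, consistent with the constraints 0 ≤ τ ≤ m and E N_i ∈ (0, 1); the unbiased value assigned to a sampled small entry is |v(q_i)|/E N_i.

```python
import numpy as np

def compress(v, m, rng):
    """FRI compression Phi_m: keep large entries exactly, resample the rest
    by systematic sampling so that E[Phi_m(v)] = v and ||Phi_m(v)||_0 <= m."""
    n = len(v)
    if np.count_nonzero(v) <= m:
        return v.copy()
    order = np.argsort(-np.abs(v))                 # q_1, ..., q_N
    absv = np.abs(v)[order]
    tau, tail = 0, absv.sum()
    # keep entry exactly while (m - tau) * |v(q_{tau+1})| >= remaining tail,
    # which guarantees E N_i < 1 for every compressed entry
    while tau < m and (m - tau) * absv[tau] >= tail:
        tail -= absv[tau]
        tau += 1
    out = np.zeros(n)
    out[order[:tau]] = v[order[:tau]]
    B = order[tau:]
    w = np.abs(v[B])
    p = (m - tau) * w / w.sum()                    # E[N_i]; sums to m - tau
    counts = np.floor(np.cumsum(p) - rng.random()).astype(int) + 1
    N = np.diff(np.concatenate(([0], counts)))     # systematic sample counts
    hit = N > 0
    out[B[hit]] = np.sign(v[B[hit]]) * w[hit] / p[hit] * N[hit]   # unbiased
    return out
```

Iterating v ← compress(A @ v, m, rng), with intermittent normalization, gives the FRI power iteration in this sketch.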
Convergence Analysis.
We now apply Theorem 1 to analyze the convergence of the FRI algorithm with either independent Bernoulli or systematic sampling. Since the inexact multiplication is F(A, v) = Φ_m(Av), the required properties of F follow immediately once the compression function satisfies Assumption 3; therefore it suffices to check Assumption 3 for the compression function Φ_m. Homogeneity is obvious. From the construction of Φ_m, the unbiasedness is guaranteed by the specified expectations of the N_i's, no matter which particular distribution is used for the N_i.
The variance bounds are proved in the following lemmas.
Lemma 5. For FRI compression with either independent Bernoulli or systematic sampling, the variance bound of Assumption 3 holds; moreover, for systematic sampling we have an almost sure bound of the same form. It is not possible to obtain an almost sure bound as above for independent Bernoulli, since, for example, it is possible that all the Bernoulli variables are 1, which gives a large error. This Lemma thus indicates the advantage of the systematic sampling strategy, which in practice gives smaller variance in general. We will only show numerical results using the systematic sampling strategy in the numerical examples later.
Proof. Since large components of v are kept exactly by Φ_m(·), it suffices to bound the error on the entries in B. Taking the expectation, and using that both independent Bernoulli and systematic sampling are unbiased, the cross terms vanish. Moreover, because there are at most m − τ resampled entries, combining the resulting estimate with ‖v′‖_1 ≤ ‖v‖_1 we arrive at the stated variance bound. Next we give the almost sure bound for systematic sampling. Note that for i ∈ B with N_i ≠ 0, Φ_m(v)(q_i) and v(q_i) have the same sign, which controls the pointwise error; since there are exactly m − τ nonzero N_i's, we can estimate the total error. The expectation 1-norm bound can be easily checked from the definition.
Lemma 6. The expectation 1-norm bound of Assumption 3 holds for FRI with independent Bernoulli compression and for FRI with systematic sampling compression. Therefore, the compression function Φ_m satisfies Assumption 3, and thus the convergence follows from Theorem 1.
3.2.3. Deterministic compression by hard thresholding. Another way to choose the compression function Φ_m is simple hard thresholding: Φ_m = Φ_m^{HT} keeps the m largest entries (in absolute value) and drops the remaining ones. Compared to the previously discussed approaches to compression, hard thresholding obviously has smaller variance since it is deterministic; the price to pay is that it introduces bias into the inexact matrix-vector multiplication. The bias-variance tradeoff between hard thresholding and FRI-type algorithms is similar to that between iFCIQMC and FCIQMC.
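In the same setting as the sketch above, the hard-thresholding compressor is only a few lines; comparing the two makes the bias explicit, since the dropped tail mass is never compensated.

```python
import numpy as np

def compress_ht(v, m):
    """Hard thresholding Phi_m^HT: keep the m largest-magnitude entries
    exactly, zero out the rest. Deterministic (zero variance) but biased."""
    out = np.zeros_like(v)
    idx = np.argsort(-np.abs(v))[:m]
    out[idx] = v[idx]
    return out
```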
Numerical Results
In this section, we give some numerical tests of the FCIQMC and FRI algorithms and their variants iFCIQMC and hard thresholding (HT) to compare their performance. The numerical problem is to compute the ground energy of a Hamiltonian H for a quantum system. As discussed before, we define A = I − δH for small δ, so the problem is equivalent to finding the largest eigenvalue of A. We test these methods on two types of model systems: the 2D fermionic Hubbard model and small chemical molecules under the full CI discretization. The Hamiltonians for these have the same structure. Each electron lives in a finite-dimensional one-particle Hilbert space; the vectors in the basis set of the one-particle Hilbert space are called orbitals. The number of orbitals N_orb is the dimension of the one-particle space, and we denote by N_elec the total number of electrons in the system. Due to the Pauli exclusion principle, there are at most two electrons, with opposite spins, in one orbital. In our test examples, we choose the total spin S_tot = 0.
Therefore the dimension of the space is $\binom{N_{orb}}{N_{elec}/2}^2$, neglecting other constraints like symmetry. The dimension grows exponentially as N_orb and N_elec grow. We summarize the systems used in our numerical tests in Table 1. The exact ground energies of the Hubbard model and Ne are computed using exact power iteration, and the ground energy of H_2O is from the paper [22]. We use HANDE-QMC [29]. Note that our comparison is mostly for illustrative purposes and should not be taken as benchmark tests for the various algorithms, especially for large scale calculations, which would depend on the parallel implementation, hardware infrastructure, etc. On the other hand, even for small problems, the numerical results still offer some suggestions on the further development of inexact power iteration based solvers for many-body quantum systems.
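The dimension count above is easy to tabulate; in this sketch the (N_orb, N_elec) pairs are illustrative placeholders rather than the actual systems of Table 1.

```python
from math import comb

def fci_dim(n_orb, n_elec):
    """Dimension of the S_z = 0 FCI space: choose n_elec/2 up-spin and
    n_elec/2 down-spin occupations among n_orb orbitals."""
    return comb(n_orb, n_elec // 2) ** 2

# illustrative sizes only
for n_orb, n_elec in [(16, 8), (24, 10)]:
    print(n_orb, n_elec, fci_dim(n_orb, n_elec))
```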
Hubbard Model. The Hubbard model is a standard model used in condensed matter physics, which describes interacting particles on a lattice. In real space, the Hubbard Hamiltonian is
$$\hat H = -\sum_{\langle r, r' \rangle, \sigma} \hat c^\dagger_{r,\sigma} \hat c_{r',\sigma} + U \sum_{r} \hat n_{r,\uparrow} \hat n_{r,\downarrow}, \qquad (38)$$
where we have scaled the hopping parameter to 1, so that the on-site repulsion parameter U gives the ratio of the interaction strength relative to the kinetic energy. We choose an intermediate interaction strength U = 4 in our tests.
In the d-dimensional Hubbard Hamiltonian (38), r is a d-dimensional vector representing a site in the lattice, ⟨r, r′⟩ means that r and r′ are nearest neighbors, and σ takes the values ↑ and ↓, the spin of the electron. $\hat c_{r,\sigma}$ and $\hat c^\dagger_{r,\sigma}$ are the annihilation and creation operators of electrons at site r with spin σ. They satisfy the anticommutation relations $\{\hat c_{r,\sigma}, \hat c^\dagger_{r',\sigma'}\} = \delta_{r,r'}\delta_{\sigma,\sigma'}$ and $\{\hat c_{r,\sigma}, \hat c_{r',\sigma'}\} = 0$, where {A, B} = AB + BA is the anti-commutator. $\hat n_{r,\sigma}$ is the number operator, defined as $\hat n_{r,\sigma} = \hat c^\dagger_{r,\sigma}\hat c_{r,\sigma}$. We will consider the Hubbard model on a finite 2D lattice with periodic boundary conditions.
When the interaction strength U is small, it is better to work in momentum space instead of real space, since the planewaves are the eigenfunctions of the kinetic part of the Hamiltonian. The annihilation operator in momentum space is $\hat c_{k,\sigma} = \frac{1}{\sqrt{N_{orb}}} \sum_r e^{i k \cdot r} \hat c_{r,\sigma}$, where k = (k_1, k_2) is the wave number and N_orb is the total number of orbitals or sites. Written as a matrix, the Hubbard Hamiltonian in momentum space is just a real symmetric matrix with diagonal entries ε(k) and off-diagonal entries either 0 or ±U/N_orb. For the inexact power iteration, we take A = I − δH with δ = 0.01. In our numerical tests, we use the projected energy estimator for the smallest eigenvalue of H; the projection vector v_* is chosen to be the Hartree-Fock state. The initial iterate of all methods is also chosen as the Hartree-Fock state (a vector whose only nonzero entry is at the Slater determinant corresponding to the Hartree-Fock ground state of the system). Figure 1 plots the error of the projected energy at each iteration versus wall-clock time (first 1500 seconds) for a typical realization. The error is defined as the difference between the projected energy estimate and the exact ground energy. The complexity parameters of the algorithms are shown in Table 2; they are chosen such that FRI and FCIQMC use about the same amount of memory (e.g., the particle number in FCIQMC is roughly equal to the number of non-zero entries of the matrix-vector product in FRI or HT before compression), and also chosen large enough that all the algorithms converge. The time per iteration listed in Table 2 is averaged over several realizations and is used in Figure 1. As shown in Figure 1, all four algorithms converge to results close to the exact eigenvalue, and the estimated value from each iteration stays around the eigenvalue for a long time. FCIQMC and iFCIQMC take much less time to converge, thanks to their much lower-cost inexact matrix-vector multiplication compared to FRI and HT, but the variance is also larger. In terms of iteration number, the convergence of the four algorithms is similar, which can be understood from our analysis, since it is the same eigenvalue gap of the Hamiltonian that drives the convergence. As we mentioned already, per iteration, FCIQMC and iFCIQMC are much cheaper in comparison. The reason is that FRI and HT need to access all nonzero elements of A in each column associated with a non-zero entry of the current iterate (for multiplying A with the sparse vector), while FCIQMC and iFCIQMC just need to randomly pick some, without accessing the others. The number of non-zero entries per row is large, and accessing elements of A is quite expensive for FCI-type problems. More quantitatively, we see in Table 2 that for a sparse vector with 3 × 10^4 non-zero entries in FRI, after multiplication by A and before compression, the number of non-zero entries increases to roughly 10^6. Thus for this problem, on average, each column has about 40 nonzero entries that FRI needs to access, while the FCIQMC algorithm only needs to access a few entries after the random choice. After convergence, the projected energies of FCIQMC and iFCIQMC fluctuate around the exact ground state energy. Although iFCIQMC is biased, the bias is not large for the current problem, while the variance is smaller than that of FCIQMC. So iFCIQMC is an effective strategy for bias-variance trade-off.
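As a sketch of the momentum-space setup described at the start of this subsection (assuming the standard square-lattice dispersion ε(k) = −2(cos k_1 + cos k_2) for unit hopping; the paper's sign and normalization conventions may differ), the Hartree-Fock reference used for v_* and for the initial iterate is the Fermi sea of lowest single-particle energies:

```python
import numpy as np

def fermi_sea(L, n_elec):
    """Momenta and dispersion eps(k) = -2(cos kx + cos ky) on an L x L
    periodic lattice; the Hartree-Fock determinant fills the n_elec/2
    lowest-eps momenta for each spin (total spin S_tot = 0)."""
    ks = 2 * np.pi * np.arange(L) / L
    kx, ky = np.meshgrid(ks, ks, indexing="ij")
    eps = -2 * (np.cos(kx) + np.cos(ky))
    occ = np.argsort(eps.ravel())[: n_elec // 2]
    e_kin = 2 * eps.ravel()[occ].sum()    # kinetic energy only; U-term omitted
    return eps, occ, e_kin

eps, occ, e_kin = fermi_sea(L=4, n_elec=8)
print(e_kin)
```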
The projected energy of FRI also fluctuates around the true energy, and its variance is much smaller than that of FCIQMC or iFCIQMC. HT is deterministic, so its projected energy shows no variance; however, its bias is quite visible.
We can average the projected energy over the path to get a better estimate; the variance of the estimator decays to zero as we include a longer time period in the average. Thus, due to unbiasedness, the error of FCIQMC and FRI can be made smaller if we run for long enough. In Table 2, we give a more quantitative comparison of the results of the algorithms. Here E_true is the true ground energy obtained by exact power iteration, i_0 is a burn-in parameter, and w is the window size of the average. For FCIQMC and iFCIQMC, w = 1600 and i_0 = 2400; for FRI and HT, w = 400 and i_0 = 600. The numerical tests show that the quantities above are insensitive to the choice of w and i_0, as long as the algorithms indeed converge after i_0 steps and the window size w is not too small. τ_auto is the integrated autocorrelation time and W is the number of iterations averaged. The std. is short for the standard deviation of the sample mean $\bar E^{(W)} = \frac{1}{W}\sum_{i=1}^{W} E_{i_0+i}$. Since the time cost per iteration of the different algorithms is quite different, to make a fair comparison we take W = 10000/(time per iteration) for each algorithm; this gives the standard error of the sample mean if we run each algorithm for 10000 seconds after convergence. The mean square error (MSE) is simply defined to incorporate the variance and bias together. To gain further insight into the interplay between the per-step error of the inexact power iteration and the convergence, we plot in Figure 2 the relative compression error and the tangent of the angle between v_t and the exact eigenvector u_1. We observe that FRI and HT reach convergence after about 100 steps while FCIQMC and iFCIQMC converge after about 350 steps; the larger number of steps for FCIQMC and iFCIQMC is related to the first phase of the algorithm, where the particle number is growing exponentially. This can be seen in Figure 2 (left) as the huge error during the initial stage of the iterations. Only when the particle number reaches a certain level does the compression error become small, and the power iteration convergence kicks in.
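The post-processing just described can be sketched as follows; estimating τ_auto by summing empirical autocorrelations until they first turn negative is a common convention and our own assumption here, not necessarily the estimator used for Table 2.

```python
import numpy as np

def averaged_energy(E, i0, w):
    """Windowed mean of an energy series after burn-in i0, with a standard
    error inflated by an integrated autocorrelation time tau_auto."""
    x = np.asarray(E[i0:i0 + w], dtype=float)
    mu = x.mean()
    xc = x - mu
    var = float(xc @ xc) / len(x)
    if var == 0.0:                    # e.g. the deterministic HT series
        return mu, 0.0, 1.0
    tau = 1.0
    for lag in range(1, len(x) // 2):
        rho = float(xc[:-lag] @ xc[lag:]) / (len(x) * var)
        if rho <= 0:                  # truncate at first negative correlation
            break
        tau += 2 * rho
    std = np.sqrt(var * tau / len(x))
    return mu, std, tau
```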
After convergence, FRI has the largest compression error and HT has the smallest; the compression error of iFCIQMC is also smaller than that of FCIQMC. This is reasonable, since HT and iFCIQMC reduce variance, and thus compression error, compared with the fully stochastic FRI and FCIQMC. As shown in Figure 2, in this example with this parameter choice, FCIQMC has smaller compression error than FRI; and the larger the compression error, the further v_t is away from the true eigenvector u_1. This agrees with the theoretical results in Theorem 1, because tan θ(v_t, u_1) is controlled by the error ξ_t at each step.
We remark that the tan θ(v_t, u_1) error measure does not directly translate into the error of the projected energy estimator using, say, the Hartree-Fock state. In fact, we observe in Figure 1 and Table 2 that, per iteration, the projected energy error of FRI is smaller than that of FCIQMC and iFCIQMC. As an explanation, in our parameter regime the exact ground state has a large overlap with the Hartree-Fock state, so in FRI that component is kept unchanged by the compression, while for FCIQMC and iFCIQMC the stochastic error is more uniformly distributed over all the entries. This behavior seems rather problem dependent, though, as we will see in the chemical molecule examples that the MSE of FRI becomes comparable with that of FCIQMC.
4.2. Molecules. We also tested the four algorithms on some molecular examples. The FCI Hamiltonian is obtained by a Hartree-Fock calculation in a chosen chemical basis (for the single-particle Hilbert space), such as cc-pVDZ. We choose Ne and H_2O at equilibrium geometry as examples, as described in Table 1. The time step is taken as δ = 0.01.
The convergence of the projected energy error versus wall-clock time is shown in Figure 3 and Figure 4, respectively. The parameter choices of the algorithms and a more quantitative comparison are shown in Table 3 and Table 4. The four algorithms also work well for molecular systems, and the convergence behavior is similar to the Hubbard case.
The complexity parameter m needed to achieve convergence depends on the system; the ratio m/N for Ne is smaller than for H_2O. The time cost of FRI and HT is much larger than that of FCIQMC and iFCIQMC, because they require the exact matrix-vector multiplication Av_t, which is still expensive even though v_t is sparse. Unlike the Hubbard case, where FRI gives much smaller error, the MSE of FRI is similar to FCIQMC and iFCIQMC in these cases. In summary, the numerical examples show that FCIQMC, FRI and their variants can achieve convergence using much less memory and computational time than the standard power iteration. The stochastic algorithms FCIQMC, iFCIQMC and FRI give better estimates than the deterministic method HT in general. The numerical tests also point out directions for further improving these inexact power iterations, including variance and memory-cost reduction of the inexact matrix-vector multiplication and efficient parallel implementation to overcome the memory bottleneck. These are left for future work. | 2018-03-03T01:02:25.897Z | 2017-11-24T00:00:00.000 | {
"year": 2017,
"sha1": "95e4ad67e957a48759339fd24bd51493743c01fb",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "95e4ad67e957a48759339fd24bd51493743c01fb",
"s2fieldsofstudy": [
"Computer Science",
"Physics",
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
119302807 | pes2o/s2orc | v3-fos-license | Can helium envelopes change the outcome of direct white dwarf collisions?
Collisions of white dwarfs (WDs) have recently been invoked as a possible mechanism for type Ia supernovae (SNIa). A pivotal feature for the viability of WD collisions as SNIa progenitors is that a significant fraction of the mass is highly compressed to the densities required for efficient $^{56}$Ni production before the ignition of the detonation wave. Previous studies have predominantly employed model WDs composed entirely of carbon-oxygen (CO), whereas WDs are expected to have a non-negligible helium envelope. Given that helium is more susceptible to explosive burning than CO under the conditions characteristic of WD collision, a legitimate concern is whether or not early time He detonation ignition can translate to early time CO detonation, thereby drastically reducing $^{56}$Ni synthesis. We investigate the role of He in determining the fate of WD collisions by performing a series of two-dimensional hydrodynamics calculations. We find that a necessary condition for non-trivial reduction of the CO ignition time is that the He detonation birthed in the contact region successfully propagates into the unshocked shell. We determine the minimal He shell mass as a function of the total WD mass that upholds this condition. Although we utilize a simplified reaction network similar to those used in previous studies, our findings are in good agreement with detailed investigations concerning the impact of network size on He shell detonations. This allows us to extend our results to the case with more realistic burning physics. Based on the comparison of these findings against evolutionary calculations of WD compositions, we conclude that most, if not all, WD collisions will not be drastically impacted by their intrinsic He components.
INTRODUCTION
Type Ia supernovae (SNIa) are well-known cosmological "standardizable candles" thanks to a tight empirical correlation (the Phillips relation; Phillips 1993). It is understood that SNIa are powered by the decay of $^{56}$Ni (Colgate & McKee 1969) produced from the explosion of White Dwarfs (WDs) composed predominantly of carbon and oxygen (CO), but there is no consensus regarding the explosion mechanism. The two canonical scenarios, single-degenerate accretion (WD accretion exceeding the Chandrasekhar limit) and double-degenerate mergers (merger of two close WDs that spiral in due to gravitational radiation), have many theoretical and observational challenges (Hillebrandt & Niemeyer 2000; Maoz et al. 2014). A serious concern for both scenarios is that a successful ignition of an explosive detonation has never been convincingly demonstrated (including the recent claims of a self-consistent ignition in a double-degenerate merger; Kashyap et al. 2015, see section 2 for details).
Although collisions of WDs were believed to have rates which are orders of magnitude smaller than the rate of SNIa, they motivated three-dimensional (3D) hydrodynamic simulations of such collisions and of the resulting thermonuclear explosion (Benz et al. 1989; Raskin et al. 2009; Rosswog et al. 2009; Lorén-Aguilar et al. 2009; Raskin et al. 2010; Hawley et al. 2012; Aznar-Siguán et al. 2013). While the amount of $^{56}$Ni synthesized in most of these simulations was non-negligible, the results were contradictory, with inconsistent amounts of $^{56}$Ni and different ignition sites of a detonation wave for the same initial conditions. These discrepancies were resolved by Kushnir et al. (2013), where high-resolution two-dimensional (2D) simulations with a fully resolved ignition process were employed. The nuclear detonations in these collisions are due to a well understood shock ignition that is devoid of the commonly introduced free parameters such as the deflagration velocity or transition-to-detonation criteria (e.g., in the single-degenerate and double-degenerate scenarios; see Hillebrandt & Niemeyer 2000). Katz & Dong (2012) demonstrated that the rate of direct collisions in common field triple systems may approach the SNIa rate. Thompson (2011) had previously argued that the secular Lidov-Kozai mechanism (Kozai 1962; Lidov 1962) in triples might play an important role in WD-WD mergers via gravitational radiation to produce SNIa. However, the non-secular corrections to the Lidov-Kozai mechanism obtained by Katz & Dong (2012) raised the possibility that the majority of SNIa result from collisions. Supporting evidence was provided in Kushnir et al. (2013), in which numerical simulations reproduced several robust observational features of SNIa. In particular, it was established that the full range of $^{56}$Ni necessary for all SNIa across the Phillips relation can be obtained by collisions of typical WDs. Further evidence was recently discovered by Dong et al. (2015) in the form of doubly-peaked line profiles in high-quality nebular-phase spectra, which suggest that SNIa with intrinsic bi-modality are common. They observe such bimodality in a 3D 0.64 M_⊙ − 0.64 M_⊙ WD collision simulation as a result of detonation in both WDs.
A crucial property for the viability of WD collisions as progenitors of SNIa is that a significant fraction of the mass is highly compressed to the densities required for efficient $^{56}$Ni production before the ignition of the detonation wave. Otherwise only massive (> 0.9 M_⊙) CO WDs would produce sufficient quantities of $^{56}$Ni (as required by all other progenitor models). Evolutionary calculations predict that CO WDs retain helium in the outermost layers (Althaus et al. 2005; Lawlor & MacDonald 2006; Renedo et al. 2010), see Figure 1. A legitimate concern, then, is whether or not burning of the helium envelope (which is possible at lower temperatures than required for CO burning) can lead to a CO ignition at early times, before a significant fraction of the mass is highly compressed. The predictions for the helium envelope mass, M_He,ev, are between ≈10⁻³ M_⊙ and ≈2.5 × 10⁻² M_⊙ for 0.9 M_⊙ and 0.5 M_⊙ WDs, respectively (see, however, the significantly less massive helium shells from asteroseismic analysis of helium-atmosphere (DB) white dwarfs; Bischoff-Kim et al. 2014).
The impact of helium shells was recently studied by Papish & Perets (2015), who performed 2D head-on collision calculations. They scanned the relevant parameter space with 8 collisions⁴, and concluded that (see Figure 1):
• M_He ≳ 0.1 M_⊙ is required for M_WD = 0.7 M_⊙ and M_WD = 0.8 M_⊙ in order for the detonation to propagate in the helium shell,
• M_He ≳ 0.2 M_⊙ is required for M_WD = 0.8 M_⊙ in order to obtain CO ignition at early times.
These results suggest that the required helium shell masses for obtaining CO ignition at early times are much higher than M_He,ev for the relevant WD masses.⁵ However, the scan of the parameter space was quite sparse, resulting in coarse estimates of the minimal helium mass that can alter the ensuing CO ignition, especially for low mass WDs. Perhaps most importantly, they used the approx19 reaction network (Weaver et al. 1978) in their simulations, which is known to be a poor representation of helium burning at high temperatures because it does not include the proton mediated α-capture reaction $^{12}$C(p,γ)$^{13}$N(α,p)$^{16}$O. We perform a careful study of the role of helium shells in equal-mass head-on collisions (zero impact parameter) by using 2D numerical simulations. Since capturing the dynamics of such collisions requires multidimensional simulations, we work with a small reaction network of 13 isotopes (similar to the approx19 reaction network, see below). We demonstrate that a necessary condition for early CO ignition is a stable detonation propagation in the WD helium shell. Establishing this condition allows us to use previous detailed studies of detonation waves in helium shells (Townsley et al. 2012; Moore et al. 2013; Shen & Moore 2014) to derive a lower limit on the required helium mass for early CO ignition, which we then extend to the case of large reaction networks. We argue that this lower limit is applicable to all collisions, including unequal mass collisions and those with non-zero impact parameter. We obtain this lower limit for a wide range of WD masses, and we conclude that early CO ignition due to He burning is unlikely in real collisions given the predicted helium shell masses from evolutionary calculations, except possibly for a very small fraction of SNIa at the faint end. We argue that, for M_He below this lower limit, the $^{56}$Ni distribution in the ejecta should not diverge significantly from the pure CO case. We do not study the synthesis of intermediate mass elements in the burnt helium shell because the yields resulting from the small reaction network utilized here are subject to large uncertainties (previous works are similarly uncertain).
In section 2 we describe the numerical methods used throughout this study. In section 3 we investigate the dynamics of collisions with helium shells and establish a necessary condition for early CO ignition by measuring the lower limit on the requisite helium mass. We conclude in section 4.
NUMERICAL METHODS & SETUP
We calculate head-on collisions of equal mass WDs with the FLASH⁶ (Fryxell et al. 2000) hydrodynamics code. The fluid equations are evolved by the directionally split hydrodynamics solver and are closed with the tabular Helmholtz equation of state (Timmes & Swesty 2000). Compositions are updated via a 13 isotope α-chain reaction network (similar to the approx13 network supplied with FLASH, with slightly updated rates for specific reactions, especially fixing a typo for the reaction $^{28}$Si(α,γ)$^{32}$S which had reduced the reaction rate by a factor of ≈4; note that the approx19 network does not significantly change the results of the approx13 network for CO and helium burning). The gravitational interaction is calculated by the "new multipole solver" (Couch et al. 2013), with the multipole expansion out to l_max = 16. We find that our results are converged when employing adaptive mesh refinement with ≈4 km resolution (i.e. the minimal allowed cell size within the most resolved regions); see the convergence study in section 3.
False numerical ignition may occur if the burning time, t_burn = ε/Q̇ (where Q̇ is the energy injection rate from burning and ε is the internal energy), in a cell becomes shorter than the sound crossing time, t_sound = Δx/c_s (where Δx is the length scale of the cell and c_s is the sound speed). To evade this pitfall, we include a burning limiter that forces the burning time in any cell to be longer than the cell's sound crossing time, by suppressing all burning rates with a constant factor whenever t_sound > f·t_burn, with f = 0.1 (see Kushnir et al. 2013 for a detailed description). In order to illustrate the necessity of such a limiter, we analyze the recent claims of a self-consistent ignition in a double-degenerate merger (Kashyap et al. 2015). They used the FLASH code without implementing the limiter, and they obtained a CO ignition at a density of ≈6.7 × 10⁶ g cm⁻³ and a temperature of ≈3.2 × 10⁹ K. Under these conditions the burning time is t_burn ≈ 1.4 × 10⁻⁵ s (for 50% carbon, 50% oxygen, by mass), while the sound crossing time for their highest resolution (68.3 km) is 1.3 × 10⁻² s. Since the burning time is shorter than the sound crossing time by three orders of magnitude, the obtained ignition is not physical, and the claim of a self-consistent ignition is not rigorous.
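A per-cell sketch of the limiter logic described above (function and variable names are ours; FLASH's actual implementation operates on its own data structures):

```python
def limited_rates(rates, eps, qdot, dx, cs, f=0.1):
    """Suppress all burning rates by one common factor so that the burning
    time t_burn = eps/qdot is never shorter than t_sound/f, i.e. rates are
    scaled whenever t_sound > f * t_burn (the criterion quoted in the text)."""
    t_burn = eps / qdot if qdot > 0 else float("inf")
    t_sound = dx / cs
    if t_sound > f * t_burn:
        scale = f * t_burn / t_sound        # constant suppression factor
        return [r * scale for r in rates]
    return rates
```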
To isolate the behavior of the helium component we utilize a simple model. Isothermal white dwarf profiles (http://cococubed.asu.edu/code_pages/adiabatic_white_dwarf.shtml) are constructed with temperature T = 10⁷ K. Two regions are then defined: a 50% (by mass) carbon, 50% oxygen core, and a pure helium envelope. The radius of the composition boundary is adjusted to achieve the desired He envelope mass. Head-on (zero impact parameter) collisions allow the use of cylindrical geometry (r, z). The WDs are initialized in contact with free-fall velocities. The ambient medium consists of helium gas at density ρ_amb = 10⁻² g cm⁻³ and T_amb = 10⁷ K. The domain boundaries are r = [0, L] and z = [−L, L], where L = 2¹⁷ km ≈ 1.31 × 10⁵ km.
In order to broadly probe the parameter space at hand, we choose the WD mass pairs (in units of M_⊙) 0.5-0.5, 0.64-0.64, and 0.8-0.8; see Tables 1, 2, and 3 for a summary of the models and their main results. The results from the pure CO models are consistent with the previous results of Kushnir et al. (2013), and provide a means of understanding the role of He layers in WD collisions by comparison.
RESULTS
In this section we investigate the dynamics of collisions with helium shells and establish a necessary condition for early CO ignition. We begin by examining the 0.64 − 0.64 case in Section 3.1. The lower limit on the required helium mass for early CO ignition is presented in Section 3.2. These representative collisions demonstrate different behavioral regimes, depending on the helium mass. Two strong shocks (the leading shocks hereafter) initially propagate from the contact surface and move toward the center of each star at a velocity that is a small fraction of the velocity of the approaching stars (the fact that the stars are identical implies a mirror symmetry in ±z, allowing us to focus on one of the stars in Figure 2). The shocked region near the contact surface has an approximately planar symmetry and a nearly uniform pressure (Kushnir & Katz 2014). The temperature near the surface of contact is too low for appreciable nuclear burning to take place at early times in the pure CO case (panel (a1)); however, significant burning of the shocked helium shell is obtained ∼0.6 s after the collision in the non-zero M_He cases (panels (b1-d1)). The induction time for the helium burning at this stage is calculated accurately with our small network because the temperatures are below 10⁹ K, where the $^{12}$C(p,γ)$^{13}$N(α,p)$^{16}$O reaction is not important. The helium burning increases the pressure in the helium shell and accelerates the leading shock (compare the position of the shock in panels (a1-d1)). The burning also leads to the ignition of a detonation wave for massive enough (≥8 × 10⁻³ M_⊙) helium shells. We emphasize that the ignition is completely resolved in our simulations, and is not put in by hand as in other models. The detonation wave propagates along the r-direction inside the shocked helium shell (panels (b1-d1)). Beginning from this phase, the helium burning is no longer calculated accurately within the small reaction network (Shen & Moore 2014).
The acceleration of the leading shock due to the helium burning is a small effect compared to the acceleration caused by the gravitational field of each star, and therefore the leading shocks continue to accelerate roughly as in the pure CO case until CO ignition is obtained (at ≈2.56 s after contact for the pure CO case, panel (a2); ignition is defined as the formation of a shock due to thermonuclear burning). However, the dynamics can change appreciably if the detonation wave in the shocked helium shell can cross into and propagate within the unshocked helium shell. In the case of M_He = 4 × 10⁻² M_⊙ the detonation wave does not cross into the helium shell (panel (b2)), and the CO ignition is obtained at roughly the same time and location as in the pure CO case (at ≈2.51 s after contact, panel (b2)). In the M_He = 8 × 10⁻² M_⊙ case the detonation wave does cross into the unshocked helium shell, propagating to the posterior of the WD. Nevertheless, the CO ignition is obtained at roughly the same time and location as in the pure CO case (at ≈2.43 s after contact, panel (c2)).
At an even larger helium mass, M_He = 0.16 M_⊙, the shock wave launched from the helium shell detonation wave into the CO core converges in the WD interior, leading to a CO ignition before the ignition behind the leading shock (panel (d2)), similar to the double-detonation scenario (Livne 1990; Moll & Woosley 2013; Shen & Bildsten 2014). In this particular case the CO ignition is obtained at roughly the same time as in the pure CO case (at ≈2.68 s after contact), but the position is significantly different. The total $^{56}$Ni yield will be similar to the pure CO case because significant fractions of the colliding WDs are allowed to compress to high densities, but we expect large discrepancies in the $^{56}$Ni distribution. For more massive helium shells the detonation wave traverses the helium shell faster (because of the smaller circumference at the composition interface), so the CO ignition due to the converged shock happens earlier, resulting in drastic reductions of the $^{56}$Ni yield.
From the analysis presented so far, we conclude that a necessary condition for early CO ignition is that the He detonation wave crosses into the unshocked helium shell. In other words, a collision with a given M_He cannot significantly depart from the pure CO case if it cannot also drive a detonation through the unshocked He shell. We find the minimal helium shell mass that allows this crossing, M_He,cr, to be (66.5 ± 0.5) × 10⁻³ M_⊙ in the M_WD = 0.64 M_⊙ case, where a successful crossing is declared if a steady detonation wave is propagating in the helium shell (or the entire helium shell is burnt) at the time of CO ignition. For all simulations with M_He < M_He,cr the CO ignition was obtained at roughly the same time and location. The same calibrated M_He,cr was obtained for resolutions of 8 km and 16 km; therefore our results for M_He,cr (for our small reaction network) are converged to the level of ≈10⁻³ M_⊙.
3.2. The lower limit on the required helium mass for early CO ignition. As in Section 3.1, we measured M_He,cr for the 0.5 − 0.5 and 0.8 − 0.8 cases, and the results are (93.5 ± 0.05) × 10⁻³ M_⊙ and (39.5 ± 0.5) × 10⁻³ M_⊙, respectively (Figure 1). For the 0.5 − 0.5 case a behavior similar to the 0.64 − 0.64 case is obtained: the CO ignition time becomes slightly later as M_He approaches M_He,cr. Beyond the transition point to the double detonation-like ignition mechanism, the CO ignition time becomes earlier as M_He increases. We note that CO ignition due to the converging shock happens for all M_He ≥ M_He,cr in the 0.5 − 0.5 case. The behavior in the 0.8 − 0.8 case is similar to the 0.64 − 0.64 case for M_He < M_He,cr. For M_He > M_He,cr the CO ignition is delayed, until for massive enough shells (M_He ≳ 0.18 M_⊙) the helium detonation directly ignites the CO on the symmetry-axis at early times. This is different from the CO ignition due to the converged shock behavior described by Papish & Perets (2015). Although all collision calculations conducted in this study are 2D (zero impact parameter) and employ equal mass WD models, we expect that the same behavior will be obtained in non-zero impact parameter and/or unequal mass collisions. Previous detailed studies of detonation propagation in WD He shells have determined that the success of such propagation depends only upon the total WD mass and the available density of fuel. In other words, the success or failure of He shell detonation propagation in a single WD does not depend on the orientation of the collision, nor on the mass of the collision partner. Therefore the outcome of a collision should depend only on the total mass and composition of each WD, independently of one another.
DISCUSSION
We conclude that it is unlikely that WD collisions will be significantly affected by their intrinsic He components. In section 3.1 we observed that the behavior of WD collisions may be appreciably altered from the results obtained in pure CO collisions, provided that sufficient quantities of He exist in the outermost layers of the progenitors (Figure 2). We then empirically demonstrated that a helium content in excess of M_He,cr, the minimal mass for the He detonation to propagate into the unshocked shell, is a necessary condition for non-trivial modification of the ensuing CO ignition (section 3.2). Although we utilized a small reaction network which is known to be a poor approximation for He burning above ∼10⁹ K, we have shown that our results are in good agreement with detailed studies concerning the impact of network size on He shell detonations. This agreement allowed us to infer the reductions in M_He,cr we would obtain with a more sophisticated reaction network. Even with the enhancements provided by large nuclear networks, the minimal mass for a supported He shell detonation, M_He,st ≈ M_He,cr, is larger than the expected maximal He mass in WDs, M_He,ev, except possibly for the lowest mass CO WDs (M_WD ≈ 0.5 M_⊙), which are expected to contribute only a small fraction of collisions (Figure 1).
One possible caveat is that real WDs are likely to have non-trivial compositional transition regions, wherein the helium layer is polluted with sizable quantities of carbon and oxygen, as well as smaller amounts of hydrogen and nitrogen (Renedo et al. 2010). Shen & Moore (2014) showed that such pollutants can reduce M_He,st by an additional ∼tens of percent relative to the pure He case when utilizing a large nuclear network. If this is indeed the case, it may be possible for the helium content of low mass WDs (≈0.5 M_⊙) to exceed M_He,cr. However, given that the composition profiles are complicated, and that their calculations remain somewhat uncertain, it is difficult to predict precisely how large the effect on M_He,cr will be.
Finally, although the bulk properties of WD collisions are largely governed by the detonation of the CO core, He burning on the WD exterior can potentially produce observationally relevant isotopes (Holcomb et al. 2013; Moore et al. 2013; Papish & Perets 2015). The conditions characteristic of WD He shells are typically insufficient to produce $^{56}$Ni, and the mass within the shocked He shell is small; therefore nickel synthesis will not be noticeably changed unless the CO ignition time is reduced or delayed. However, intermediate mass elements such as $^{40}$Ca, $^{44}$Ti, and $^{48}$Cr can be produced in large quantities in the burnt He shell, but this again relies on the capacity of a given WD to support a He shell detonation. A serious effort to predict the nucleosynthesis of the He shell detonation would require a larger network than employed here. Taking into consideration that the largest uncertainties of this study, as well as those in other similar studies, stem from the use of abridged nuclear networks, we strongly urge the implementation of more sophisticated networks in future calculations concerning nuclear explosive astrophysics. | 2015-10-26T20:20:59.000Z | 2015-10-26T00:00:00.000 | {
"year": 2016,
"sha1": "09fea393e969a545dd2d85ada218ce6ebd5a1cc2",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1510.07649",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "09fea393e969a545dd2d85ada218ce6ebd5a1cc2",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
119143991 | pes2o/s2orc | v3-fos-license | Functional equations for zeta functions of $\mathbb{F}_1$-schemes
For a scheme $X$ whose $\mathbb F_q$-rational points are counted by a polynomial $N(q)=\sum a_iq^i$, the $\mathbb{F}_1$-zeta function is defined as $\zeta(s)=\prod(s-i)^{-a_i}$. Define $\chi=N(1)$. In this paper we show that if $X$ is a smooth projective scheme, then its $\mathbb{F}_1$-zeta function satisfies the functional equation $\zeta(n-s) = (-1)^\chi \zeta(s)$. We further show that the $\mathbb{F}_1$-zeta function $\zeta(s)$ of a split reductive group scheme $G$ of rank $r$ with $N$ positive roots satisfies the functional equation $\zeta(r+N-s) = (-1)^\chi ( \zeta(s) )^{(-1)^r}$.
INTRODUCTION
In recent years around a dozen different suggestions of what a scheme over F_1 should be have appeared in the literature (cf. [6]). The common motivation for all these approaches is to provide a framework in which Deligne's proof of the Weil conjectures can be transferred to characteristic 0 in order to prove the Riemann hypothesis. Roughly speaking, F_1 should be thought of as a field of coefficients for Z, and F_1-schemes 𝒳 should have a base extension 𝒳_Z to Z which is a scheme in the usual sense.
Though it is not yet clear whether any of the existing F_1-geometries comes close to this goal, and thus in particular it is not clear what the appropriate notion of an F_1-scheme should be, the zeta function ζ_𝒳(s) of such an elusive F_1-scheme 𝒳 is determined by the scheme X = 𝒳_Z.
Namely, let X be a variety of dimension n over Z, i.e. a scheme such that X_k is a variety of dimension n for any field k. Assume further that X has a counting polynomial, i.e. the number of F_q-rational points is given by #X(F_q) = N(q) for every prime power q. If X descends to an F_1-scheme 𝒳, i.e. 𝒳_Z ≃ X, then 𝒳 has the zeta function ζ_𝒳(s) = lim_{q→1} ζ_X(q, s)(q − 1)^χ, where ζ_X(q, s) = exp(Σ_{r≥1} N(q^r) q^{−sr}/r) is the zeta function of X ⊗ F_q if q is a prime power and χ = N(1) is the order of the pole of ζ_X(q, s) at q = 1 (cf. [9]). This expression comes down to ζ_𝒳(s) = Π_{i=0}^{n} (s − i)^{−a_i}, where N(q) = Σ a_i q^i. From this it is clear that ζ_𝒳(s) is a rational function in s and that its zeros (resp. poles) are at s = i of order −a_i (resp. a_i) for i = 0, . . . , n. The only statement from the Weil conjectures which is not obvious for zeta functions of F_1-schemes is the functional equation.
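The functional equation proved below is easy to verify numerically from this product formula; in the following sketch we pick projective n-space, where N(q) = 1 + q + · · · + q^n, as our own test case.

```python
import numpy as np

def zeta(s, a):
    """zeta(s) = prod_i (s - i)^(-a_i) for counting polynomial sum a_i q^i."""
    return np.prod([(s - i) ** (-ai) for i, ai in enumerate(a)])

n = 3
a = [1] * (n + 1)            # projective n-space: N(q) = 1 + q + ... + q^n
chi = sum(a)                 # chi = N(1)
s = 0.37                     # generic test point avoiding the integer poles
lhs = zeta(n - s, a)
rhs = (-1) ** chi * zeta(s, a)
print(np.isclose(lhs, rhs))  # functional equation zeta(n - s) = (-1)^chi zeta(s)
```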
I would like to thank Takashi Ono for drawing my attention to the symmetries occurring in the counting polynomials of split reductive group schemes, and I would like to thank Markus Reineke for his explanations on the comparison theorem for liftable smooth varieties.
THE FUNCTIONAL EQUATION FOR SMOOTH PROJECTIVE F_1-SCHEMES
Let X be an (irreducible) smooth projective variety of dimension n with a counting polynomial N(q). Let b_0, . . . , b_{2n} be the Betti numbers of X, i.e. the dimensions of the singular homology groups H_0(X_C), . . . , H_{2n}(X_C). By Poincaré duality, we know that b_{2n−i} = b_i. As a consequence of the comparison theorem for smooth liftable varieties and Deligne's proof of the Weil conjectures, we know that the counting polynomial is of the form N(q) = Σ_{i=0}^{n} b_{2i} q^i (cf. [2] and [8]). Thus χ = Σ_{i=0}^{n} b_{2i} is the Euler characteristic of X_C in this case (cf. [4]).
Suppose X has an elusive model 𝒳 over F_1. Then 𝒳 has the zeta function ζ_𝒳(s) = Π_{i=0}^{n} (s − i)^{−b_{2i}}, which satisfies the functional equation ζ_𝒳(n − s) = (−1)^χ ζ_𝒳(s). Proof. We calculate ζ_𝒳(n − s) = Π_{i=0}^{n} (n − s − i)^{−b_{2i}} = (−1)^{−χ} Π_{i=0}^{n} (s − (n − i))^{−b_{2n−2i}}, where we used b_{2n−2i} = b_{2i} in the last equation.
If we now substitute i by n − i in this expression, we obtain ζ_𝒳(n − s) = (−1)^χ Π_{i=0}^{n} (s − i)^{−b_{2i}} = (−1)^χ ζ_𝒳(s). If n is odd, then there is an even number of non-trivial Betti numbers and χ = 2b_0 + 2b_2 + · · · + 2b_{n−1} is even. If n is even, then χ = 2b_0 + 2b_2 + · · · + 2b_{n−2} + b_n has the same parity as b_n. This proves the additional statement.
THE FUNCTIONAL EQUATION FOR REDUCTIVE GROUPS OVER F_1
The above observations further imply a functional equation for reductive group schemes over F_1. Note that Soulé's approach and Connes and Consani's approach towards F_1-geometry indeed succeeded in descending split reductive group schemes from Z to F_1 (cf. [1], [5], [7]).
Let G be a split reductive group scheme of rank r with Borel subgroup B and maximal split torus T ⊂ B. Let N(T) be the normalizer of T in G and W = N(T)(Z)/T(Z) be the Weyl group. The Bruhat decomposition of G (with respect to T and B) is the morphism ∐_{w∈W} BwB → G, induced by the subscheme inclusions BwB → G, which has the property that it induces a bijection between the k-rational points for every field k. We have B ≃ G_m^r × A^N as schemes, where N is the number of positive roots of G, and BwB ≃ G_m^r × A^{N+λ(w)}, where λ(w) is the length of w ∈ W. With this we can calculate the counting polynomial of G as N(q) = Σ_{w∈W} (q − 1)^r q^{N+λ(w)} = (q − 1)^r q^N Σ_{w∈W} q^{λ(w)}. The quotient variety G/B is a smooth projective scheme of dimension N with counting function N_{G/B}(q) = (q − 1)^{−r} q^{−N} N(q) = Σ_{w∈W} q^{λ(w)}. Let b_0, . . . , b_{2N} be the Betti numbers of G/B; then we know from the previous section that N_{G/B}(q) = Σ_{l=0}^{N} b_{2l} q^l and that b_{2N−2l} = b_{2l}.
Thus we obtain for the counting polynomial of G that N(q) = (q − 1)^r q^N Σ_{l=0}^{N} b_{2l} q^l = Σ_i a_i q^i, where r + 2N is the dimension of G, and with the convention that $\binom{r}{k} = 0$ if k < 0 or k > r. Denote by a_i = Σ_{k+l=i−N} (−1)^{r−k} $\binom{r}{k}$ b_{2l} the coefficients of N(q).
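These formulas can be sanity-checked in the simplest case G = SL_2, of rank r = 1 with N = 1 positive root and Weyl group of order 2 (lengths 0 and 1); the short script below multiplies out the Bruhat counting polynomial and compares it with the familiar point count #SL_2(F_q) = q^3 − q.

```python
import numpy as np

r, N = 1, 1
lengths = [0, 1]                       # Weyl group S_2: lambda(1)=0, lambda(w)=1
qs = np.array([2, 3, 4, 5, 7, 8, 9])   # prime powers
count = (qs - 1) ** r * qs ** N * sum(qs ** l for l in lengths)
print(np.all(count == qs ** 3 - qs))   # Bruhat count equals #SL_2(F_q)
```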
Proof. The first statement follows from the fact that N(q) is divisible by q^N. For the second statement we use the symmetries $\binom{r}{k} = \binom{r}{r-k}$ and b_{2N−2l} = b_{2l} to calculate | 2010-10-08T18:19:47.000Z | 2010-10-08T00:00:00.000 | {
"year": 2010,
"sha1": "b79fa6ffd69b996145f98c726f26587e9d254719",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "b79fa6ffd69b996145f98c726f26587e9d254719",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
27088338 | pes2o/s2orc | v3-fos-license | TRANSLATION PLANES OF ODD ORDER AND ODD DIMENSION
The author considers one of the main problems in finite translation planes to be the identification of the abstract groups which can act as collineation groups and how those groups can act. The paper is concerned with the case where the plane is defined on a vector space of dimension 2d over GF(q), where q and d are odd. If the stabilizer of the zero vector is non-solvable, let G0 be a minimal normal non-solvable subgroup. We suspect that G0 must be isomorphic to some SL(2,u) or homomorphic to A6 or A7. Our main result is that this is the case when d is the product of distinct primes. The results depend heavily on the Gorenstein-Walter determination of finite groups having dihedral Sylow 2-groups when d and q are both odd. The methods and results overlap those in a joint paper by Kallaher and the author which is to appear in Geometriae Dedicata. The only known example (besides
(when dimension and order are odd); it is possible that the key non-solvable group is always SL(2,u) for some u or, perhaps, is a pre-image of A6 or A7. This is suggested by the Gorenstein-Walter Theorem [5]. The author [14] has previously shown that this is the case for minimal non-fixed-point-free groups (see below) which are non-solvable. However, a non-solvable linear group need not have a non-solvable minimal non-f.p.f. subgroup.
Throughout this paper G is a non-solvable group of linear transformations and G0 is a minimal normal non-solvable subgroup.
Our most important result is Theorem (3.5), which states that, if d is the product of distinct primes, a minimal non-solvable normal subgroup of the linear translation complement either has the form SL(2,u) or is a pre-image of A6 or A7. The present paper is similar in method, spirit, and results to the joint one. Here there are more restrictions placed on d and weaker initial restrictions on the group.
The notation and language are more or less standard. Some of the terminology, and even some of the facts, may not be familiar to every potential reader of this paper. We finish this Introduction with a brief discussion of these matters and some remarks on notation. A minimal invariant subspace of a reducible group G will sometimes be called a minimal G-space. If V_1 is a subspace such that all of the minimal G-spaces in V_1 are isomorphic as G-modules, then V_1 will be called a homogeneous space.
The order of G is denoted by |G|. If G is the full group and G0 is a subgroup, C(G0) is the centralizer of G0 in G.
The subgroup of G which fixes C is denoted G(C) (see Passman [15]). Suppose that the 2-group in Fit G0 is a generalized quaternion group Q. Then G0/(C(Q) ∩ G0) is isomorphic to a group of automorphisms of Q.
The automorphism group of Q is a 2-group or S4 (see Passman [15]) and hence is solvable. Hence C(Q) ∩ G0 is non-solvable; by the minimal property of G0 and [5], Ḡ0 ≅ PSL(2,u) for some odd u or is equal to A7. Furthermore, if Ḡ0 ≅ PSL(2,u) and u ≠ 9, then G0 ≅ SL(2,u).
One of the key assumptions of Theorem (2.3) was that Fit G0 is fixed point free. The next few Lemmas develop the machinery for examining the case where Fit G0 is not fixed point free. Dixon [2] gives a similar development for vector spaces over an algebraically closed field. We have attempted to modify Dixon's argument to apply to vector spaces over a finite field. Let S be a minimal G-invariant subspace. By Lemma (2.7) in Dixon [2], the dimension of S is n and there is a vector v in V_F such that <vS> = V_F. Let {u_1, u_2, ..., u_n} be a basis for S. Then {vu_1, vu_2, ..., vu_n} is a basis for V_F. Label these vectors v_1, ..., v_n respectively.
Let w be an arbitrary non-zero element of V and let wu_i = Σ_j a_{ij} v_j, where the a_{ij} ∈ F. Let λ = λ(w) be an eigenvalue of the matrix (a_{ij}). If λ ∉ F, let E be an extension which contains λ.
Note that (w − λv)u_i = Σ_j a_{ij} v_j − λv_i. The determinant of the matrix (a_{ij}) − λI is zero, so the vectors (w − λv)u_i are dependent and the dimension of <(w − λv)S> is less than n.
We had vu_i = v_i. Let wu_i = w_i. Now v and w are in V_F (which is embedded in V_E). Let the mapping T be defined by w_iT = λv_i. Then T becomes a linear transformation on V_E by defining (c_1w_1 + · · · + c_nw_n)T = c_1(w_1T) + · · · + c_n(w_nT) for c_1, . . . , c_n ∈ E. (Note that T also acts as a linear transformation on V_F.) Let x be an arbitrary member of V_F. Then x + xT belongs to <(w − λv)S>. Suppose that x_1T = x_2T. Then (x_1 + x_1T) − (x_2 + x_2T) = x_1 − x_2 is a vector in V_F which belongs to <(w − λv)S>. Now <(w − λv)S> ∩ V_F is a G-invariant subspace of V_F and, since G is irreducible, is either the null space or all of V_F. Suppose that V_F ⊆ <(w − λv)S>. A basis for V_F is also a basis for V_E, so in this case <(w − λv)S> = V_E, which is a contradiction.
Hence x_1T = x_2T with x_1, x_2 ∈ V_F implies x_1 = x_2, so T is a non-singular linear transformation on V_F. If ρ ∈ G and x ∈ V_F, then (x + xT)ρ = xρ + xρT, so T commutes with G. Hence T is a scalar transformation on V_F. We had w_iT = λv_i. Hence λ ∈ F and E = F.
In particular this holds for w = v_1, v_2, ..., v_n. But n must be a power of u and divide the dimension of V. Hence n = u.
(2.8) LEMMA. (In the notation of (2.1).) PROOF. The theorem holds if W of (2.8) is non-trivial and W_0 is trivial, so suppose W_0 is non-trivial. We wish to apply (2.7). For this purpose, we can restrict our attention to a minimal G_0-space V_1. Note that the dimension of a minimal G_0-space is also a product of distinct primes. Now all of the minimal Z(G_0)-spaces in a G_0-space are isomorphic as Z(G_0)-modules. As in Hering [7], Hilfssatz 5, there is a field K so that the additive group of V_1 is a vector space over K and the elements of Z(G_0) become scalars. Now consider the action of W on a minimal W-space in V_1. Again, the dimension is a product of distinct primes, so (2.7) implies that |W/W_0| = w² and w is one of the primes dividing the dimension of V over F. Let Ḡ = G/C(W). Then G induces, by conjugation, a group of automorphisms of W. That is, there is a homomorphism from G into GL(2,w), since W/W_0 is elementary abelian of order w². The kernel of the homomorphism is the subgroup of G which centralizes W. If the subgroup of G_0 which centralizes W were non-solvable, then G_0 would centralize W. We wish to show that this cannot be the case. But, except for the w-groups, the Sylow subgroups of G_0 are included in G_2, so G_0/G_2 is a w-group. But if G_2 is solvable, G_0/G_2 must be non-solvable, so we again get a contradiction. Thus W_0 must be trivial if W exists, and W must be abelian.
(2.10) LEMMA. Suppose that the Sylow 2-groups in G/Z(G) are dihedral. (...)
PROOF. If W_0 is trivial, W is elementary abelian by (2.9). Thus if λ ∈ W and the subspace V(λ) pointwise fixed by λ is non-trivial, then W leaves V(λ) invariant. Hence W is not faithful on its minimal spaces.
Let H be a maximal normal subgroup of G included in G_0 but not equal to G_0. Then G_0/H is simple and either H = W·C(W) ∩ G_0 or W ≤ Z(G_0).
Since each homogeneous space is a direct sum of minimal spaces that are isomorphic as W-modules, W is not faithful on its minimal spaces. If V(λ) ⊆ C for some component C, then C is invariant under W. If W leaves just one or two components invariant, then G must fix or interchange these two and cannot be non-solvable. If W has 3 invariant components, every non-f.p.f. element of W must fix a subplane pointwise. Hence V(λ) is a subplane and has even dimension. This implies that k is odd, since 2d = k · dim V_1.
PROOF. Suppose that W_0 is trivial, so that (3.3) holds. Let G(V_1) be the stabilizer of V_1 in G. The index of G(V_1) in G is equal to k, so a Sylow 2-group of G(V_1) is a Sylow 2-group of G. Let Ḡ(V_1) be the induced group on V_1, i.e. Ḡ(V_1) may be identified with the factor group obtained by taking G(V_1) modulo the subgroup fixing V_1 pointwise. Then W̄ is a normal subgroup of order w in Ḡ(V_1), and all of the minimal W-spaces in V_1 are isomorphic as W-modules. As in Hering [7], Ḡ(V_1) ≤ ΓL(s,q^t) and the subgroup centralizing W̄ is isomorphic to Ḡ(V_1) ∩ GL(s,q^t) for some s, t such that st = dim V_1. Thus the index of C(W̄) ∩ Ḡ(V_1) in Ḡ(V_1) divides t and is not divisible by 4.
Hence the index of G(V_1) ∩ C(W) in G(V_1) is not divisible by 4. Let S be a Sylow 2-group of G(V_1). As pointed out at the beginning of the proof, S is then a Sylow 2-group of G. Hence S/(S ∩ C(W)) is a Sylow 2-group of G/C(W) and its order is 1 or 2. This implies that G/C(W) is solvable.
Hence G_0/(G_0 ∩ C(W)) is solvable. This is a contradiction since G_0 ∩ C(W) is solvable and G_0 is non-solvable. We conclude that W_0 must be non-trivial.
(3.5) THEOREM. Let Π be a translation plane of order q^d with kernel GF(q), where q and d are odd. Let G be a subgroup of the linear translation complement. Suppose that G is non-solvable and irreducible with a minimal normal non-solvable subgroup G_0, and that d is the product of distinct primes. Then either G_0 ≅ SL(2,u) for some odd u, or Ḡ_0 ≅ A_6 or A_7. Here Ḡ_0 = G_0/Z(G_0).
PROOF. This is a consequence of (3.4), (2. (...)
It may be worthwhile to take a look at some aspects of the ways that G_0 ≅ SL(2,u) can act on a translation plane. One possibility is that u is a power of the characteristic p and that the p-elements are affine elations.
If G_0 contains affine homologies of prime order greater than 5, then a result of the author [13] shows that G_0 contains affine elations. The group generated by these elations will be normal in G_0 and, in fact, equal to G_0.
(3.7) COROLLARY. If r divides u + 1, the conclusions of (3.6) hold even if u is not prime.
REMARK. The cases where u is not prime or where G contains affine homologies of order 3 or 5 were handled in the Kallaher-Ostrom paper [12] under the assumption that a certain p-primitive divisor of q^d − 1 (which turned out to be u) divided the order of the group induced on ℓ by G(ℓ).
Note that when (3.6) (...)
PROOF. ΓL(1,q^d), in its action on a vector space of dimension d over GF(q), has a cyclic normal fixed point free subgroup of order q^d − 1 and index d.
REMARK. In the Kallaher-Ostrom paper [12], Theorem 6.1, it turned out that d divides u − 1. A subgroup of a Frobenius complement whose order is the product of two distinct primes must be cyclic. SL(2,u) has a subgroup of order u(u − 1) which is not fixed point free for u > 5. Putting this together with (3.6), it appears that an important subcase for the possible action of SL(2,u) is the one where the orders of the non-fixed-point-free elements divide u − 1. In the context of (3.8), especially if d is prime, we again arrive at a situation where d divides u − 1.
(3.9) LEMMA. If u is not a power of p, if p and d are both odd, and if d divides u − 1, then d = (u − 1)/2 or d = (u − 1)/4.
PROOF. If (|G_0|, p) = 1 and if G_0 is absolutely irreducible, it has a complex representation of dimension 2d = u − 1, and the representation we are using can be obtained from the complex representation. (See Dixon [2].) In the other case, G_0 is reducible over a field extension, but G_0 can be obtained from a complex representation of dimension u − 1.
Suppose that u + 1 has an odd prime factor r. Let λ be an element of order r in G_0. Under the hypotheses, it follows from (3.7) that λ is fixed point free.
Let ω be a complex r-th root of 1. Then the character of λ has the form a_0 + a_1 ω + ... + a_{r−1} ω^{r−1}, where a_i is the multiplicity of the eigenvalue ω^i, so that a_0, a_1, ... are non-negative integers. If e is the dimension of the complex representation, a_0 + a_1 + ... + a_{r−1} = e. First, suppose e = (u − 1)/2. From the character table (see Dornhoff [3]), the character of λ is equal to −1. Recall that, by (3.9), d = (u − 1)/2 or (u − 1)/4. If r exists and (u + 1)/2 is odd, then u − 1 ≡ 0 mod 4, so that if d is odd, d cannot be equal to (u − 1)/2. Thus if (u + 1)/2 is prime, d = (u − 1)/4.
If r does not exist, so that u + 1 is a power of 2, then (u − 1)/4 is not an integer, so that d = (u − 1)/2 in this case.
We can use (3.11) to make a slight improvement in Theorem 6.1 of [12].
In the following corollary, u is a prime p-primitive divisor of q^d − 1, where p is prime and q = p^k. G has the usual meaning of this paper, and G_0 is a minimal non-solvable normal subgroup of G. Here and earlier in this paper, when reference is made to the group induced on ℓ by G(ℓ), it should be understood that the subgroup fixing ℓ pointwise has been factored out. We continue to assume that q and d are both odd.
(3.12) COROLLARY. Suppose that, for each component ℓ, the order of the group induced on ℓ by G(ℓ) is divisible by u, and that G_0 exists and is non-trivial. Suppose that Π is non-Desarguesian. Then at least one of the following holds.
(c) u = 2d + 1, q = p, p divides d, and G_0 ≅ SL(2,u).
PROOF. In Theorem (6.1) of [12], it is shown that, under the present hypotheses, we have case (a) or (d) or G_0 ≅ SL(2,u), q = p, u = 2d + 1. | 2017-07-29T18:58:44.645Z | 1979-01-01T00:00:00.000 | {
"year": 1979,
"sha1": "41c054f064509e9bfa6769c40a9e2840e996b733",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/ijmms/1979/341753.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4f8f6ec85989b36630fd3052f169e491de13996a",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
84199124 | pes2o/s2orc | v3-fos-license | Morphology and distribution of two epizoic diatoms (Bacillariophyta) in Brazil
(Morphology and distribution of two epizoic diatoms (Bacillariophyta) in Brazil). The epizoic diatoms Pseudohimantidium pacificum and Falcula hyalina, which live on copepods, were investigated using light and electron microscopes, based on material gathered from different marine environments along the Brazilian coast. Pseudohimantidium pacificum is reported for the first time for the Southwestern Atlantic Ocean, significantly enlarging its range of geographic distribution. This species usually covers the entire body surface of the copepods Corycaeus amazonicus and Euterpina acutifrons, and of cypris larvae of Cirripedia. Falcula hyalina uses a higher number of copepod hosts, particularly Oithona oswaldocruzii, Pseudodiaptomus richardii and Acartia spp. The valve morphology and biometrical data of both diatoms were within the range limits recorded in the literature, including the original publications. Both species occurred in all the sampling stations along the Brazilian coastline stretching from 12°S down to 28°S. Falcula hyalina had already been found as far as latitude 31°S in the Southwestern Atlantic Ocean.
Introduction
The exoskeleton of crustacean copepods constitutes a convenient habitat for a variety of epibiont microorganisms such as bacteria, microalgae and protozoans (Carman & Dobbs 1997; Walkusz & Rolbiecki 2007). Among these organisms, pennate diatoms have been commonly reported growing on different parts of the body of copepods (Carman & Dobbs 1997). Originally revealed by Giesbrecht (1892) in the Adriatic Sea, epizoic diatoms have since then been recorded in different regions around the world. They are widespread in the Pacific Ocean, from the Sea of Japan to the coasts of South America (Hiromi et al. 1985; Rivera et al. 1986). Several sparse records were made from both margins of the United States and Mexico, the Caribbean, the Mediterranean Sea and East Africa (Voigt 1960; 1961; Gibson 1979a; 1979b; Navarro 1982; Prasad et al. 1989; Garate-Lizarraga & Muñeton-Gomez 2009).
In this report, we describe two species of epizoic diatoms detected in samples from different environments along the Brazilian coast and their occurrence on the body surface of copepods. Pseudohimantidium pacificum and Falcula hyalina were examined using light and electron microscopes to detect possible morphological and metric differences from specimens described in the literature.
Material and methods
The sample collection (n = 30) used in this study was gathered in different localities along the Brazilian coastline (12°S to 28°S) between 1994 and 2011 (Tab. 1). Only the material sampled in Paranaguá Bay, Paraná State, had the host copepods identified to the species level. Zooplankton was collected using plankton nets with a 300 μm mesh size. Each sample was screened using a stereomicroscope to detect copepods with epizoic diatoms. Individuals were picked up with a micropipette and placed in 15 ml centrifuge tubes for further processing of the diatom frustules.
The diatom cells and the copepods were washed in distilled water, followed by cleaning of the frustules from organic matter according to the technique of Hasle & Fryxell (1970). Permanent slides were mounted using Naphrax (refractive index = 1.74) as a mounting medium. For light microscopy (LM), specimens were measured and photographed using an Olympus BX-51 microscope. Samples for scanning electron microscopy (SEM) were prepared by adding a drop of sample onto aluminum stubs, which were air-dried and sputter-coated with gold. Samples were examined using a Jeol JSM-6360LV scanning electron microscope at an accelerating voltage of 20 kV. Terminology followed Ross et al. (1979), with the additions of Round et al. (1990). A literature search was facilitated by consulting Gaul et al. (1993) and Henderson & Reimer (2003).
Results
The species occurred in all the samples examined (Tab. 1). In the samples collected in Paranaguá Bay, the cells of P. pacificum appeared in high abundance on the copepods Corycaeus amazonicus Dahl F. and Euterpina acutifrons Dana; a few cells were also detected living on a cypris larva of Cirripedia. Diatoms occupied almost all the available copepod exoskeleton, especially the locomotory appendages and antennae, though a few cells were attached on the dorsal surface.
Discussion
Epizoic diatoms have been recorded from different oceans around the world, despite the peculiar habitat they grow in. Surprisingly, only a few publications have reported these interesting diatoms for the South Atlantic Ocean. This is the first report of Pseudohimantidium pacificum from the Southwestern Atlantic Ocean, found along the Brazilian coast (12°S to 28°S). The species was previously recorded in the North Atlantic from the Caribbean Sea to the east coast of North America, the Pacific Ocean and the Mediterranean Sea (Gibson 1979a; Hiromi et al. 1985; Garate-Lizarraga & Muñeton-Gomez 2009).
The other species found in this study, Falcula hyalina, had been reported only for the Pacific Ocean (Japan and Western Australia) and the North Atlantic Ocean (Gulf of Maine, Florida) (Takano 1983; Hiromi et al. 1985; Prasad et al. 1989). Regarding its distribution in Brazil, F. hyalina was previously known only from the Bay of Tijucas in the state of Santa Catarina (Souza-Mosimann et al. 1989) and the Lagoon of Peixe (31°00'46"S, 51°09'51"W) in the state of Rio Grande do Sul (Donadel & Torgan 2010), the latter corresponding to the most austral report of F. hyalina in the Atlantic Ocean. In both of these places, the authors reported F. hyalina living on the copepod Acartia lilljeborgii Giesbrecht. In our material, both species were found in all the samples examined from the Brazilian coastline, encompassing a wide variety of environments, such as estuaries, bays and coastal waters, thus suggesting that their geographic distribution might be wider than previously recorded. Regarding the material gathered from Paranaguá Bay, Paraná, where the copepods and other crustaceans were identified to the species level, the epizoic diatoms presented some degree of preference in relation to the copepod hosts. Pseudohimantidium pacificum occurred on Corycaeus amazonicus and Euterpina acutifrons, confirming what has already been reported in the literature. However, Falcula hyalina was growing on a higher number of hosts, such as Acartia lilljeborgii, Acartia tonsa, Oithona oswaldocruzii and Pseudodiaptomus richardii. Of these copepods, only A. tonsa had been recorded as a host for F. hyalina; thus, our findings add three new hosts to the list of Hiromi et al. (1985). To date, the copepods mentioned above correspond to the most abundant species of the year-round zooplankton community of Paranaguá Bay and adjacencies (Lopes et al. 1998).
Biometrical data of P. pacificum from the Brazilian material agree well with the original description (Krasske 1941) and later publications (Simonsen 1970; Gibson 1978; Rivera et al. 1986). In Krasske's (1941) original material, the apical axis varied from 44 μm to 78 μm, while the transapical axis was 9-11 μm. However, the number of striae (30-32 in 10 μm) was lower than in the Brazilian specimens (30-40 in 10 μm). Furthermore, Rivera et al. (1986) found a larger range for the transapical axis, 9.8 μm to 19.0 μm, and the number of striae as low as 24 in 10 μm. Several discrete morphological differences were found in the Brazilian specimens compared with what has been published in the literature. For instance, there were 4-10 rimoportulae in the valves from Brazil, while other authors recorded a maximum of nine (Russell & Norris 1971; Rivera et al. 1986). Moreover, some valves of our material presented the external opening of the rimoportulae almost orthogonal to the ventral margin instead of parallel, as usually observed elsewhere.
The valves of Falcula hyalina examined in this work fit well into the dimensions published for other regions (Gibson 1978; Hiromi et al. 1985; Prasad et al. 1989). Measurements reported in the original description of Takano (1983) are 20-38 μm for the apical axis, 3.9-5.5 μm for the transapical axis and 20-23 striae in 10 μm. Regarding the earlier reports from Brazil, our material has dimensions similar to the specimens investigated by Souza-Mosimann et al. (1989) and Donadel & Torgan (2010) from Southern Brazil. | 2019-03-21T13:09:28.394Z | 2012-12-01T00:00:00.000 | {
"year": 2012,
"sha1": "067ce2d28678a4a080c79db8ba10bdf513393d5b",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/abb/a/YHCvMt7LNv4Yxm54RBsKJNQ/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "067ce2d28678a4a080c79db8ba10bdf513393d5b",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
43946720 | pes2o/s2orc | v3-fos-license | Analytical Study of Incremental Approach for Information Dissemination in Wireless Networks
In many scenarios, control information dissemination becomes a bottleneck, which limits the scalability and the performance of wireless networks. Such a problem is especially crucial in mobile ad hoc networks, dense networks, networks of vehicles and drones, sensor networks. In other words, this problem occurs in any scenario with frequent changes in topology or interference level on one side and with strong requirements on delay, reliability, power consumption, or capacity on the other side. If the control information changes partially, it may be worth sending only differential updates instead of messages containing full information to reduce overhead. However, such an approach needs accurate tuning of dissemination parameters, since it is necessary to guarantee information relevance in error-prone wireless networks. In the paper, we provide a deep study of two approaches for generating differential updates - namely, incremental and cumulative - and compare their efficiency. We show that the incremental approach allows significantly reducing the amount of generated control information compared to the cumulative one, while providing the same level of information relevance. We develop an analytical model for the incremental approach and propose an algorithm which allows tuning its parameters, depending on the number of nodes in the network, their mobility, and wireless channel quality. Using the developed analytical model, we show that the incremental approach is very useful for static dense network deployments and networks with low and medium mobility, since it allows us to significantly reduce the amount of control information compared to the classical full dump approach.
I. INTRODUCTION
In many existing and emerging scenarios, it is highly necessary for the networking devices to have relevant information from their neighbors. For example, various routing protocols for mobile ad hoc networks (MANET) exchange information about links and their quality [1] to distributively build a graph representing network topology. Outdated routing information may lead to loops and user data losses. Another example is dense deployment, where access points may reduce interference by coordinating transmissions in their networks with a sort of time division [2], [3], dynamic sensitivity control [4] or adaptive transmission power control [5] enabled by the novel IEEE 802.11ax amendment. Information dissemination is especially crucial for vehicular networks [6].
The research was done at IITP RAS and supported by the Russian Science Foundation (agreement No 14-50-00150).
Often, both user and control information share the same channel resources. Thus, efficient control information dissemination increases capacity for user traffic, in addition to improving scalability in terms of both the number of nodes which generate information updates and the rate of such updates. This makes the problem of control information dissemination crucial for narrow-band networks (see [7]) and networks with highly mobile devices.
Typically, in ad hoc networks, control information is broadcast without being acknowledged. Because of error-prone nature of the wireless channel, broadcast messages can be lost. That is why in many protocols (e.g. OLSR [1], RR-ALOHA [8]), such broadcast messages contain full dumps of information and are sent periodically, even if no changes occur in the network. The period of such messages is chosen as a trade-off between information relevance and channel time consumption. Despite the simplicity of the full dump approach, it is robust to packet losses since the neighboring nodes will recover lost information with the next successfully received message. Moreover, when a new node appears in the network, it receives all necessary information in the first received message. The cost for these benefits is a huge overhead (e.g. see [9]).
Another approach referred to as group-based one is proposed in the IEEE 802.11-2012 standard [10] for the deterministic channel access protocol called Mesh coordination function Controlled Channel Access (MCCA, for details see [11]). According to this approach, a node divides various pieces of information (information elements) into a relatively small number of groups and periodically sends information only about those groups that have been changed. Specifically, with MCCA protocol, a station periodically sends information about time intervals reserved for transmissions [12]. In [13], [14] various group management algorithms are studied and compared in terms of the amount of control information, using the developed analytical models and simulations. However, these papers do not take into account the relevance of information at neighboring nodes, since it is assumed that all control messages are transmitted reliably.
The popular approach to reduce control information is to interleave full dump messages with short differential updates, which contain only modified information. This approach can be implemented in two ways: with cumulative and incremental updates. The cumulative differential updates contain all information elements modified since the last full dump message. Such an approach is used by the DSDV [15] routing protocol. In contrast, incremental differential updates contain only information elements modified since the previously transmitted message (a full dump or an incremental message). This approach is adopted by the OSPF-MDR [16] and the PSR [17] routing protocols. Obviously, when all control messages are transmitted reliably, the cumulative approach produces more control information than the incremental one. However, a failure in differential update transmission for the incremental approach may lead to irrelevant information, while, for the cumulative approach, any successfully received control message makes the information relevant, except for the case when a full dump message has been lost.
In this paper, we compare the two approaches and show that the incremental approach produces less control information than the cumulative one at the same information relevance level. We develop an analytical model of the incremental approach, which allows finding its optimal parameters in terms of the minimal amount of control information subject to some predefined probability of information relevance. Using the developed analytical model, we show that the incremental approach is very useful for static dense network deployments and networks with low and medium mobility, since it allows a significant reduction of the amount of control information compared to the classical full dump approach.
The rest of the paper is organized as follows. In Section II, we specify the considered network scenario. Section III compares the cumulative and incremental approaches by means of simulation. In Section IV, we develop an analytical model of the incremental approach which allows estimating the amount of generated control information and the probability of information relevance. Based on this model, we provide an algorithm for selecting the optimal parameters. In Section V, we validate our model by simulation and evaluate the performance of the incremental approach and the developed algorithm using this model. Section VI concludes the paper.
II. SCENARIO
In this paper, we consider the following scenario. Each node of a wireless network periodically broadcasts to its neighbors control messages of two types: full dump and differential update messages. We refer to the period of control message transmission as a slot. We assume that a node transmits a control message at the beginning of each slot. To reduce the amount of sent control information, the node interleaves differential update messages with full dump ones. Let N be the transmission period of the full dump messages, i.e., a node transmits full dumps at the beginning of every N-th slot, while all other messages are differential updates.
The size of control messages and their loss probabilities depend on the rate of appearance of new information elements (e.g., the appearance of new scheduling information for channel access protocols, or new links and metric updates for routing protocols) and their lifetimes. In this paper, we assume that the number of information elements appearing at a node in a time slot has a Poisson distribution with rate λ. The lifetime of each information element has an exponential distribution with mean 1/µ slots. The size of each information element is constant and equals V_0 bits. Similar to our previous works [13], [14], we assume that the average lifetime of an information element is much longer than one slot, i.e. 1/µ >> 1. Also, we limit the number of information elements tracked by each node with a threshold R.
Let us consider a node (referred to as node A) having M neighbors. We assume that the wireless channel is error prone and the probability that neighbor i cannot decode a control message from node A containing s information elements equals p_err^(i)(s). For definiteness, we estimate p_err^(i)(s) as follows:

p_err^(i)(s) = 1 − (1 − ber^(i))^(s·V_0),    (1)

where ber^(i) is the Bit Error Rate (BER) at node i and V_0 is the size of an information element in bits. However, any other dependency can be considered as well.
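As a quick numeric check of this loss model, the sketch below evaluates the expression (1) as reconstructed here; the function name and the 16-bit element size (2 bytes, the value used in Section III) are our assumptions, not part of the original text.

```python
# Minimal sketch of the loss model (1): a message with s elements of
# v0_bits bits each is decoded only if every bit survives independent
# corruption with probability ber.
def p_err(s: int, ber: float, v0_bits: int = 16) -> float:
    """Probability that a message carrying s information elements is lost."""
    return 1.0 - (1.0 - ber) ** (s * v0_bits)

# Sanity check against Section III: ber = 6.6e-6 and R = 1000 elements
# of 2 bytes each should give roughly a 10% message loss probability.
print(p_err(1000, 6.6e-6))  # ~0.10
```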
We assume that the network topology is dynamic, i.e., the nodes are mobile. Hence, node A loses connections with its neighbors when they move far away and establishes new connections with the nodes appearing in its coverage area. In this paper, we assume that the duration of the connected phase for each neighbor (i.e. the time interval during which it is connected to node A) has the exponential distribution with mean 1/γ.
When a neighbor fails to receive an update from node A, the information at this neighbor becomes irrelevant. To obtain up-to-date information, the neighbor needs to successfully receive a full dump message from node A in the case of the incremental approach, or any type of control message in the case of the cumulative one. Let us define the information relevance probability as the probability that at an arbitrarily selected time slot all connected neighbors have relevant information generated by node A. To provide correct operation of a protocol which disseminates control information, we should guarantee that the relevance probability of that control information is higher than some threshold p_thresh.
To increase the relevance probability, nodes use an unsolicited retries scheme, i.e., they broadcast each message several times in a row. Let n_f and n_d denote the number of transmission attempts for the full dump and differential update messages, respectively. These two variables, together with the period N of full dump messages, are considered as the parameters and can be tuned in order to minimize the amount of generated control information and to keep the relevance probability above the threshold p_thresh. We estimate the amount of control information as the average number of information elements which are sent at the beginning of a slot.
III. PRELIMINARY ANALYSIS
In this section, we compare the incremental and cumulative approaches for generating differential updates. For that, we use an event-driven custom simulation program, run experiments in the scenario described in Section II, and average the obtained results over 20 simulation runs. We consider transmission of control information via a single link (i.e. M = 1); for higher values of M, we obtain similar results. Unless otherwise stated, further we set: R = 1000, γ = 0.001, V_0 = 2 bytes. BER on the considered link is set to ber = 6.6·10^−6, which means that the probability of incorrect reception of a message containing R information elements equals p_err(R) ≈ 10%.
For each of the two approaches, we find via exhaustive search the triple (N*, n_d*, n_f*) that minimizes the amount of control information subject to the relevance probability being higher than p_thresh = 0.95. Fig. 1 shows the average amount of control information for the optimal triple (for each λ and µ) as a function of the load defined as λ/(µR). We can see that for loads higher than 1.0 the amount of control information does not change, since the number of information elements tracked by node A is close to R and cannot increase further. Note that the curves corresponding to the cumulative approach grow significantly at a load of 0.4-0.5. At this point, the cumulative approach, which is more robust to packet losses by design, needs to increase the number of retries for full dumps and/or differential updates in order to satisfy the given requirement on the information relevance probability. Certainly, the incremental approach also increases the number of retries with the load. However, this leads only to a slight increase in the amount of control information, since the size of differential update messages for the incremental approach is much smaller than for the cumulative one.
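The custom simulator itself is not listed in the paper. As a rough, loss-free illustration of the bookkeeping behind this comparison, the sketch below counts the elements each scheme would transmit per slot; all names are ours, it ignores losses and retries, and the cumulative branch counts every change since the last full dump as a distinct element, which slightly overstates its message size.

```python
import numpy as np

def mean_update_size(lam, mu, N, R=1000, slots=200_000, seed=1):
    """Average elements sent per slot as (incremental, cumulative),
    ignoring channel losses and retries -- illustrative only."""
    rng = np.random.default_rng(seed)
    p_die = 1.0 - np.exp(-mu)          # per-slot deletion probability
    alive = 0                          # elements currently tracked
    changed = 0                        # changes since the last full dump
    sent_inc = sent_cum = 0
    for t in range(slots):
        died = rng.binomial(alive, p_die)
        born = min(int(rng.poisson(lam)), R - (alive - died))
        alive += born - died
        if t % N == 0:                 # full dump slot
            sent_inc += alive
            sent_cum += alive
            changed = 0
        else:                          # differential update slot
            sent_inc += born + died    # incremental: this slot's changes
            changed += born + died
            sent_cum += changed        # cumulative: changes since the dump
    return sent_inc / slots, sent_cum / slots

# e.g. mean_update_size(lam=5.0, mu=0.01, N=10)
```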
From the results presented above, we can conclude that the incremental approach outperforms the cumulative one: for all the considered loads and µ values, the incremental approach generates three times less control information. So, further in this paper, we focus on the incremental approach and develop an analytical model which allows selecting its optimal parameters (i.e. N, n_d and n_f).
IV. ANALYTICAL MODEL
Let us develop an analytical model of the incremental approach which allows estimating the average amount of generated control information and the information relevance probability. Based on this model, we propose an algorithm to optimize its performance in terms of minimizing the amount of control information subject to a given requirement on information relevance.
A. Estimation of the average amount of control information
Let us consider node A and estimate the average amount of the generated control information. For that, we need to find the stationary probabilities π_r that the node has r information elements. In our previous paper [13], we developed an analytical model that allows estimating the average amount of generated control information for the full dump approach (i.e. for the case when N = 1). The model is based on a discrete time Markov chain with state r and the time unit equal to the slot. For that chain, we have found the stationary probabilities π_r, which we further use in this paper.
Following [13], let us introduce the following notation: d is the number of information elements deleted in the current slot; n is the number of new information elements added in the current slot.
Since the lifetime of each information element has an exponential distribution, the probability of deleting a particular information element during a slot equals p̄ = 1 − e^(−µ). As the lifetimes of different information elements are mutually independent random variables, the probability of deleting d out of r information elements equals

P(d | r) = C(r, d) · p̄^d · (1 − p̄)^(r − d).    (2)

Using π_r and (2), we can find the average numbers E[r] and E[d] of the total and deleted information elements, respectively:

E[r] = Σ_{r=0..R} r·π_r,    E[d] = p̄·E[r].

At the steady state, the average number E[n] of information elements added during a slot equals the average number E[d] of information elements deleted during a slot. Since every differential update message contains the information elements which are added and deleted during the slot, the average size of a differential update message equals 2E[d]. Since a full dump message contains all information elements, its average size equals E[r]. Taking into account that each full dump message and each differential update message are repeated n_f and n_d times, respectively, we can estimate the average amount of control information per slot as follows:

V = (n_f·E[r] + (N − 1)·n_d·2E[d]) / N.    (3)
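For readers who want to reproduce these averages, the sketch below builds a per-slot chain (binomial deletions followed by Poisson arrivals capped at R), extracts π_r by power iteration, and evaluates equation (3) as reconstructed above. The chain construction follows the verbal description here rather than the exact formulation of [13], and a small R is used because the triple loop is O(R³).

```python
import numpy as np
from math import comb, exp

def average_control_info(lam, mu, R, N, n_f, n_d, iters=2000):
    """E[r], E[d] and V per equation (3), for a small illustrative R."""
    p_bar = 1.0 - exp(-mu)
    pois = np.empty(R + 1)                   # Poisson(lam) pmf, iteratively
    pois[0] = exp(-lam)
    for k in range(1, R + 1):
        pois[k] = pois[k - 1] * lam / k
    P = np.zeros((R + 1, R + 1))             # one-slot transition matrix
    for r in range(R + 1):
        for d in range(r + 1):
            b = comb(r, d) * p_bar**d * (1 - p_bar)**(r - d)  # equation (2)
            free = R - (r - d)               # room left below threshold R
            for n in range(free):
                P[r, r - d + n] += b * pois[n]
            P[r, R] += b * (1.0 - pois[:free].sum())          # capped tail
    pi = np.full(R + 1, 1.0 / (R + 1))
    for _ in range(iters):                   # power iteration for pi
        pi = pi @ P
    E_r = float(np.arange(R + 1) @ pi)
    E_d = p_bar * E_r                        # binomial mean, averaged over pi
    V = (n_f * E_r + (N - 1) * n_d * 2.0 * E_d) / N
    return E_r, E_d, V

# e.g. average_control_info(lam=0.5, mu=0.01, R=50, N=10, n_f=2, n_d=1)
```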
B. Information relevance probability
Now let us estimate the information relevance probability. First, we consider a pair of nodes (node A transmits control messages to node B) and estimate the probability p_rel that the control information obtained at node B is relevant. After that, we generalize the result to the case when node A has several neighbors.
Let us calculate the information relevance probability within the connected phase of node B (see Fig. 2). For that, we consider N consecutive slots (the full dump cycle) and assume that at the beginning of the first slot, node A sends a full dump message. We can consider different full dump cycles independently, since only the full dump message sent at the beginning of each full dump cycle can recover information relevance at node B. Hence, we only need to estimate the relevance probability p̄_rel within one full dump cycle.
From [13], the conditional probability of adding n new information elements for given r and d equals

p(n | r, d) = p_k(n) for r − d + n < R,  and  p(n | r, d) = Σ_{k≥n} p_k(k) for r − d + n = R,    (4)

where p_k(k) = (λ^k / k!)·e^(−λ). Using the stationary probabilities π_r and equations (2) and (4), we can estimate the loss probability p_f after n_f retries of a full dump message and the loss probability p_d after n_d retries of a differential update message as follows:

p_f = Σ_r π_r [p_err(r)]^(n_f),    p_d = Σ_r π_r Σ_d P(d | r) Σ_n p(n | r, d) [p_err(d + n)]^(n_d),

where p_err(r) is calculated according to equation (1) (we omit index i here).
If a full dump message is lost, the information at node B is irrelevant for the whole full dump cycle, i.e. N slots. Otherwise, if the i-th differential update (out of N − 1 differential updates) is the first lost control message within the full dump cycle, the information at node B is relevant for i time slots and irrelevant for the other N − i slots. Hence, the relevance probability within one full dump cycle equals

p̄_rel = (1 − p_f)·[Σ_{i=1..N−1} (i/N)·(1 − p_d)^(i−1)·p_d + (1 − p_d)^(N−1)].

After summing up, we obtain:

p̄_rel = (1 − p_f)·(1 − (1 − p_d)^N) / (N·p_d).    (5)

According to the considered scenario, node B can be in two states, connected and disconnected. The average duration of the connected phase equals 1/γ. Since the duration N of the full dump cycle should be small compared to the duration of the connected phase in order to meet the information relevance requirement p_rel ≥ p_thresh (note that p_thresh is close to 1), we can write 1/γ >> N. Hence, the probability that node B has relevant information, subject to the condition that node B is in the connected state and node A has already sent a full dump message, equals p̄_rel calculated according to equation (5). The startup period, i.e. the period between the time when node B connects to node A and the time of the first full dump message transmitted by node A (see Fig. 2), has a uniform distribution over the interval [0, N] with mean value N/2. Thus, the information relevance probability at node B, when it is in the connected state, can be calculated as follows:

p_rel = (1 − γ·N/2)·p̄_rel.    (6)

Now let us consider the case when node A disseminates control information to M neighbors. Assuming that all neighbors process messages and move independently, we can calculate the probability that the control information is relevant at all neighbors as follows:

p_rel^(all) = Π_{i=1..M} p_rel^(i),

where p_rel^(i) is calculated according to equation (6).
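The "summing up" step is a standard arithmetic-geometric sum; the short derivation below, written in LaTeX, assumes the per-cycle expression as reconstructed above and uses the identity Σ_{i=1}^{M} i q^{i−1} = (1 − (M+1)q^M + M q^{M+1})/(1 − q)².

```latex
% Closed form of (5); here q = 1 - p_d and M = N - 1.
\begin{aligned}
\bar{p}_{\mathrm{rel}}
 &= (1-p_f)\Bigl[\sum_{i=1}^{N-1}\frac{i}{N}\,q^{i-1}p_d \;+\; q^{N-1}\Bigr]\\
 &= \frac{1-p_f}{N}\Bigl[\frac{1 - Nq^{N-1} + (N-1)q^{N}}{p_d} + N q^{N-1}\Bigr]\\
 &= \frac{1-p_f}{N p_d}\Bigl[1 - N q^{N-1}(1-p_d) + (N-1)q^{N}\Bigr]
  \;=\; \frac{(1-p_f)\bigl(1-(1-p_d)^{N}\bigr)}{N p_d}.
\end{aligned}
```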
C. Asymptotic analysis
Let us study the asymptotic case when λ → ∞. In this case, the total number (and the average number E[r]) of information elements r in every slot is constant and equals R.
When d information elements are deleted in a particular slot, the same number of information elements are added. Hence, for the loss probabilities p_f and p_d and the average number E[d] of deleted information elements per slot we have:

E[d] = p̄·R,    p_f = [p_err(R)]^(n_f),    p_d = [p_err(2·E[d])]^(n_d).

Therefore, the average amount of control information can be recalculated as follows:

V = (n_f·R + (N − 1)·n_d·2·p̄·R) / N.

Thus, for the considered asymptotic case, we obtain closed-form expressions which allow estimating the average amount of control information and the relevance probability without the need to solve the Markov chain and find the stationary probabilities π_r, which significantly reduces the computational complexity.
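Under these asymptotic reconstructions, both metrics reduce to a few arithmetic operations. The sketch below packages them; the function name and the 16-bit element size are our assumptions, and the expressions follow equations (1), (3), (5) and (6) as reconstructed in this section.

```python
import math

def asymptotic_metrics(N, n_f, n_d, R, mu, ber, gamma, M, v0_bits=16):
    """(V, p_rel_all) for lam -> infinity."""
    p_bar = 1.0 - math.exp(-mu)
    E_d = p_bar * R                                   # deletions per slot
    p_e = lambda s: 1.0 - (1.0 - ber) ** (s * v0_bits)
    p_f = p_e(R) ** n_f                               # all n_f full dumps lost
    p_d = p_e(2.0 * E_d) ** n_d                       # all n_d diff updates lost
    V = (n_f * R + (N - 1) * n_d * 2.0 * E_d) / N
    p_rel_cycle = (1.0 - p_f) * (1.0 - (1.0 - p_d) ** N) / (N * p_d)
    p_rel = (1.0 - gamma * N / 2.0) * p_rel_cycle     # startup-period discount
    return V, p_rel ** M                              # M independent neighbors
```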
D. Tuning parameters
The performance of the incremental approach depends on its parameters (i.e. N, n_f and n_d). These parameters should be selected in order to minimize the amount of control information V subject to p_rel^(all) ≥ p_thresh. The latter inequality imposes a restriction on N, since for each node the probability that information is irrelevant cannot be less than the ratio of the average startup period N/2 to the average duration 1/γ of the connected phase. Hence, the maximal value N_max for which condition p_rel^(all) ≥ p_thresh can hold is found as follows:

N_max = ⌊2·(1 − p_thresh^(1/M)) / γ⌋.    (7)

We propose the following algorithm for selecting the optimal parameters of the incremental approach (a sketch implementing it appears after the list):
1) Find the maximal value N_max according to (7).
2) Find the triples (N, n_f, n_d) for which condition p_rel^(all) ≥ p_thresh holds, e.g., using exhaustive search (N = 1..N_max, with n_f and n_d within reasonable limits, e.g. n_f, n_d = 1..7).
3) Choose from the triples found at step 2 the triple (N*, n_f*, n_d*) providing the minimal amount of control information V according to (3).
In Section V, we also consider the triple (Ñ, ñ_f, ñ_d), which is found with the same algorithm for the case λ → ∞, and show that this triple provides close-to-optimal results.
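A minimal sketch of this three-step search, using asymptotic_metrics() from the previous sketch (so it tunes the suboptimal asymptotic triple (Ñ, ñ_f, ñ_d) rather than the exact optimum, and relies on the equation (7) bound as reconstructed above):

```python
def tune(R, mu, ber, gamma, M, p_thresh=0.95, max_retries=7):
    """Return (N, n_f, n_d) minimizing V subject to the relevance
    constraint, or None if no feasible triple exists."""
    # Step 1: upper bound on N from inequality (7).
    N_max = int(2.0 * (1.0 - p_thresh ** (1.0 / M)) / gamma)
    best, best_V = None, float("inf")
    # Steps 2-3: exhaustive search over the feasible triples.
    for N in range(1, N_max + 1):
        for n_f in range(1, max_retries + 1):
            for n_d in range(1, max_retries + 1):
                V, p_all = asymptotic_metrics(N, n_f, n_d, R, mu, ber, gamma, M)
                if p_all >= p_thresh and V < best_V:
                    best, best_V = (N, n_f, n_d), V
    return best

# e.g. tune(R=1000, mu=0.01, ber=6.6e-6, gamma=0.001, M=10)
```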
V. NUMERICAL RESULTS

A. Model validation
Let us estimate the accuracy of the analytical model developed in Section IV. For that, we vary µ and the load, and compare the average amount of control information sent per slot obtained with the analytical model and simulations. We consider the optimal parameter values (N*, n_f*, n_d*), which are chosen according to the algorithm in Section IV. We also consider the optimal parameter values (Ñ, ñ_f, ñ_d) for the asymptotic case λ → ∞. As in the preliminary analysis presented in Section III, we validate the analytical model in a single link scenario. We set threshold p_thresh = 0.95, mobility γ = 0.001 and BER corresponding to p_err(R) = 10%. Fig. 3 shows that the curves obtained with the analytical and simulation models almost coincide. Since the algorithm for selecting (N*, n_f*, n_d*) requires an accurate estimation of both the information relevance probability p_rel and the amount of control information V, we can conclude that the developed analytical model provides high accuracy.
We can see that the amount of sent control information increases with the load and reaches its maximal value at high load (load > 1.0), where the estimations provided by the analytical model (solid line) and the asymptotic analysis (dashed line) coincide. Note that the parameter configuration (Ñ, ñ_f, ñ_d) provides results very close to the optimal ones for medium and high load values. Specifically, for λ/(µ·R) ≥ 0.5 the amount of control information for both configurations is almost the same. Moreover, the closed-form expressions obtained in Section IV allow tuning the parameters online. Hence, further we investigate the incremental approach in the asymptotic case λ → ∞.
B. Performance evaluation
Now let us evaluate the performance of the incremental approach combined with the algorithm for tuning its parameters, based on the analytical model. The performance of the approach depends on the scenario parameters, such as the mobility γ, the number of neighboring nodes M, the BER at each link ber^(i), and the information element generation parameters λ and µ. Due to the lack of space, in all the experiments below we set µ = 0.01; however, our experiments show that for other values of µ the results are quite similar. Fig. 4 shows the dependencies of the amount of control information V and the optimal full dump cycle duration Ñ on the mobility γ for M = {10, 20, 50}. For all nodes, we set BER to the same value, which corresponds to p_err(R) = {1%, 10%}. Specifically, in Fig. 4(a) we consider the ratio of the amount of control information V for the incremental approach with parameter configuration (Ñ, ñ_f, ñ_d) to the amount of control information V_full_dump for the full dump approach. Here, the full dump approach means that the node broadcasts full dump messages in every slot and makes the minimal number of retries needed to satisfy the requirement p_rel^(all) ≥ p_thresh. Note that the full dump approach is a special case of the incremental one with the full dump cycle set to 1. We can see that V rises with γ, as higher mobility requires more frequent full dump message transmissions. Besides, we can see that V also rises with BER, since the node has to increase the numbers of retries n_f and n_d for the messages. At a particular level of mobility, the amount of control information generated by the node increases with the number of neighboring nodes, because it needs to maintain a given information relevance level at all neighbors. Fig. 4(b) shows that the full dump cycle duration Ñ decreases when the mobility, the BER and the number of stations increase.
It should be noted that N_max calculated according to (7) can reach 0, i.e., at some critical level of mobility, a node can no longer tune the parameters to satisfy condition p_rel^(all) ≥ p_thresh. Consequently, we can see that the curves have no points in the high mobility area γ > γ_critical. Specifically, the smaller the number of neighboring stations, the higher the critical level γ_critical.
So, we can conclude that the incremental approach allows significantly reducing the amount of generated control information compared to the classical full dump approach, while providing a high level of information relevance for dense network deployments with low and medium mobility.
VI. CONCLUSION

In this paper, we have studied the problem of control information dissemination in error-prone wireless networks. We have compared two approaches (cumulative and incremental) for generating differential update messages and have shown that the incremental approach generates a significantly lower amount of control information than the cumulative one while providing the same level of information relevance. We have developed an analytical model which allows tuning the parameters of the incremental approach in order to minimize the amount of generated control information and to meet a given requirement on information relevance. Numerical results have shown a high accuracy of the developed analytical model. In addition, for the asymptotic case we have provided closed-form expressions which allow finding suboptimal parameters with a low complexity algorithm.
Using the developed analytical model, we have studied how the performance of the incremental approach depends on the number of nodes in the network, their mobility and wireless channel quality. The results have shown that at low and medium mobility the incremental approach significantly reduces the amount of generated control information compared to the classical full dump approach and at the same time provides a high level of information relevance. So, the incremental approach with the proposed algorithm for tuning its parameters is very useful for static dense network deployments and networks with low and medium mobility.
In our further work, we are going to study the efficiency of the incremental approach combined with the proposed algorithm for tuning its parameters using system level simulations (e.g. using NS-3). In particular, we are going to consider existing channel access or routing protocols which require control information dissemination and more realistic mobility models. Also, we are going to compare the performance of different approaches for control information dissemination, including the group-based approach, in various scenarios. | 2018-05-23T13:07:04.500Z | 2018-04-03T00:00:00.000 | {
"year": 2020,
"sha1": "117585c696ab721ca49a692bd14e2669e610d0f5",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2008.02005",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "4d910ddf3dcf3981b9317291b83e747e3a3a4585",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
21085073 | pes2o/s2orc | v3-fos-license | Intracellular Targeting of CEA Results in Th1-Type Antibody Responses Following Intradermal Genetic Vaccination by a Needle-Free Jet Injection Device
The route and method of immunization, as well as the cellular localization of the antigen, can influence the generation of an immune response. In general, intramuscular immunization results in Th1 responses, whereas intradermal delivery of DNA by gene gun immunization often results in more Th2 responses. Here we investigate how altering the cellular localization of the tumor antigen CEA (carcinoembryonic antigen) affects the quality and amplitude of DNA vaccine-induced antibody responses in mice following intradermal delivery of DNA by a needle-free jet injection device (Biojector). CEA was expressed either in a membrane-bound form (wild-type CEA) or in two truncated forms (CEA6 and CEA66) with cytoplasmic localization, where CEA66 was fused to a promiscuous T-helper epitope from tetanus toxin. Repeated intradermal immunization of BALB/c mice with DNA encoding wild-type CEA produced high antibody titers of a mixed IgG1/IgG2a ratio. In contrast, utilizing the DNA construct that resulted in intracellular targeting of CEA led to a reduced capacity to induce CEA-specific antibodies, but instead induced a Th1-biased immune response.
INTRODUCTION
The route and method of DNA delivery can impact the outcome of vaccination with gene-based constructs. By targeting the same antigen to different tissues by intramuscular injection or biolistic (gene gun) intradermal immunization, the resulting immune response can be of either a predominantly Th1 or a Th2 type [1], favoring either the cellular or humoral arm of the immune system. In addition to the route and method of delivery, the nature of the induced immune response is also influenced by the localization of the expressed antigen. Several investigations into how cellular localization of the antigen affects the generation of an immune response after regular intramuscular [2,3,4,5,6,7,8] or intradermal gene gun [9] immunization have been reported. A study on intramuscular injection of different forms of the model antigen ovalbumin (OVA) showed that secreted OVA induced higher antibody levels than a membrane-bound or cytoplasmic form of the antigen. While immunization with a gene encoding a secreted form of OVA led to a Th2-biased immune response, immunization with genes encoding membrane-bound and cytoplasmic OVA resulted in a Th1-type response [2]. Similarly, intradermal gene gun immunization of cytoplasmic OVA resulted in a Th1-type response, while the secreted or membrane-bound forms of OVA induced a mixed response [9].
While gene gun immunization is carried out by propelling DNA-coated gold particles directly into the cells of the skin, the needle-free jet injection device, Biojector, delivers DNA as a solution by creating an ultrafine stream of high-pressure fluid that penetrates the skin, and is distributed in the dermis or intramuscularly depending on the amount of pressure used. Although the location of administration is the same, the actual delivery or uptake of DNA into the expressing cells might be entirely different. Therefore, it is possible that the influence of the localization of the expressed antigen on immune stimulation could differ between the two methods.
For cancer vaccination, the generation of a robust Th1-type immune response is usually considered preferable. Th1 cells produce proinflammatory cytokines like IFNγ and TNFα, which support the stimulation of tumor-specific CD8+ T cells with cytotoxic capacity. Moreover, the Th1 milieu shifts the balance of the humoral immune response towards production of antibodies of IgG1 and IgG3 subclasses in humans [10] and IgG2a in mice [11,12], which are important for humoral effector functions, such as complement lysis and antibody-dependent cellular cytotoxicity (ADCC) [13]. Therefore, knowledge about how antigen characteristics, immunization method, and the route of delivery influences the generation of an antitumor response is of importance.
We have previously had good experience using the Biojector for intradermal delivery of HIV DNA vaccines in both mice and humans [14,15,16], and plan on employing this device for cancer vaccination of human subjects in the future. Here we investigate how cellular antigen localization affects the quality and amplitude of CEA DNA vaccine-induced antibody responses following intradermal Biojector immunization.
Plasmids and Antibodies
The three CEA-containing plasmid constructs, p91023(B), pKCEA6, and pKCEA66, were previously described [17]. Briefly, p91023(B) contains the full-length, wild-type CEA sequence driven by the adenovirus major late promoter and with an SV40 poly(A) tail [18]. pKCEA6 encodes a truncated form of wtCEA in which the N- and C-terminal signal sequences were removed [17]; the remaining part of the CEA gene was inserted into the pKCMV vector. This vector holds a CMV promoter, an HPV16 poly(A) signal, and an Escherichia coli origin of replication, and encodes kanamycin resistance. In pKCEA66, the truncated gene was fused to the promiscuous helper T-cell epitope QYIKANSKFIGITEL (representing amino acids 830−844 of tetanus toxoid) [17]. Mouse monoclonal antibodies directed against human CEA were clone II-7 (DakoCytomation Norden AB, Stockholm, Sweden), Col-1 (BD Pharmingen, Franklin Lakes, NJ) and 1C11 (Abcam, Cambridge, U.K.). The rabbit antihuman CEA ab15987 was from Abcam Ltd (Cambridge, U.K.). TRITC-labeled goat antimouse IgG and FITC-labeled swine antirabbit IgG were obtained from DakoCytomation (Stockholm, Sweden). HRP-conjugated rabbit antimouse IgG and HRP-conjugated goat antimouse IgG were from DakoCytomation (Stockholm, Sweden). The HRP-conjugated goat antihuman antibody was purchased from Bio-Rad Laboratories (Richmond, CA), and the HRP-conjugated rabbit antigoat antibody was from DakoCytomation (Stockholm, Sweden).
Western Blot Analysis of CEA Expression
HeLa or HEK293 cells were transiently transfected using Lipofectamine 2000 (Invitrogen AB, Stockholm, Sweden) according to the manufacturer's recommendation. After 48 h, transfected cells were detached by trypsinization. After washing in PBS, the cells were lysed in Laemmli buffer (Bio-Rad, Hercules, CA). Gel electrophoresis and immunoblotting were performed using the Readygel Electrophoresis System (Bio-Rad, Hercules, CA).
Immunofluorescence
Cells growing on glass cover slips were transiently transfected with CEA using Lipofectamine 2000. The cells were fixed in ice-cold acetone/methanol (80:20) for 10 min at +4°C. After fixation, the cover slips were kept at room temperature for 30 min, washed two times with PBS, and incubated with CEA-specific antibodies (mixture of mouse monoclonals clone II-7 1:10, Col-1 1:15, and 1C11 1:15 or rabbit anti-CEA IgG 1:200) in PBS for 30 min at room temperature. After washing to remove unbound antibody, specific staining was visualized by incubation with TRITC-labeled goat antimouse IgG or FITC-labeled swine antirabbit antibodies. Following additional washing in PBS and water, slides were mounted in PBS/glycerol (1:9) and kept in the dark at +4°C until analysis. Surface staining for CEA was performed on non-permeabilized HeLa cells using the Col-1 monoclonal (1:10 dilution). After staining the cells were fixed in 4% paraformaldehyde.
CEA Quantification in Growth Medium
Growth medium from HeLa cell cultures, transiently transfected with the different CEA constructs, was collected 72 h post-transfection, centrifuged to remove potential residual cellular debris, and stored at -20°C for subsequent quantification of soluble CEA content. Culture supernatants from transfected cells were concentrated using Centricon centrifugal filter devices Ultracel YM-50 (Millipore, Billerica, MA) with a molecular cut-off of 50 kDa, according to the manufacturer's recommendation.
The CEA protein content in growth medium from transfected cells was quantified by a CEA inhibition ELISA as described previously [17]. Briefly, rCEA-coated plates were blocked with 5% milk in PBS for 2 h at 37°C. Concentrated growth medium from transfected cells and recombinant CEA protein were serially diluted in 2.5% milk in PBS and incubated for 1 h at 37°C with monkey antihuman CEA sera at a final dilution of 1/6000. Growth medium from the CEA-expressing human tumor cell line LS174T was used as a positive control. Plates were washed with ELISA buffer (0.05% Tween20, 0.15 M NaCl in dH2O) and the samples preincubated with monkey sera were added to the plates. After 2 h of incubation at 37°C, plates were washed and goat antihuman HRP-conjugated IgG diluted 1/3000 in 1.25% milk in PBS was added. Two hours later, the plates were developed by addition of O-phenylene diamine buffer. The reaction was stopped by addition of 2.5 M H2SO4, and the absorbance was measured at 490-650 nm. The amount of CEA protein present in growth medium was calculated from a titration curve obtained by serial dilution of rCEA. Expression of the transgenes was verified by Western blot analysis of transfected cell lysates.
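The text only states that concentrations were read off an rCEA titration curve. As one common way to implement that readout, the sketch below fits a four-parameter logistic to the standards and inverts it for the samples; the 4PL choice, the function names and the starting values are assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic4(x, bottom, top, ec50, hill):
    """Four-parameter logistic standard curve (OD as a function of conc.).
    Standards are assumed to sit at non-zero concentrations."""
    return bottom + (top - bottom) / (1.0 + (x / ec50) ** hill)

def quantify(sample_od, std_conc, std_od):
    """Fit the rCEA standard curve, then invert it for sample ODs."""
    p0 = [min(std_od), max(std_od), float(np.median(std_conc)), 1.0]
    (bottom, top, ec50, hill), _ = curve_fit(
        logistic4, std_conc, std_od, p0=p0, maxfev=10_000)
    lo, hi = sorted((bottom, top))
    # Keep ODs strictly inside the fitted range before inverting the curve.
    od = np.clip(np.asarray(sample_od, float), lo + 1e-9, hi - 1e-9)
    return ec50 * ((top - bottom) / (od - bottom) - 1.0) ** (1.0 / hill)
```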
CEA ELISA
Plates were coated with 0.1 μg per well of rCEA (Protein Science, Meriden, USA) diluted in 0.05 M Na2CO3 (pH 9.6) and incubated at room temperature overnight. After washing in ELISA buffer (0.05% Tween20, 0.15 M NaCl in dH2O), the plates were blocked in 5% milk in PBS for 2 h. Sera from immunized mice diluted in 2.5% milk in PBS were added to the plates, and following incubation overnight at 37°C, excess sera were removed by washing in ELISA buffer. After a 2-h incubation with a goat antimouse HRP conjugate diluted 1:4000 in 1.25% milk to detect CEA-specific antibodies, the plates were washed with ELISA buffer and developed by addition of O-phenylene diamine buffer (Sigma) activated with H2O2. After 10 min, the reaction was stopped by addition of 2.5 M H2SO4 and the optical density was read at 490 and 650 nm. A monoclonal mouse anti-CEA Ab (1C11) (Abcam, Cambridge, U.K.) diluted 1/100 was used as a positive control. For IgG1/IgG2a subclass determination, CEA-specific antibodies were detected using goat antimouse IgG1 or IgG2a antibodies at a 1:5000 dilution (DAKO Sweden AB, Stockholm, Sweden), followed by an HRP-conjugated rabbit antigoat antibody diluted 1:4000.
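Endpoint titers like those reported in the Results are typically read as the reciprocal of the last serum dilution whose signal stays above a cutoff; the text does not state the cutoff rule, so the sketch below (mean blank + 3 SD is one common choice) is an assumption.

```python
def endpoint_titer(dilutions, ods, cutoff):
    """Reciprocal of the last serial dilution with OD above the cutoff.
    dilutions: reciprocal dilutions in increasing order, e.g. 1e2..1e6."""
    titer = 0
    for dil, od in sorted(zip(dilutions, ods)):
        if od > cutoff:
            titer = dil
        else:
            break
    return titer

# e.g. endpoint_titer([1e2, 1e3, 1e4, 1e5, 1e6],
#                     [2.1, 1.8, 1.1, 0.4, 0.1], cutoff=0.2)  # -> 1e5
```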
Expression of the CEA Plasmid Constructs in Human Cells
Mammalian expression of the CEA DNA vaccine constructs was analyzed by Western blot on transiently transfected HEK293 (Fig. 1) or HeLa (data not shown) cells. All three proteins (wtCEA, CEA6, CEA66) were readily detected at 48 h post-transfection (Fig. 1). Wild-type CEA has a molecular weight of about 180-200 kDa, and a band of the corresponding size was detected in total cell lysates from p91023(B)transfected cells (Fig. 1) matching the size of CEA detected in a lysate of LS174T cells, a cell line expressing endogenous CEA (Fig. 1). Cells transfected with the wild-type construct also gave rise to a band of around 150 kDa, matching that of an early, not yet fully glycosylated, form of CEA [19]. In contrast, detection of CEA in samples from cells expressing the truncated CEA proteins CEA6 and CEA66 (lacking signal peptides) revealed a band of about 80 kDa, corresponding to the reported size of nonglycosylated CEA (78 kDa) [20].
Removal of the CEA Signal Peptides Alters Protein Localization and Prevents Secretion of CEA from Human Cells
In order to fully understand how the cellular localization of the antigen affects the immunization outcome following intradermal Biojector administration of the vaccine, we studied the localization of the different CEA protein products in detail. Anticipating that removal of the N-terminal signal sequence of CEA would prohibit cotranslational translocation of the CEA protein into the endoplasmic reticulum, thereby preventing transport of the truncated CEA proteins to the cell surface, we performed immunofluorescent staining of CEA-expressing HeLa cells to analyze the effects of signal sequence deletion on the cellular distribution of CEA. As shown in Fig. 2, wild-type CEA, which is attached to the plasma membrane by a GPI-anchor [21,22], accumulated around the edges of the cells, a clear sign of membrane localization. In contrast, the truncated CEA proteins CEA6 and CEA66 were distributed throughout the cytoplasm and showed no signs of increased staining at the cell boundaries. Interestingly, cells expressing the CEA6 construct often displayed perinuclear aggregation of CEA protein (Fig. 2B), a phenomenon that was not as common in cells expressing the CEA66 construct containing the Tet-epitope. Since the normal site of expression of wild-type CEA is on the outside of the plasma membrane [18,21,22], we also performed immunofluorescent staining of nonpermeabilized cells to verify and more accurately distinguish between plasma membrane localization of the wild-type protein and the intracellular staining pattern of the truncated constructs. As shown in Fig. 3, no CEA protein could be detected on the surface of cells expressing either the CEA6 or CEA66 constructs, as assessed by fluorescence microscopy. However, cells expressing the wild-type form of CEA displayed a homogeneous expression of CEA covering the entire cell surface.
CEA is shed from the surface of both normal and cancerous cells of the colon via a nonproteolytic mechanism mediated by a phospholipase [22,23,24]. Since the production of soluble CEA protein products could have an effect on what type of immune responses are induced by the modified CEA vaccine constructs, we investigated whether the truncated forms of CEA also could be secreted from the expressing cell. As shown in Fig. 4, culture supernatant from HeLa cells transfected with wtCEA or the LS174T cell line expressing endogenous CEA contained similar amounts of soluble CEA. In contrast, cells expressing the truncated CEA6 or CEA66 constructs did not secrete detectable levels of CEA (Fig. 4), arguing that the truncated CEA protein products indeed are retained inside the cell.
Cytoplasmic Antigen Localization Following Intradermal Biojector Immunization Results in Reduced Antibody Responses in BALB/c Mice, but Shifts Antibody Responses Towards a Th1 Profile
To investigate how intracellular targeting of CEA would impact the induction of CEA-specific antibody responses after intradermal Biojector vaccination, BALB/c mice were immunized three times at 4-week intervals, using the different CEA constructs. At 14 days after the third immunization, serum from mice immunized with a construct encoding wild-type CEA contained extremely high titers of CEA-specific antibodies, with an endpoint titer of 10^6 (Fig. 5). Immunization with CEA6 DNA still resulted in high CEA-specific antibody titers of 10^4 (Fig. 5); however, repeated intradermal administration of plasmid DNA encoding CEA66 (CEA6 fused to a tetanus T-helper epitope) did not raise the level of antigen-specific antibodies above background (Fig. 5, data not shown).
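For context, endpoint titers such as those quoted above are typically read off a serial-dilution ELISA as the highest dilution still above an assay cutoff. The sketch below only illustrates that arithmetic; the dilution series, OD readings, and cutoff rule are invented examples, not the assay parameters used in this study.

```python
# Illustrative endpoint-titer calculation for a serial-dilution ELISA.
# All numbers below are hypothetical.

def endpoint_titer(dilutions, ods, blank_od, cutoff_factor=2.0):
    """Return the highest dilution whose OD exceeds cutoff_factor * blank."""
    cutoff = cutoff_factor * blank_od
    positive = [d for d, od in zip(dilutions, ods) if od > cutoff]
    return max(positive) if positive else None

dilutions = [10**2, 10**3, 10**4, 10**5, 10**6, 10**7]
ods       = [2.10,  1.85,  1.20,  0.65,  0.31,  0.08]
print(endpoint_titer(dilutions, ods, blank_od=0.05))  # -> 1000000 (titer 10^6)
```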
We also performed IgG2a/IgG1 subclass analysis of the induced CEA-specific antibody responses to estimate the induced T-helper profile. While we considered the antibody response to CEA66 to be too low for subclass analysis, repeated intradermal immunization with wild-type CEA led to a robust and balanced immune response, with an IgG2a/IgG1 ratio of approximately 1 (Fig. 6A,C). In contrast, the IgG2a/IgG1 ratio in CEA6-induced responses was around 2, indicating a shift towards a Th1 type of immune response (Fig. 6B,C).
DISCUSSION
To investigate how cellular antigen localization influences immune induction following intradermal needle-free jet injection, we have employed three CEA-encoding DNA constructs: wtCEA, CEA6, and CEA66. The constructs were compared with respect to the expression pattern and cellular localization of the protein products in mammalian cells, and investigated in relation to their capacity to induce CEA-specific antibody responses in mice.
All three constructs could be readily expressed in human HeLa (Fig. 4B) and HEK293 cells (Fig. 1). The detection signal from cells expressing the wild-type CEA was always stronger than that obtained with the constructs encoding the truncated CEA proteins. There can be several explanations for this. Differences in promoter strength between the adenovirus major late promoter controlling wtCEA expression from the p91023(B) plasmid and the CMV promoter controlling expression of the truncated CEA6 and CEA66 constructs from the pKCMV vector are a possible, however unlikely, explanation. A study comparing the activity of several promoters, including the CMV promoter and the adenovirus 2 major late promoter, demonstrated that the CMV promoter was superior to all other promoters examined [25]. Another explanation could be that the antibody (clone II-7) used by us to detect CEA expression in Western blots has a higher affinity for the glycosylated wild-type CEA than the nonglycosylated truncated forms of CEA. Indeed, analysis of the epitope specificity of this antibody suggested that antibody binding occurs to a conformation-dependent epitope involving carbohydrate structures [26], which presumably would be absent on the nonglycosylated truncated forms of CEA (CEA6, CEA66). A third explanation could be differences in subcellular localization and degradation of the CEA protein products. Intracellular proteins like CEA6 and CEA66 should be more accessible for proteasomal degradation than the wtCEA, an extracellular membrane protein, which could result in a lower net amount of whole protein in the cell during steady-state conditions [27]. The sequence of wild-type CEA contains both N- and C-terminal signal peptides. The N-terminal signal peptide ensures proper cotranslational translocation of the wild-type CEA polypeptide into the ER, where it is heavily glycosylated, and its subsequent transport to the plasma membrane [19]. During posttranslational processing of the protein, the C-terminal signal sequence is removed and replaced with a glycosylphosphatidylinositol (GPI) membrane anchor [21,28]. Indeed, intracellular immunofluorescent staining of cells expressing the wild-type CEA construct revealed a staining pattern typical of a membrane protein, with intensified staining around the edges of the cell, indicating accumulation of the CEA protein at the plasma membrane (Fig. 2), a finding that was confirmed by immunofluorescent staining of surface CEA on nonpermeabilized cells (Fig. 3).
Our modifications of the CEA genes in the CEA6 and the CEA66 constructs, which include the removal of both signal peptides, should prevent the transfer of CEA polypeptides into the endoplasmic reticulum and result in the production of nonglycosylated protein products, which are retained in the cytoplasm. Cells expressing the truncated CEA proteins (CEA6, CEA66) displayed a more even cytoplasmic staining pattern expected of an intracellular protein (Fig. 2), but did not express CEA at the plasma membrane (Fig. 3). In detail, CEA6-expressing cells (lacking both signal peptides) often exhibited perinuclear aggregation of CEA6 protein (Fig. 2B), perhaps as a consequence of defective intracellular transport. Intracellular staining for the CEA66 protein also revealed the appearance of cytoplasmic vacuoles from which CEA66 protein was excluded (Fig. 2), a phenomenon that may be associated with the function of the translocation domain of tetanus toxin, from which the epitope is derived [29]. That the truncated CEA protein products indeed are retained within the producing cell was confirmed by performing a CEA inhibition ELISA on concentrated culture supernatants from cells expressing the different CEA constructs. Supernatant from HeLa cells expressing wild-type CEA contained similar levels of soluble CEA protein, as did supernatants from the LS174T cell line expressing endogenous CEA (Fig. 4). In contrast, no soluble CEA could be detected in supernatants from cells expressing the CEA6 or CEA66 constructs, demonstrating that truncated CEA proteins are retained within the expressing cell.
The cellular localization of the CEA antigen did have an impact on the resulting immune responses. As expected, immunization with the wtCEA construct, resulting in production of both membrane-bound and soluble CEA, generated antibody titers of high magnitude (Fig. 5). This is in agreement with previous studies showing that secreted antigens are more efficient in inducing antibody response, both after intramuscular and intradermal gene gun immunizations [2,7,8,9]. On the other hand, intracellular targeting of CEA through deletion of the CEA signal sequences resulted in a decreased, but still potent, antibody response (Fig. 5). The decreased capacity of cytoplasmic CEA to stimulate a humoral response could be a result of increased intracellular degradation of the protein product, as suggested by the Western blot results (Fig. 1). This would reduce the availability of native protein for efficient antibody induction [30,31,32]. Interestingly, fusion of a promiscuous tetanus T-helper epitope to the truncated CEA construct (CEA66) did not result in increased antibody production, but rather had a negative impact on the induction of antibody responses to CEA (Fig. 5). It is possible that the addition of this epitope, stemming from a protein translocation domain, further prohibited release of whole CEA66 protein for B-cell receptor recognition. However, the negligible antibody response observed after repeated immunization with CEA66 DNA could be readily boosted by immunization with recombinant CEA protein, demonstrating the antibody priming capacity of CEA66 DNA. Previously, tetanus T-helper epitopes in DNA vaccines have mostly been used in combination with secreted or soluble antigens [33,34,35].
FIGURE 6. Cytoplasmic antigen localization tilts the antibody response to CEA towards a Th1 profile. Three weeks after the third immunization, serum was collected and pooled before being analyzed for CEA-specific IgG1 and IgG2a antibodies. The levels of CEA-specific antibodies of IgG1 and IgG2a subclasses from animals immunized with wtCEA (A) and CEA6 (B) were determined. The IgG2a/IgG1 subclass ratios at serum dilutions of 1:100 and 1:1000 are shown in panel C. One representative experiment out of two performed is shown in the figure.
The main source of antigen for presentation on MHC class II molecules, which are recognized by CD4+ T-helper cells, derives from internalized extracellular proteins. Therefore, it is possible that conjugation of the Tet epitope to a nonsecreted antigen does not allow for optimal induction of T-cell help since the amount of exogenous antigen available for uptake by antigen-presenting cells will be limited.
The ultimate goal of cancer vaccination is the induction of cytotoxic T cells that can effectively target and lyse antigen-presenting tumor cells. Therefore, a tumor DNA vaccine should optimally lead to the induction of a Th1 type of immune response, which is generally considered to drive cellular immune responses [36]. A Th1-type immune response is associated with production of cytokines like IL-2 and IFNγ [36], but also affects what subclasses of antibodies are induced. In mice, a Th1 response results in a higher production of IgG2a over IgG1 antibodies, whereas a Th2 response favors IgG1 over IgG2a [11,12]. Here, repeated intradermal administration of wtCEA DNA induced a mixed Th1/Th2 response (Fig. 6A,C). In contrast, immunization with CEA6 DNA, encoding a truncated form of CEA lacking both the N-and C-terminal signal peptides, resulted in an IgG2a/IgG1 antibody ratio of about 2, clearly shifting the immune response towards a desired Th1-type response (Fig. 6B,C). This shift towards a Th1 response resulting from intradermal immunization with an intracellular antigen could be explained by the reduced amount of soluble antigen in its native form, available for B-cell priming in the lymph node [2,37], according to the belief that antigen dose can influence the Th1/Th2 balance of an immune response [37]. This effect could be further augmented by the different glycosylation patterns of the CEA antigens since dendritic cells are more efficient in capturing and internalizing glycosylated antigens for presentation through the MHC class II pathway [38,39].
The Biojector has been used to deliver a wide range of genetically encoded antigens, e.g., HIV, rotavirus, herpes, and dengue, to both animals and human subjects, resulting in potent humoral as well as cellular immune responses [14,16,40,41,42]. However, to our knowledge, this is the first study investigating how antigen localization might influence the induction of antibody responses after intradermal immunization with the Biojector. In conclusion, this study demonstrates that cellular localization of the DNA-encoded antigen affects both strength and quality of the humoral immune response resulting from intradermal Biojector immunization. Expression of the tumor antigen CEA as a membrane protein resulted in a mixed Th1/Th2 response with high antibody titers. Targeting of CEA to the cytoplasm reduced the antibody-stimulating capacity, but more importantly, led to an increased production of Th1-type antibody subclasses that are associated with humoral effector functions [13]. The results have important implications for design of DNA-encoded antigens intended for intradermal vaccine delivery by Biojector. | 2018-04-03T05:29:18.375Z | 2007-06-12T00:00:00.000 | {
"year": 2007,
"sha1": "180b240d200a6e59d4306b85ba3904ecbd5110af",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/tswj/2007/698326.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "94bb2d5f018f0ba33acc91a7674ab752d2331c5c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
16476804 | pes2o/s2orc | v3-fos-license | Potential role of ixekizumab in the treatment of moderate-to-severe plaque psoriasis
Background Psoriasis is a debilitating autoimmune skin disease that affects 2%–3% of the world’s population. Patients with moderate-to-severe plaque psoriasis suffer from a decreased quality of life as well as comorbidities. Newer biological agents have been shown to be more effective than traditional therapies. In this article, we assess the potential role of ixekizumab, an anti-interleukin (IL)-17 antibody, in treating moderate-to-severe plaque psoriasis. Method We reviewed PubMed for articles regarding ixekizumab and the epidemiology and management of plaque psoriasis. Results In a Phase I clinical trial, treatment with ixekizumab resulted in both clinical and histopathologic improvement of psoriasis, which suggests that IL-17 may be a key driver in the pathogenesis of psoriasis. In a Phase II clinical trial, treatment with ixekizumab resulted in rapid clinical improvement of psoriasis, which lends further support to its role as an effective treatment for patients with chronic moderate-to-severe plaque psoriasis. Reductions in Psoriasis Area and Severity Index (PASI) score are comparable to those associated with currently marketed biologics. Conclusion Literature concerning the effects of ixekizumab on chronic moderate-to-severe plaque psoriasis is currently limited to two clinical trials. Results suggest that ixekizumab shows great therapeutic promise. However, more large-scale and long-term trials are needed to establish safety and efficacy.
Background
Psoriasis is a chronic, inflammatory autoimmune skin disease that affects 2%-3% of the world's population. 1 Most patients with psoriasis have plaque psoriasis, or psoriasis vulgaris. Patients with over 5% body surface area (BSA) involvement or psoriasis of the palms, soles, head and neck, or genitalia are considered to have moderate-to-severe psoriasis 2 and require treatment with phototherapy, traditional systemic agents, and/or newer biological agents. 1 Although psoriasis can appear at any age, onset before age 30 is most common. 3 Thus, most patients unfortunately are affected during the most active and productive years of their life. Psoriasis is associated with a decreased quality of life 4 as well as with comorbidities, such as obesity, depression, metabolic syndrome, cardiovascular disease, and malignancies. 5 Together, increased health care utilization and time lost from work create an additional financial burden for patients.
Over the past decade, as knowledge of the pathogenesis of psoriasis has increased, treatments directed at specific components of the immune system have been developed. Although biologics are more expensive than other forms of therapy by about US$30,000 per patient per year, 6 they may indirectly lessen costs for some patients by reducing the need for, or length of, hospitalization, 7 and by increasing productivity and reducing work limitations. 8 Although greater patient satisfaction has been reported with the use of biologics, 9 psoriasis remains incurable and potentially debilitating in severe cases.
Development of new biologics is favored because traditional topical therapies, phototherapy, and systemic medications have been associated with patient frustration 10 and low compliance. 11,12 Furthermore, topical treatments and phototherapy do not improve joint damage that is ongoing in psoriatic arthritis, and traditional systemic agents can cause long-term organ damage, such as pulmonary fibrosis and cirrhosis in patients on methotrexate. Psoriatic patients on biologics show greater improvement than do patients on topicals, phototherapy, or conventional systemic agents, and both patients and their dermatologists express greater satisfaction with biologics. 13 Ixekizumab (LY2439821), a promising new humanized IgG4 anti-IL-17 monoclonal antibody, is now undergoing Phase II testing. 14 The aim of this article is to review the findings of ixekizumab testing thus far and to comment on its potential role in the treatment of moderate-to-severe plaque psoriasis alongside four currently approved biological agents -adalimumab, etanercept, infliximab, and ustekinumab.
Methods
Relevant articles were selected from the PubMed database using individual or combinations of search terms such as: ixekizumab, LY2439821, IL-17, plaque psoriasis, psoriasis vulgaris, comorbidity, quality of life, epidemiology, cost, biologic, adalimumab, etanercept, infliximab, ustekinumab, brodalumab, adverse effect, and efficacy. Additional publications were gathered from the reference lists of identified articles and from related citations in PubMed. As of January 2013, two clinical trials have been identified. In addition, 37 other articles were reviewed and referenced.
Ixekizumab: mechanism of action
Recent progress in the understanding of the immune factors that drive pathogenic inflammation in psoriasis has directed our attention to type 17 helper T (Th17) cells and IL-17 as targets for therapy. 15,16 Th17 cells secrete proinflammatory cytokines, including IL-17A (IL-17), and have been found in the dermis of psoriatic skin lesions. 17 In addition, higher levels of IL-17 have been associated with lesional (as opposed to perilesional) skin samples of patients with severe plaque psoriasis. 18 Cell-culture experiments suggest that IL-17 can directly activate over 40 genes in keratinocytes and that synergistic interaction with TNF-alpha can result in an even larger pool of inflammatory products. 19 A clinical trial evaluating the efficacy of a single dose of AIN457, an anti-IL-17 antibody, showed a 58% reduction in the Psoriasis Area and Severity Index (PASI) score relative to baseline, 20 further illustrating the potential role of IL-17 antagonists in the treatment of psoriasis.
Ixekizumab: phase I study
In a 20-week-long randomized, double-blind, placebo-controlled Phase I trial 21 evaluating the effects of neutralization of IL-17 on chronic moderate-to-severe plaque psoriasis, 40 subjects received 5, 15, 50, or 150 mg of subcutaneous ixekizumab or placebo at 0, 2, and 4 weeks. For each patient, punch biopsies were obtained from the same lesion at baseline, 2 weeks, and 6 weeks. Attenuation of the IL-17 pathway and improvements in disease biomarkers and clinical presentation were measured.
At 2 weeks, patients treated with ixekizumab had reduced keratinocyte proliferation, epidermal hyperplasia, dermal infiltration of T cells and dendritic cells, and keratinocyte expression of IL-17-regulated products (eg, cathelicidin, beta-defensin 2). In addition, there was a dose-dependent reduction in cytokine transcripts associated with activated Th1, Th17, and Th22 T cells as well as a reduction in IL-23. At 6 weeks, there was near normalization of skin in patients treated with 50-and 150-mg of ixekizumab but not in patients receiving placebo.
Clinical efficacy of ixekizumab was assessed using PASI 75. At 6 weeks, the proportion of patients who achieved a reduction in PASI score by at least 75% was significantly greater in the 15-mg, 50-mg, and 150-mg ixekizumab groups than in the placebo or 5-mg groups (Table 1). Significant differences were sustained through the 20-week trial. Ixekizumab was well tolerated, and there were no deaths or treatment-related adverse events during the study period.
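For readers unfamiliar with the endpoint, PASI 75 simply means a reduction of at least 75% from the baseline PASI score. The sketch below shows how such a responder proportion is computed; all scores are invented for illustration and are not trial data.

```python
# Minimal sketch of a PASI-75 response-rate calculation; numbers are invented.

def pasi_response(baseline, followup, threshold=0.75):
    """True if the PASI score fell by at least `threshold` (e.g. 75%)."""
    return (baseline - followup) / baseline >= threshold

baseline = [14.2, 18.5, 12.0, 21.3, 16.8]
week12   = [ 2.1,  3.9,  4.5,  1.0,  3.2]
responders = sum(pasi_response(b, f) for b, f in zip(baseline, week12))
print(f"PASI 75: {responders}/{len(baseline)} patients")  # -> 4/5
```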
In addition to rapid blockade of the IL-17 pathway, ixekizumab significantly suppressed 765 disease-related genes in week 2 biopsy specimens in the 150-mg group but not the placebo group. In contrast, the TNF-alpha antagonist etanercept was shown to modulate only 200 genes by two weeks. 23 Furthermore, 643 of 1200 genes usually upregulated in psoriatic lesions were found to be normalized after 2 weeks of treatment with ixekizumab compared to only 104 genes after 2 weeks of treatment with etanercept. Normalization of many of these genes was found to suppress Th1-, Th22-, and Th17-associated cytokines as well as downstream pathways active in psoriasis. Meta-analysis further showed that ixekizumab is more effective than etanercept at reducing keratin 16 and beta-defensin 4 mRNA, epidermal thickness, and keratinocyte proliferation. Ixekizumab had the greatest influence on genes regulated by IL-17 alone and was more effective than etanercept at modulating genes co-regulated by IL-17 and TNF-alpha. 23
Ixekizumab: phase II study
Selected study patients were at least 18 years old with a history of chronic moderate-to-severe plaque psoriasis of at least 6 months, psoriasis involving at least 10% BSA, a PASI score of at least 12, and a static Physician's Global Assessment (sPGA) score of at least 3. Patients with nonplaque psoriasis, a significant flare of psoriasis within 12 weeks before randomization, an active infection within 5 days before administration of ixekizumab, a recent infection necessitating hospitalization or antibiotics, receipt of conventional systemic psoriasis therapy or phototherapy within 4 weeks or topical psoriasis therapy within 2 weeks before randomization, or recent use of any biologic agent were excluded from the study. During the study, patients were allowed to use topical moisturizers, bath oils, oatmeal baths, and topical salicylic acid preparations as needed. Use of weak topical steroids was limited to the face, axillae, or genitalia. All topical agents were discontinued 24 hours prior to PASI assessments.
The primary end point was the proportion of patients with a reduction in the PASI score by at least 75% at 12 weeks. Secondary end points included the proportion of patients who achieved a reduction in PASI score by at least 90% and 100%, the static Physician's Global Assessment (sPGA) score, the joint-pain Visual Analogue Scale (VAS), the Nail Psoriasis Severity Index (NAPSI), the Psoriasis Scalp Severity Index (PSSI), and patient-reported itch VAS and Dermatology Life Quality Index (DLQI) scores.
Results showed that at 12 weeks, the proportion of patients who achieved PASI 75 or PASI 90 was significantly greater in the 25-mg, 75-mg, and 150-mg ixekizumab groups than in the placebo group (Table 2). In addition, PASI 100 was significantly greater in the 150-mg and 75-mg ixekizumab groups than in the placebo group. Significantly more patients in the 25-mg, 75-mg, and 150-mg ixekizumab groups had an sPGA score of 0 (clear of disease) or 1 (minimal disease) than in the placebo group at week 12. Significant differences between the 150-mg group and the placebo group in PASI 75 and sPGA were identified as early as 2 weeks and were sustained through 20 weeks. About 40% of patients in the 150-mg and 75-mg groups had complete clearance of psoriasis plaques (PASI 100 or sPGA score of 0) at 12 weeks.
Among patients with scalp psoriasis, significant reductions in the PSSI score (P ≤ 0.01 versus placebo) were observed at 12 weeks in the 25-mg (−87.1 ± 23.6), 75-mg (−94.8 ± 14.5), and 150-mg (−84.8 ± 41.5) ixekizumab groups. Similarly, among patients with nail psoriasis, significant reductions in the NAPSI score (P < 0.05 versus placebo) were observed at 2 weeks in the 75-mg (−57.1 ± 36.7) and 150-mg (−49.3 ± 35.9) ixekizumab groups. Adverse events (eg, nasopharyngitis, upper respiratory infection, injection-site reaction, headache) occurred equally (63%) in both the combined ixekizumab groups and the placebo group. There were no reports of serious adverse events (eg, major cardiovascular events, serious infections) or dose-related patterns in the frequency or severity of adverse events. However, four patients discontinued the study due to hypertriglyceridemia, peripheral edema, hypersensitivity, or urticaria. There were no sustained significant changes in liver enzyme levels in any ixekizumab group. Two ixekizumab patients developed grade 2 neutropenia without infection.
Discussion
Literature concerning the effects of ixekizumab on chronic moderate-to-severe plaque psoriasis is currently limited to two randomized, double-blind, placebo-controlled Phase I and Phase II trials involving 182 patients. Results of these studies show that ixekizumab, a humanized anti-IL-17 monoclonal antibody, improves both pathologic skin features and clinical symptoms of chronic moderate-to-severe plaque psoriasis. This suggests that IL-17 is an important driver of psoriasis pathogenesis. However, in order to establish long-term safety and efficacy of ixekizumab, additional trials following a greater number of patients for a longer amount of time are needed.
One concern is that blocking IL-17-mediated chemokine production -and consequently, neutrophil trafficking -may increase susceptibility to Klebsiella 24 and Candida 25 infections. In addition, potential formation of neutralizing antibodies could affect both initial response to and long-term efficacy of ixekizumab. 26,27 Biological agents currently used to treat moderate-to-severe plaque psoriasis are infliximab, adalimumab, etanercept, and ustekinumab. Infliximab, adalimumab, and etanercept inhibit TNF-alpha while ustekinumab inhibits IL-12 and IL-23. Based on indirect comparisons of primary endpoints, a meta-analysis of 20 short-term (10-16 weeks) trials has shown that infliximab 3-10 mg/kg has the highest predicted mean probability of response, followed by ustekinumab 90 mg every 12 weeks, ustekinumab 45-90 mg every 12 weeks, adalimumab 40 mg every 1 to 2 weeks, etanercept 50 mg twice weekly, and etanercept 25 mg twice weekly (Table 3). 28 Taken together, the PASI scores for established biologics are impressive, but there is still room for improvement. In addition, these results may not adequately reflect long-term effects of treatment, as different drugs may achieve maximal effect and lead to side effects at different rates. 29,30 It is difficult to compare the efficacy of ixekizumab against these other biological agents because only two small trials have been conducted thus far. However, based on PASI data obtained from the existing Phase I and Phase II trials, it appears that ixekizumab 150 mg, 75 mg, 50 mg, and 25 mg may be comparable to or more effective than infliximab or ustekinumab 90 mg. A greater understanding of ixekizumab's safety profile and recommended dosage is needed, though, before such conclusions can be drawn. Future trials may also consider the efficacy of combining low doses of ixekizumab and other therapeutic agents such that an optimal balance between reduction in disease severity and risk of side effects is achieved.
Although they are rapid-acting and highly effective, the TNF inhibitors infliximab, adalimumab, and etanercept are all associated with serious infections, autoimmune conditions, and lymphoma. 31 However, a recent study found that during the first year of treatment, the rate of success with anti-TNF therapy was several orders of magnitude greater than the likelihood of serious toxicity. 32 In contrast to the TNF antagonists, ustekinumab has not been associated with a significant risk of malignant neoplasm or infection. 33,34 Furthermore, it has been shown to benefit patients who have an inadequate response or contraindications to systemic therapies and anti-TNF biologics. 35,36 Like ustekinumab, ixekizumab has been shown in recent clinical trials to be fast-acting and well tolerated. Because it blocks a different component of the inflammatory pathway than the anti-TNF agents and ustekinumab, future studies of ixekizumab may reveal that it offers a risk-benefit ratio that is more beneficial to some patients. Although studies have suggested that etanercept and infliximab may potentially be effective in treating pediatric patients with severe recalcitrant psoriasis, the impact of currently marketed biologics on pregnant women and fetuses is unknown, and indications for use in the pediatric population are not well supported, 37 owing to an understandable avoidance of treating young patients with high-risk medications. However, researchers might consider these special populations as avenues for future research and as a potential way for ixekizumab to distinguish itself from the other biological agents. Future evaluations of the potential role of ixekizumab in the treatment of chronic moderate-to-severe plaque psoriasis should also take into consideration its cost-effectiveness, which may vary across countries, and associated level of patient satisfaction. In a recent Spanish study evaluating the cost-efficacy of adalimumab, etanercept, infliximab, and ustekinumab for moderate-to-severe plaque psoriasis, adalimumab at a dose of 40 mg every other week beginning 1 week after a loading dose of 80 mg was found to be most efficient in terms of cost per patient achieving PASI 75, while ustekinumab at a dose of 90 mg was found to be the least efficient (€8013 versus €17,981). 38 In contrast, a US study of etanercept, infliximab, and adalimumab found infliximab 3 mg/kg intravenous to be most efficient in terms of cost per patient achieving PASI 75, followed by infliximab 5 mg/kg and adalimumab 40 mg SQ every other week. 39 Factors that affect patient satisfaction include dosing schedule and mode of administration. Etanercept, adalimumab, and ustekinumab, which are administered subcutaneously, are easier to take than infliximab, which is given intravenously. In addition, ustekinumab offers the most convenient dosing schedule (every 12 weeks following initial injections at 0 and 4 weeks). It remains to be seen how ixekizumab will compare.
Disclosure
The authors have no conflicts of interest in this work. | 2017-07-11T18:49:09.152Z | 2013-03-14T00:00:00.000 | {
"year": 2013,
"sha1": "7de76b1b56dc91438ce24a581e12366055e2704a",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=15477",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "20ce1ba12c3f91346cde6ee9f41ac62a572b6bd6",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
96798312 | pes2o/s2orc | v3-fos-license | Transport in Suspended Monolayer and Bilayer Graphene Under Strain: A New Platform for Material Studies
We develop two types of graphene devices based on nanoelectromechanical systems (NEMS) that allow transport measurements in the presence of in situ strain modulation. Different mobility and conductance responses to strain were observed for single layer and bilayer samples. These types of devices can be extended to other 2D membranes such as MoS2, enabling transport, optical, or other measurements under in situ strain.
In this letter we present transport measurements of suspended monolayer graphene (MLG) and bilayer graphene (BLG) nanoelectromechanical (NEM) devices, which allow in situ modification of strain up to 5%. We study the device behavior before and after repeated straining cycles. For MLG devices, the two-terminal conductance G vs. gate voltage Vg curve becomes smoother, and the minimum conductance shows minimal change (<1%), in agreement with prior results [24]. For BLG, the minimum conductance decreases by more than 10% and field effect mobility increases. The different behaviors between MLG and BLG devices may arise from the relative shear between the two layers in BLG, or the presence of stacking domains (e.g. AB-BA) whose boundaries are particularly susceptible to strain. Our results underscore the rich interplay between strain and transport offered by suspended devices. Furthermore, these types of NEMS devices are compatible with optical measurements and can be used to study other two-dimensional materials.
Experimental section
2.1 Device fabrication: Graphene sheets were extracted from bulk graphite using standard mechanical exfoliation techniques on top of SiO2/Si substrates or a layer of the LOR resist. The number of layers was initially identified via optical microscopy and subsequently confirmed with Raman spectroscopy after completion of transport measurements (Figure 1a). To perform transport measurements and in situ stretching, we fabricated nano-electromechanical system (NEMS)-based graphene devices using two different techniques. In Method A, devices were fabricated with multi-level lithography based on a resist stack consisting of an LOR layer on top of a PMMA layer. The detailed fabrication process is described in our previous work [27] (Figure 1b). Devices thus fabricated have relatively large areas, and the graphene is "held" up by electrodes that are suspended above the SiO2/Si substrates (Figure 1c). The central electrode was designed to be wider and shorter than the neighboring electrodes, so that it can sustain higher actuating voltages. To avoid electron-beam-induced damage during imaging, all SEM characterizations were done on devices after finishing all the transport measurements, or on "SEM-imaging-only" devices.
Results and discussion
Assuming the suspended graphene sheet of initial length L0 is stretched along the hypotenuse of the deflected geometry, the induced strain can be estimated as ε ≈ (√(L0² + h²) − L0)/L0, where h denotes the maximum vertical deflection of the suspended electrode under the electrostatic force and L0 indicates the initial length of the suspended graphene sample.
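A minimal numeric sketch of this estimate, under the stretched-hypotenuse assumption stated above; the initial length L0 = 1 μm is a placeholder value, not a measured device dimension.

```python
import numpy as np

def strain(h, L0):
    # Strain of a sheet of initial length L0 whose far end is deflected
    # vertically by h (stretched-hypotenuse geometry, an assumption).
    return (np.hypot(L0, h) - L0) / L0

L0 = 1.0  # um, hypothetical initial suspended length
for h in (0.1, 0.2, 0.3):  # um, example deflections
    print(f"h = {h:.1f} um -> strain = {100 * strain(h, L0):.1f}%")
```

With these placeholder numbers, deflections of a few hundred nanometers already give strains of a few percent, consistent with the ~5% maximum quoted above.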
We estimate that at maximum load, up to 5% strain can be induced in the graphene sheets, which is released when Vg is reduced to 0. We note that when the measurement is repeated on a control device with the same geometry but without the graphene flake, the suspended electrode collapses at a much smaller voltage, Vg ~ 7 V. Since the electrostatic force is proportional to Vg^2, we estimate that at Vg = 30 V, ~95% of the electrostatic force is exerted on the graphene sheet. Figure 2d shows another device before and upon applying Vg ~ 100 V. Periodic ripples appear in the graphene sheet afterwards, arising from the induced longitudinal strain [17]. Figure 2g displays the same device when Vg is returned to 0 V, and the suspended electrode returns to its original height. To avoid collapsing the samples, we typically limit the actuating voltage to less than 60 V.
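One way to see where the ~95% estimate comes from: the bare electrode collapses near Vg ≈ 7 V, so its own restoring force can balance at most the electrostatic load at that voltage, which scales as 7² = 49 (in arbitrary units). At Vg = 30 V the total load scales as 30² = 900, so the electrode can carry at most 49/900 ≈ 5% of the force, leaving roughly 95% to be borne by the graphene sheet.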
To perform transport measurements, the devices are cooled down to 4.2K in vacuum.
Current annealing was applied to remove contaminants on the graphene sheet. The devices are first characterized by measuring their conductance G as a function of Vg (Figure 3a, red curve); here Vg is limited to <±10 V, so that strain is negligible. All devices show repeatable G(Vg) curves over such a small Vg range.
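For reference, the field-effect mobility of such devices is commonly extracted from the slope of the measured G(Vg) trace via μ = (L/(W·Cg))·dG/dVg. The sketch below illustrates the procedure on an invented V-shaped curve; the channel geometry and gate capacitance are placeholder values, not those of the devices studied here.

```python
import numpy as np

L, W = 2e-6, 1e-6   # channel length and width (m), hypothetical values
C_g = 5e-5          # gate capacitance per unit area (F/m^2), hypothetical

Vg = np.linspace(-10, 10, 201)        # gate voltage (V)
G = 1e-4 + 2e-5 * np.abs(Vg)          # toy V-shaped conductance curve (S)

mu = (L / (W * C_g)) * np.gradient(G, Vg)   # field-effect mobility (m^2/Vs)
print(f"electron-side mobility ~ {mu[-1] * 1e4:.0f} cm^2/(V s)")
```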
After extracting data from the initial state, we start stretching the sample by gradually ramping the actuating voltage up to -50 V. Figure 3b shows how the conductance changes with elapsed time while the actuating voltage is maintained at 50 V. The conductance fluctuates noticeably and decreases by more than 20 μS (~1%). We note that this effect cannot be explained by the changing capacitance between graphene and the gate: at the strained position, the device has stronger coupling to the gate and thus should give rise to a higher conductance value. Thus the modulation in conductance must be induced by movement of the electrode itself, e.g. strain and/or changes at the graphene-electrode interface. Compared with single layer samples, bilayer devices behave quite differently. Figure 4a shows the G(Vg) curves before and after stretching from a typical bilayer device.
After release from external strain, the curve becomes steeper and smoother, and the mobility improves. Interestingly, the minimum conductance decreases considerably.
This can also be seen in the I-V curves, which are more non-linear after stretching (Figure 4b). Typically, after the stretching process, the conductance of bilayer devices decreases by 10%~15%. After several stretching cycles, the device's G(Vg) becomes stable (Figure 4c), with improved mobility and a lower minimum conductance. The device shows no appreciable change in appearance after the stretching cycles (Figure 4d). These intriguing observations suggest the rich interplay between strain and transport offered by suspended devices. The improvement in device mobility likely arises from releasing the strain or ripples that are built in during the fabrication process. The different behaviors between single layer and bilayer devices are particularly intriguing, e.g. the significant decrease in minimum conductance is unique to bilayer devices. A possible explanation is improved contact at the electrode-graphene interface; however, one expects that this scenario should occur in single-layer devices as well. We also exclude strain-induced cracks, which should occur at much higher strain [31,32] and would also lead to lower mobility. Our present proposal is that the decrease in minimum conductance may be caused by relative shift and/or shear between the two layers induced by the stretching cycles, [33][34][35] or by the presence of AB-BA stacking domains whose boundaries may shift in response to strain. [36,37] This hypothesis can be verified by low temperature transport measurements, as the modified band structure is expected to lead to a reduced density of states and a different Landau level spectrum than that of an AB-stacked bilayer graphene.
Conclusion
In conclusion, we developed two types of NEMS-like devices to stretch suspended single-crystal graphene samples and perform in situ measurements. The stretching process can be observed via SEM imaging. Transport measurements show that after the stretching process, the gate response of the conductance of graphene samples improved, and a dramatic decrease in minimum conductance is observed in bilayer graphene samples. The experimental system and method introduced in this work provide a new approach to strain engineering research.
Supporting Information
Supporting Information is available from the Elsevier website or from the authors. | 2019-04-06T13:06:36.715Z | 2013-08-06T00:00:00.000 | {
"year": 2013,
"sha1": "e2a82a670e8dbc58a0834adabfcd6581eeebf366",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1308.1182",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e68e76d003b7ed7fff0d6d2190d196b25ea45f59",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
6788060 | pes2o/s2orc | v3-fos-license | The Exceptional Jordan Eigenvalue Problem
We discuss the eigenvalue problem for 3x3 octonionic Hermitian matrices which is relevant to the Jordan formulation of quantum mechanics. In contrast to the eigenvalue problems considered in our previous work, all eigenvalues are real and solve the usual characteristic equation. We give an elementary construction of the corresponding eigenmatrices, and we further speculate on a possible application to particle physics.
Introduction
In previous work [1,2,3] we considered both the left and right eigenvalue problems for 2 × 2 and 3 × 3 octonionic Hermitian matrices, given explicitly by

A v = λ v (1)

and

A v = v λ (2)

respectively. We showed in [1] that the left eigenvalue problem admits nonreal eigenvalues over both the quaternions H and the octonions O, while the right eigenvalue problem admits nonreal eigenvalues only over O. Some of the intriguing properties of the eigenvectors corresponding to these nonreal eigenvalues were considered in [3], and in [4,5] we discussed possible applications to physics, including the remarkable fact that simultaneous eigenvectors of all 3 angular momentum operators exist in this context.
However, the main result in [1] concerned real eigenvalues in the 3 × 3 octonionic case. For this case, there are 6, rather than 3, real eigenvalues [6]. We showed that these come in 2 independent families, each consisting of 3 real eigenvalues which satisfy a modified characteristic equation rather than the usual one. Furthermore, the corresponding eigenvectors are not orthogonal in the usual sense, but do satisfy a generalized notion of orthogonality (see also [2,7]). Finally, all such matrices admit a decomposition in terms of (the "squares" of) orthonormal eigenvectors. However, due to associativity problems, these matrices are not idempotents (matrices which square to themselves).
It is the purpose of this paper to describe a related eigenvalue problem for 3 × 3 Hermitian octonionic matrices which does have the standard properties: There are 3 real eigenvalues, which solve the usual characteristic equation, and which lead to a decomposition in terms of orthogonal "eigenvectors" which are indeed (primitive) idempotents. This is accomplished by considering the eigenmatrix problem

A • V = λ V (3)

where V is itself an octonionic Hermitian matrix and • denotes the Jordan product [8,9]

A • B = ½ (AB + BA) (4)

which is commutative but not associative. We further restrict V to be a (primitive) idempotent; as discussed below, this ensures that the Jordan eigenvalue problem (3) reduces to the traditional eigenvalue problem (2) in the non-octonionic cases. The exceptional Jordan algebra of 3 × 3 octonionic Hermitian matrices under the Jordan product, now known as the Albert algebra, was extensively studied by Freudenthal [10,11,12], 1 and is well-known to mathematicians [13,14,15,16]. In particular, the existence of a decomposition in terms of orthogonal idempotents, and its relationship to the eigenvalue problem (3), was shown already in [9]. Furthermore, since any Jordan matrix can be diagonalized by an F4 transformation [11], and since F4 is the automorphism group of the Jordan product [17], the eigenmatrix problem (3) is easily solved in theory. However, we are not aware of an elementary treatment along the lines presented here.
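As an illustration (not part of the paper), the commutativity and non-associativity of the Jordan product are easy to check numerically: the sketch below builds octonions via the Cayley-Dickson construction and tests random 3 × 3 octonionic Hermitian matrices.

```python
import numpy as np

def cd_mult(x, y):
    """Cayley-Dickson product for arrays of length 1, 2, 4, or 8."""
    n = len(x)
    if n == 1:
        return x * y
    a, b, c, d = x[:n//2], x[n//2:], y[:n//2], y[n//2:]
    conj = lambda z: np.concatenate(([z[0]], -z[1:]))
    return np.concatenate((cd_mult(a, c) - cd_mult(conj(d), b),
                           cd_mult(d, a) + cd_mult(b, conj(c))))

def omat_mul(A, B):
    """Product of 3x3 octonionic matrices stored as shape (3, 3, 8)."""
    C = np.zeros((3, 3, 8))
    for i in range(3):
        for j in range(3):
            for k in range(3):
                C[i, j] += cd_mult(A[i, k], B[k, j])
    return C

def jordan(A, B):
    return 0.5 * (omat_mul(A, B) + omat_mul(B, A))

def herm_conj(A):
    B = np.transpose(A, (1, 0, 2)).copy()
    B[:, :, 1:] *= -1   # octonionic conjugation of each entry
    return B

def rand_hermitian(rng):
    A = rng.standard_normal((3, 3, 8))
    return 0.5 * (A + herm_conj(A))

rng = np.random.default_rng(0)
A, B, C = (rand_hermitian(rng) for _ in range(3))
print(np.allclose(jordan(A, B), jordan(B, A)))                        # True
print(np.allclose(jordan(jordan(A, B), C), jordan(A, jordan(B, C))))  # False
```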
Our motivation for studying this problem is the well-known fact that the Albert algebra is the only exceptional realization of the Jordan formulation of quantum mechanics [8,9,18,19]; over an associative division algebra, the Jordan formalism reduces to standard quantum mechanics. Furthermore, the 4 division algebras R, C, H, and O are fundamentally associated with the Killing/Cartan classification of Lie algebras -corresponding to physical symmetry groups -into orthogonal, unitary, symplectic, and exceptional types. This most exceptional quantum mechanical system over the most exceptional division algebra provides an intriguing framework to study the basic symmetries of nature.
We begin by summarizing the properties of the Albert algebra in Section 2. In order to make our work accessible to a wider audience, we first motivate our subsequent computation by briefly reviewing the Jordan formulation of quantum mechanics in Section 3, before presenting the mathematical details of the eigenvalue results in Section 4. In Section 5, we include a brief but suggestive discussion of possible applications, such as its relevance for our recent work on dimensional reduction [4,5]. Finally, in the Appendix, we show explicitly how to diagonalize a generic Jordan matrix using F4 transformations.
1 Freudenthal's early work on this topic was originally distributed in German in mimeographed form [10], parts of which were later summarized in [11], which we henceforth cite. Many of these results can also be found in English in [12].
The Albert Algebra
We consider the Albert algebra consisting of 3 × 3 octonionic Hermitian matrices, which we will call Jordan matrices. 2 The Jordan product (4) of two such matrices is commutative but not associative. We have in particular that

A • A = A² (5)

and we define

A³ = A² • A (6)

which differs from the cube of A using ordinary matrix multiplication. Other operations on Jordan matrices are the trace, denoted as usual by tr(A), and the Freudenthal product [11]

A ∗ B = A • B − ½ (A tr(B) + B tr(A)) + ½ (tr(A) tr(B) − tr(A • B)) I (7)

where I denotes the identity matrix, and with the important special case

A ∗ A = A² − A tr(A) + σ(A) I (8)

where

σ(A) = ½ ((tr A)² − tr(A²)) = tr(A ∗ A) (9)

There is also trace reversal

Ã = A − (tr A) I (10)

and, finally, the determinant

det(A) = ⅓ tr((A ∗ A) • A) (11)

which can equivalently be defined by

(A ∗ A) • A = det(A) I (12)

Expanding (12) using (8), we obtain the remarkable result that Jordan matrices satisfy the usual characteristic equation [11]

A³ − (tr A) A² + σ(A) A − det(A) I = 0 (13)

Explicitly, a Jordan matrix can be written as

    ( p   a   b̄ )
A = ( ā   m   c )   (14)
    ( b   c̄   n )

with p, m, n ∈ R and a, b, c ∈ O, where the bar denotes octonionic conjugation. The definitions above then take the concrete form

tr(A) = p + m + n
σ(A) = pm + pn + mn − |a|² − |b|² − |c|²   (15)
det(A) = pmn + 2 Re(acb) − n|a|² − m|b|² − p|c|²

The Cayley plane, also called the Moufang plane, consists of those Jordan matrices V which satisfy the restriction [12,15]

V • V = V,  tr(V) = 1 (16)

We will see below that elements of the Cayley plane correspond to projection operators in the Jordan formulation of quantum mechanics. As shown in [15], the conditions (16) force the components of V to lie in a quaternionic subalgebra of O (which depends on V). Basic (associative) linear algebra then shows that each element of the Cayley plane is a primitive idempotent (an idempotent which is not the sum of other idempotents), and can be written as

V = vv† (17)

where v is a 3-component octonionic column vector, whose components lie in the quaternionic subalgebra determined by V, and which is normalized by

v†v = 1 (18)

Note that v is unique up to a quaternionic phase. Furthermore, using (8) and its trace (9), it is straightforward to show that, for any Jordan matrix B,

B ∗ B = 0 (19)

agrees with (16) up to normalization, and is therefore the condition that ±B can be written in the form (17) (without the restriction (18)). Note further that for any Jordan matrix satisfying (19), the normalization tr B can only be zero if v, and hence B itself, is zero, so that

B ∗ B = 0 = tr B ⟺ B = 0 (20)

since the converse is obvious. We will need the following useful identities for Jordan matrices, which can be verified by direct computation:

(A ∗ A) ∗ (A ∗ A) = det(A) A (21)
tr((A • B) • C) = tr(A • (B • C)) (22)

Finally, we also have the remarkable fact that

A ∗ A = 0 = B ∗ B ⟹ (A ∗ B) ∗ (A ∗ B) = 0 (23)

which follows by polarizing (21), 3 and which ensures that the set of Jordan matrices satisfying (19), consisting of all real multiples of elements of the Cayley plane, is closed under the Freudenthal product. Before proceeding further it is illuminating to consider the restriction to real column vectors. If u, v, w ∈ R³, then

tr((uu†) • (vv†)) = (u · v)² (24)

where · denotes the usual dot product (and where the Hermitian conjugate of a real matrix is of course just its transpose). We also have

(uu†) ∗ (vv†) = ½ (u × v)(u × v)† (25)

where × denotes the usual cross product. We can therefore view the Jordan product as a generalization of the (square of the) dot product, and the Freudenthal product as a generalization of the (square of the) cross product. This somewhat simplified perspective is nevertheless extremely useful in grasping the essential content of the corresponding octonionic manipulations. For instance, the linear independence of (real) u, v, w is given by the condition

det(Q) ≠ 0 (26)

where Q is the matrix whose columns are the vectors u, v, w. Note that

QQ† = uu† + vv† + ww† (27)

and of course det(QQ†) = |det(Q)|² (28). But using the definition (11) for real u, v, w leads to the identity

det(uu† + vv† + ww†) = (u · (v × w))² (29)

which not only emphasizes the role played by the determinant in determining linear independence, but also makes plausible the cyclic nature of the trace of the triple product obtained by polarizing (11).
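Reusing the helpers from the previous sketch (cd_mult, omat_mul, jordan, herm_conj, rand_hermitian), the characteristic equation (13) can be spot-checked numerically, with σ(A) and det(A) computed from the trace formulas above; the residual should be at machine-precision scale if the identity holds.

```python
def otrace(A):
    # Diagonal entries of a Hermitian matrix are real (component 0).
    return A[0, 0, 0] + A[1, 1, 0] + A[2, 2, 0]

def freudenthal(A, B):
    I = np.zeros((3, 3, 8))
    I[0, 0, 0] = I[1, 1, 0] = I[2, 2, 0] = 1.0
    return (jordan(A, B) - 0.5 * (A * otrace(B) + B * otrace(A))
            + 0.5 * (otrace(A) * otrace(B) - otrace(jordan(A, B))) * I)

I = np.zeros((3, 3, 8))
I[0, 0, 0] = I[1, 1, 0] = I[2, 2, 0] = 1.0
A = rand_hermitian(np.random.default_rng(1))
A2 = jordan(A, A)
A3 = jordan(A2, A)
sigma = 0.5 * (otrace(A)**2 - otrace(A2))
det = otrace(jordan(freudenthal(A, A), A)) / 3.0
residual = A3 - otrace(A) * A2 + sigma * A - det * I
print(np.abs(residual).max())   # tiny (machine precision) in practice
```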
The Jordan Formulation of Quantum Mechanics
In the Dirac formulation of quantum mechanics, a quantum mechanical state is represented by a complex vector v, often written as |v⟩, which is usually normalized such that v†v = 1.
In the Jordan formulation [8,9,19], the same state is instead represented by the Hermitian matrix vv†, also written as |v⟩⟨v|, which squares to itself and has trace 1 (compare (16)). The matrix vv† is thus the projection operator for the state v, which can also be viewed as a pure state in the density matrix formulation of quantum mechanics. Note that the phase freedom in v is no longer present in vv†, which is uniquely determined by the state (and the normalization condition). A fundamental object in the Dirac formalism is the probability amplitude v†w, or ⟨v|w⟩, which is not however measurable; it is the squared norm |⟨v|w⟩|² = ⟨v|w⟩⟨w|v⟩ of the probability amplitude which yields the measurable transition probabilities. One of the basic observations which leads to the Jordan formalism is that these transition probabilities can be expressed entirely in terms of the Jordan product of projection operators, since

|⟨v|w⟩|² = tr((vv†) • (ww†)) (30)

A similar but less obvious translation scheme also exists [19] for transition probabilities of the form |⟨v|A|w⟩|², where A is a Hermitian matrix, corresponding (in both formalisms) to an observable, so that all measurable quantities in the Dirac formalism can be expressed in the Jordan formalism. So far, we have assumed that the state vector v and the observable A are complex. But the Jordan formulation of quantum mechanics uses only the Jordan identity for 2 observables (Hermitian matrices) A and B

(A • A) • (B • A) = ((A • A) • B) • A (31)

As shown in [9], the Jordan identity (31) is equivalent to power associativity, which ensures that arbitrary powers of Jordan matrices -and hence of quantum mechanical observables -are well-defined. The Jordan identity (31) is the defining property of a Jordan algebra [8], and is clearly satisfied if the operator algebra is associative, which will be the case if the elements of the Hermitian matrices A, B themselves lie in an associative algebra. Remarkably, the only further possibility is the Albert algebra of 3 × 3 octonionic Hermitian matrices [9,18]. 4 In what follows we will restrict our attention to this exceptional case.
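In the associative (complex) case this translation is easy to verify numerically; the following sketch (illustrative only) checks that tr((vv†) • (ww†)) reproduces |⟨v|w⟩|² for random normalized states.

```python
import numpy as np

rng = np.random.default_rng(2)

def state(n=3):
    v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    return v / np.linalg.norm(v)

v, w = state(), state()
Pv, Pw = np.outer(v, v.conj()), np.outer(w, w.conj())
jp = 0.5 * (Pv @ Pw + Pw @ Pv)                 # Jordan product of projectors
print(np.allclose(np.trace(jp).real, abs(v.conj() @ w) ** 2))   # True
```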
The Jordan Eigenvalue Problem
Consider finally the eigenmatrix problem (3). Note first of all that since A and V are Jordan matrices, the left-hand-side is Hermitian, which forces λ to be real.
Suppose first that A is diagonal. Then the diagonal elements p, m, n are clearly eigenvalues, with obvious diagonal eigenmatrices. But there are also other "eigenvalues", namely the averages (p + m)/2, (m + n)/2, (n + p)/2. However, the corresponding eigenmatrices -which are related to Peirce decompositions [13,14] -have only zeros on the diagonal. Thus, by (20), they can not satisfy (16), and hence can not be written in the form (17). To exclude this case, we therefore restrict V in (3) to the Cayley plane (16), which ensures that the eigenmatrices V are primitive idempotents; they really do correspond to "eigenvectors" v. Recall that this forces the components of V to lie in a quaternionic subalgebra of O (which depends on V) even though the components of A may not.
Next consider the characteristic equation

det(A − λ I) = 0 (32)

It is not at first obvious that all solutions λ of (32) are real. To see that this is indeed the case, we note that A can be rewritten as a 24 × 24 real symmetric matrix, whose eigenvalues are of course real. However, as discussed in [1], these latter eigenvalues do not satisfy the characteristic equation (32)! Rather, they satisfy a modified characteristic equation of the form

det(A − λ I) + r = 0 (33)

where r is either of the roots of a quadratic equation which depends on A. As shown explicitly using Mathematica in Figure 5 of [2], not only are these roots real, but they have opposite signs (or at least one is zero). But, as can be seen immediately using elementary graphing techniques, if the cubic equation (33) has 3 real roots for both a positive and a negative value of r, it also has 3 real roots for all values of r in between, including r = 0. This shows that (32) does indeed have 3 real roots. Alternatively, since F4 preserves both the determinant and the trace (and therefore also σ) [11,15], it leaves the characteristic equation invariant. Since F4 can be used to diagonalize A [11,15], and since the resulting diagonal elements clearly satisfy the characteristic equation, we have another, indirect, proof that the characteristic equation has 3 real roots. Furthermore, this shows that these roots correspond precisely to the 3 real eigenvalues whose eigenmatrices lie in the Cayley plane. We therefore reserve the word "eigenvalue" for the 3 solutions of the characteristic equation (32), explicitly excluding their averages. The above argument shows that these correspond to solutions V of (3) which lie in the Cayley plane; we will verify this explicitly below. Restricting the eigenvalues in this way corresponds to the traditional eigenvalue problem in the following sense. If A, v ≠ 0 lie in a quaternionic subalgebra of the octonions, then the Jordan eigenvalue problem (3) together with the restriction (16) becomes

A • (vv†) = λ vv† (34)

Multiplying (34) on the right by v and simplifying the result using the trace of (34) leads immediately to Av = λv (with λ ∈ R), that is, the Jordan eigenvalue equation implies the ordinary eigenvalue equation in this context. Since the converse is immediate, the Jordan eigenvalue problem (3) (with V restricted to the Cayley plane but A octonionic) is seen to be a reasonable generalization of the ordinary eigenvalue problem.
Restricting the eigenvalues in this way corresponds to the traditional eigenvalue problem in the following sense. If A, v = 0 lie in a quaternionic subalgebra of the octonions, then the Jordan eigenvalue problem (3) together with the restriction (16) becomes Multiplying (34) on the right by v and simplifying the result using the trace of (34) leads immediately to Av = λv (with λ ∈ R), that is, the Jordan eigenvalue equation implies the ordinary eigenvalue equation in this context. Since the converse is immediate, the Jordan eigenvalue problem (3) (with V restricted to the Cayley plane but A octonionic) is seen to be a reasonable generalization of the ordinary eigenvalue problem. We now show how to construct eigenmatrices V of (3), restricted to lie in the Cayley plane, and with real eigenvalues λ satisfying the characteristic equation Thus, setting so that Q λ is a solution of (3). Due to the identity (21), we have If Q λ = 0, we can renormalize Q λ by defining Each resulting P λ is in the Cayley plane, and is hence a primitive idempotent. Due to (38), we can write and we call v λ the (generalized) eigenvector of A with eigenvalue λ. Note that v λ does not in general satisfy either (1) or (2). Rather, we have Writing out all the terms and using (10) and (22), one computes directly that If λ, µ are solutions of the characteristic equation (32), then using (37) leads to If we now assume λ = µ and Q λ = 0 = Q µ , this shows that eigenmatrices corresponding to different eigenvalues are orthogonal in the sense where we have normalized the eigenmatrices. We now turn to the case Q λ = 0. We have first that Denoting the 3 real solutions of the characteristic equation (32) by λ, µ, ν, so that we then have But by (38) and (20), Q λ = 0 if and only if tr(Q λ ) = 0. Using (46) and (49), we therefore see that Q λ = 0 if and only if λ is a solution of (32) of multiplicity greater than 1. We will return to this case below.
Putting this all together, if there are no repeated solutions of the characteristic equation (32), then the eigenmatrix problem leads to the decomposition

A = λ P_λ + µ P_µ + ν P_ν (50)

in terms of orthogonal primitive idempotents, which expresses each Jordan matrix A as a sum of squares of quaternionic columns. 5 We emphasize that the components of the eigenmatrices P_λᵢ need not lie in the same quaternionic subalgebra, and that A is octonionic. Nonetheless, it is remarkable that A admits a decomposition in terms of matrices which are, individually, quaternionic. We now return to the case Q_λ = 0, corresponding to repeated eigenvalues. If λ is a solution of the characteristic equation (32) of multiplicity 3, then tr A = 3λ and σ(A) = 3λ². As shown in [1] in a different context, or using an argument along the lines of Footnote 5, this forces A = λ I, which has a trivial decomposition into orthonormal primitive idempotents. We are left with the case of multiplicity 2, corresponding to A ≠ λ I and Q_λ = 0.
Since Q_λ = 0, A − λ I is (up to normalization) in the Cayley plane, and we have

A = λ I ± ww† (51)

with the components of w in some quaternionic subalgebra of O. While ww† is indeed an eigenmatrix of A, it has eigenvalue µ = tr(A) − 2λ ≠ λ. However, it is straightforward to construct a vector v orthogonal to w in a suitable sense. For instance, if and only minor modifications are required to adapt this example to the general case. But (51) now implies that

A • (vv†) = λ vv† (54)

so that we have constructed an eigenmatrix of A with eigenvalue λ.
We can now perturb A slightly by adding ǫ vv † , thus changing the eigenvalue of vv † by ǫ. The resulting matrix will have 3 unequal eigenvalues, and hence admit a decomposition (50) in terms of orthogonal primitive idempotents. But these idempotents will also be eigenmatrices of A, and hence yield an orthogonal primitive idempotent decomposition of A. 6 In summary, decompositions analogous to (50) can also be found when there is a repeated eigenvalue, but the terms corresponding to the repeated eigenvalue can not be written in terms of the projections P λ , and of course the decomposition of the corresponding eigenspace is not unique. 7
Discussion
We have argued elsewhere [4,5] that the ordinary momentum-space (massless and massive) Dirac equation in 3 + 1 dimensions can be obtained via dimensional reduction from the Weyl (massless Dirac) equation in 9 + 1 dimensions. This latter equation can be written as the eigenvalue problem

P̃ ψ = 0 (59)

where P is a 2 × 2 octonionic Hermitian matrix corresponding to the 10-dimensional momentum and tilde again denotes trace reversal. The general solution of this equation is

ψ = θ ξ̄ (60)

where θ is a 2-component octonionic vector whose components lie in the same complex subalgebra of O as do those of P, and where ξ ∈ O is arbitrary. (Such a θ must exist since det(P) = 0.) It is then natural to introduce a 3-component formalism; this approach was used by Schray [21,22] for the superparticle. Defining

Ψ = ( θ )
    ( ξ )   (62)

6 More formally, with the above assumptions we have The Freudenthal square of (56) is zero by (23), which shows that det(A + ǫ vv† − λ I) = 0 by (21), so that λ is indeed an eigenvalue of the perturbed matrix A + ǫ vv†. Furthermore, (56) itself is not zero (unless v or w vanishes) since (54) implies that which shows that λ does not have multiplicity 2. 7 An invariant orthogonal idempotent decomposition when λ is an eigenvalue of multiplicity 2 is where the coefficient of µ = tr(A) − 2λ is the primitive idempotent corresponding to the other eigenvalue and the coefficient of λ is an idempotent but not primitive. An equivalent expression was given in [9].
we have first of all that

P := ΨΨ† = ( P    ψ   )
            ( ψ†  |ξ|² )   (63)

so that Ψ combines the bosonic and fermionic degrees of freedom. Lorentz transformations can be constructed by iterating ("nesting") transformations of the form [23] which can be elegantly combined into the transformation This in fact shows how to view SO(9, 1) as a subgroup of E6; the rotation subgroup SO(9) lies in F4. It turns out that the Dirac equation (59) is equivalent to the equation

P ∗ P = 0 (66)

which shows both that solutions of the Dirac equation correspond to the Cayley plane and that the Dirac equation admits E6 as a symmetry group. Using the particle interpretation from [4,5] then leads to the interpretation of (part of) the Cayley plane as representing 3 generations of leptons. The modern description of symmetries in nature is in terms of Lie algebras. For instance, one describes angular momentum by taking an infinitesimal rotation, regarding it as a self-adjoint operator, and studying the resulting eigenvalue problem. Thus, if A is the (self-adjoint version of the) infinitesimal rotation M, then the rotation (65) leads to the eigenvalue problem Aψ = λψ. But the infinitesimal form of (64) is essentially A • P, although in the octonionic case, it is not clear how best to make A self-adjoint. It thus seems natural to study the (3 × 3) Jordan eigenvalue problem associated with (66).
Finally, we refer to decompositions of the form (50) as p-square decompositions, where p is the number of nonzero eigenvalues, and hence the number of nonzero primitive idempotents in the decomposition. If det(A) ≠ 0, then A is a 3-square. If det(A) = 0 ≠ σ(A), then A is a 2-square. Finally, if det(A) = 0 = σ(A), then A is a 1-square (unless also tr(A) = 0, in which case A ≡ 0). It is intriguing that, since E6 preserves both the determinant and the condition σ(A) = 0, E6 therefore preserves the class of p-squares for each p. If, as argued above, 1-squares correspond to leptons, is it possible that 2-squares are mesons and 3-squares are baryons?
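The classification of p-squares by det(A) and σ(A) follows directly from the characteristic equation; the short derivation below is added for clarity, using the standard elementary symmetric functions of the (real) eigenvalues:

$$\det(A - \lambda I) = -\lambda^{3} + (\operatorname{tr} A)\,\lambda^{2} - \sigma(A)\,\lambda + \det A = 0,$$

with tr A = λ₁ + λ₂ + λ₃, σ(A) = λ₁λ₂ + λ₂λ₃ + λ₃λ₁ and det A = λ₁λ₂λ₃. Hence det A ≠ 0 precisely when all three eigenvalues are nonzero (a 3-square); det A = 0 ≠ σ(A) when exactly one eigenvalue vanishes (a 2-square); and det A = 0 = σ(A) when at least two vanish, leaving a 1-square unless tr A = 0 as well, in which case A ≡ 0.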
APPENDIX: Diagonalizing Jordan Matrices Using F4
We start with a Jordan matrix in the form (14), and show how to diagonalize it using nested F4 transformations. As discussed in [15], a set of generators for F4 can be obtained by considering its SO(9) subgroups, which in turn can be generated by 2 × 2 trace-free, Hermitian, octonionic matrices.
Just as for the traditional diagonalization procedure, it is first necessary to solve the characteristic equation for the eigenvalues. Let λ be a solution of (32), and let vv† ≠ 0 be a solution of (3) with eigenvalue λ (footnote 8). We assume further that the phase in v is chosen such that v = (x, y, r)ᵀ, where x, y ∈ O and r ∈ R. Define the corresponding nested transformations, where the normalization constants are given by N₁² = |x|² + r² and N₂² = N₁² + |y|² ≡ v†v ≠ 0. (If N₁ = 0, then A is already block diagonal.) It is straightforward to check that the transformed matrix is block diagonal, with the remaining block X a 2 × 2 octonionic Hermitian matrix (with z ∈ O and s, t ∈ R). The final step amounts to the diagonalization of X, which is easy. Let µ be any eigenvalue of X (which in fact means that it is another solution of (32)) and set the corresponding transformation, where N₃² = (µ − t)² + |z|². (If N₃ = 0, X is already diagonal.) This finally results in a diagonal matrix, and we have succeeded in diagonalizing A using F4 as claimed.
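For concreteness, the elided 2 × 2 step can be filled in as follows; this is a plausible reconstruction consistent with the normalization N₃ stated above, not a quotation of the original equations. Writing

$$X = \begin{pmatrix} s & z \\ \bar{z} & t \end{pmatrix},$$

the characteristic equation gives (s − µ)(t − µ) = |z|², from which one checks directly that

$$u = \frac{1}{N_3}\begin{pmatrix} \mu - t \\ \bar{z} \end{pmatrix}, \qquad N_3^{2} = (\mu - t)^{2} + |z|^{2},$$

satisfies Xu = µu; the remaining diagonal entry is then forced to be tr X − µ = s + t − µ, the other eigenvalue of X.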
"year": 1999,
"sha1": "c35a994d85cb4b8cc155a0f520ce68fe9c9c34e5",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "5819562d0429dc8b0506963e643332efc8e813d1",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
Structure and dynamics of semaglutide and taspoglutide bound GLP-1R-Gs complexes
The glucagon-like peptide-1 receptor (GLP-1R) regulates insulin secretion, carbohydrate metabolism and appetite, and is an important target for treatment of type II diabetes and obesity. Multiple GLP-1R agonists have entered clinical trials, with some, such as semaglutide, progressing to approval. Others, including taspoglutide, failed through a high incidence of side-effects or insufficient efficacy. GLP-1R agonists have a broad spectrum of signalling profiles, but molecular understanding of these is limited by a lack of structural information on how different agonists engage with the GLP-1R. In this study, we determined cryo-electron microscopy (cryo-EM) structures of GLP-1R-Gs protein complexes bound with semaglutide and taspoglutide. These revealed peptide binding modes similar to that previously observed for GLP-1. However, 3D variability analysis of the cryo-EM micrographs revealed different motions within the bound peptides and the receptor relative to when GLP-1 is bound. This work provides novel insights into the molecular determinants of peptide engagement with the GLP-1R.
INTRODUCTION
G protein-coupled receptors (GPCRs) mediate diverse physiological processes through selective modulation of cellular function and thus represent important drug targets (Hauser et al. 2017). The glucagon-like peptide-1 receptor (GLP-1R), a class B1 subfamily GPCR, signals primarily through the stimulatory G (Gs) protein. It mediates important physiological effects of the endogenous peptide, GLP-1, in regulating insulin secretion, carbohydrate metabolism and appetite, and thus is well established as a long-term clinical target for treating type II diabetes and obesity. Multiple GLP-1R peptide agonists that are used clinically or that have entered clinical trials display differential signalling profiles and different clinical efficacies or side-effect profiles (Sun et al. 2014, Koole et al. 2010, Jones et al. 2018, Fletcher et al. 2018, Pickford et al. 2020). Of note are two GLP-1 derived peptides, semaglutide and taspoglutide, that have distinct clinical profiles. Once-weekly semaglutide, approved by the U.S. Food and Drug Administration (FDA) in December 2017, provided superior glucose correction and weight loss in phase 3 clinical trials relative to the two most commonly used therapeutics, exenatide and liraglutide (Ahmann et al. 2018, O'Neil et al. 2018). In contrast, taspoglutide, while meeting its primary efficacy end point, failed in late phase 3 clinical trials due to intolerable side effects (Rosenstock et al. 2013). Notably, both peptides are analogues of human GLP-1 (7-36); semaglutide has two amino acid substitutions (Aib8, Arg34) and is derivatised at Lys26 with a C18 diacid via a γGlu-2xOEG linker, while taspoglutide is an Aib8 and Aib35 substituted analogue (Figure S1A). Their high sequence identity but distinctive clinical profiles make understanding the molecular details that underlie the differences in their mechanism of action important for future development of novel drugs targeting the GLP-1R.
Advances in cryo-electron microscopy (cryo-EM) have enabled structure determination of GPCRs coupled to heterotrimeric G proteins (Garcia-Nafria and Tate 2019), with recently solved cryo-EM structures of class B1 GPCRs achieving global resolutions of 2.5 Å or better (Dong et al. 2020). Cryo-EM can also capture ensembles of conformations present during vitrification, providing insight into the dynamic behaviour of proteins (Lau et al. 2018, Murata and Wolf 2018), a crucial factor in molecular understanding of agonist binding and GPCR activation. With large, high-resolution (sub-3 Å) data sets, conformational heterogeneity can be parsed into principal components by 3D variability analysis implemented in software packages such as cryoSPARC (Punjani et al. 2017, Punjani and Fleet 2020) to gain insight into the relative dynamics of specific ligand-GPCR complexes (Dong et al. 2020). In this study, we report the structures of GLP-1R-Gs complexes with semaglutide and taspoglutide at global resolutions of 2.5 Å. 3D variability analysis was used to reveal differences in the conformational dynamics of semaglutide- and taspoglutide-coupled GLP-1R-Gs complexes. The consensus and dynamic results were compared with the previously reported structure of the GLP-1-GLP-1R complex, providing new insights into engagement of the GLP-1R by distinct peptide agonists.
Structure determination
Human GLP-1R, dominant negative Gαs (DNGαs), Gβ1 and Gγ2 constructs were co-expressed in Trichoplusia ni (Tni) insect cells to facilitate G protein-coupled complex formation. Formation of GLP-1R-Gs complexes was initiated by addition of 10 µM of either semaglutide or taspoglutide. Stabilization of the complex was achieved by a combination of apyrase to remove guanine nucleotides, nanobody 35 (Nb35) that bridges the Gα-Gβ interface, and use of DNGαs. Purified semaglutide- and taspoglutide-GLP-1R-Gs complexes were resolved as monodisperse peaks by size exclusion chromatography (SEC) and contained all expected components, as confirmed by western blot and Coomassie-stained SDS-PAGE (Figures S1B and S1C).
Samples were vitrified and imaged on a 300 kV Titan Krios cryo-electron microscope, generating particles that exhibited high-resolution features following 2D classification (Figures S1B, S1C and S2).
These data were subsequently refined in 3D to yield consensus maps with global resolutions of 2.5 Å for both peptide-bound complexes at the gold-standard FSC = 0.143 criterion (Figures 1, S1B-C and S2). As there was only limited density for the α-helical domain (AHD) of the Gα subunit, this was masked out during the consensus map refinement.
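For readers unfamiliar with the resolution criterion, the sketch below shows how a gold-standard FSC curve is computed from two independently refined half-maps and where the 0.143 threshold is read off. It is a minimal NumPy illustration, not the implementation used by RELION or cryoSPARC, and the array and parameter names are placeholders.

```python
import numpy as np

def fsc_curve(half1, half2):
    """Fourier shell correlation between two cubic half-maps of equal shape."""
    n = half1.shape[0]
    f1, f2 = np.fft.fftn(half1), np.fft.fftn(half2)
    freq = np.fft.fftfreq(n)                      # spatial frequency, cycles/voxel
    fx, fy, fz = np.meshgrid(freq, freq, freq, indexing="ij")
    # Integer shell index of every Fourier voxel.
    shell = np.round(np.sqrt(fx**2 + fy**2 + fz**2) * n).astype(int).ravel()
    corr = np.bincount(shell, weights=(f1 * np.conj(f2)).real.ravel())
    den1 = np.bincount(shell, weights=(np.abs(f1) ** 2).ravel())
    den2 = np.bincount(shell, weights=(np.abs(f2) ** 2).ravel())
    return (corr / np.sqrt(den1 * den2 + 1e-30))[: n // 2]

def resolution_0143(curve, n, voxel_size):
    """Resolution (Angstrom) of the finest shell with FSC >= 0.143."""
    k = max(int(np.nonzero(curve >= 0.143)[0].max()), 1)
    return n * voxel_size / k
```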
In the consensus maps, the local resolution ranged from 2.4 Å to >4.0 Å for both complexes (Figures S1B-C), with the highest resolution observed for the G protein, receptor transmembrane domain (TMD) and peptide N-terminus. Lower resolution was observed for the extracellular and intracellular loops, the extracellular N-terminal domain (ECD) and the peptide C-terminus, likely due to greater flexibility in these domains. To improve the resolution of poorly resolved regions, focused 3D refinements were performed on the extracellular portion of the TMD and the ECD for both receptors, which dramatically improved resolution of the ECD, extracellular loops (ECLs) and peptide C-terminus. This enabled robust modelling of the majority of the complex, including confident assignment of most sidechain rotamers (Figure S3). However, residues K130-S136 of the TM1-ECD linker and M340-K342/T343 of ICL3 remained poorly resolved and were omitted from the final models. Likewise, there was no density for the receptor C-terminus beyond amino acid L422 8.63 in the C-terminal Helix 8 (H8), or for residues 24-28 within the far N-terminus of the mature protein, and these were not modelled.
While density for the Gαs AHD was poor for both complexes, a higher signal was observed in the taspoglutide data. As such, an additional focused refinement was performed on the G protein of this complex. This improved the resolution of the AHD enabling the backbone of this region to be modelled, although there was insufficient density to robustly model sidechains. In this map, the Gαs AHD was in a relatively closed orientation (Figures 1B and S1Cv).
Peptide binding to GLP-1R
Similar to most peptide agonists in active class B1 GPCR complex structures (Zhao et al. 2019, Dong et al. 2020, Qiao et al. 2020), both semaglutide and taspoglutide adopted a continuous helix with the C-terminus bound to the ECD and the N-terminus inserted deeply into the TM core (Figures 1 and S4). The N-terminus of GLP-1, particularly residues at positions 7, 8, 9, 10, 12, 13 and 15, as well as several residues at the C-terminus, are crucial for GLP-1 binding and/or activity (Adelhorst et al. 1994). Given the high sequence similarity between GLP-1 and the GLP-1 analogues (Figure 1A), it is perhaps not surprising that the packing of the receptor TM core, and the interactions made by the N-terminal half of the peptide with the receptor core, are highly conserved in the consensus structures (Figures 2 and S4).
Similar to GLP-1, His7 of semaglutide and taspoglutide formed extensive interactions with Q234 3.37, V237 3.40, W306 5.36, I309 5.39, R310 5.40 and I313 5.43 of GLP-1R via van der Waals forces, and a hydrogen bond with Q234 3.37, a highly conserved interaction site for class B1 peptides. Ala8 of GLP-1 is substituted with aminoisobutyric acid (Aib) in semaglutide and taspoglutide, engendering DPP-IV resistance while maintaining high GLP-1R activity (Deacon et al. 1998). In both structures, Aib8 formed hydrophobic interactions with L384 7.39 and E387 7.42, and this differed from the predominant interactions of Ala8 of GLP-1, which were with E387 7.42 and L388 7.43 (Figure 2). Consequently, alanine mutation of L384 7.39 had a greater impact on semaglutide- and taspoglutide-mediated cAMP production compared to GLP-1 (Figure S5). The backbone of Aib8 in semaglutide was involved in water-mediated hydrogen bond interactions with Y241 3.44, K383 7.38 and E387 7.42. This network was also observed for the backbone of Ala8 in GLP-1.
Although density for the water molecules in this network was not observed in the taspoglutide-bound structure, the almost identical rotamer placements of the three polar receptor residue sidechains suggested that similar interactions were likely to occur (Figure 2C). Glu9 formed hydrogen bonds with Y152 1.47 and R190 2.60, and Thr11 formed a hydrogen bond with D372 ECL3 in both structures (Figures 2A-B). Taken together, His7, Aib8, Glu9 and Thr11 of semaglutide and taspoglutide appear to have similar patterns of receptor interaction to those of GLP-1, stabilizing the peptide above the crucial central polar network (Wootten et al. 2013) of the TM core that is important for receptor activation.
As GLP-1 derived peptides, semaglutide and taspoglutide have multiple additional polar residues within the N-terminal helix that interact with the receptor core and participate in conserved hydrogen bond interactions for both peptides. Thr13 interacted with K197 2.67, while Asp15 interacted with R380 7.35 at the top of the TM7/ECL3 interface, which provides important coordination of the peptide with these domains, as evidenced by mutation of either receptor residue leading to substantial loss of peptide activity (Figure S5). ECL2, a key receptor domain involved in G protein coupling (Koole et al. 2012, Dods and Donnelly 2015), made a series of coordinated hydrogen bonds with Ser14 (N300 ECL2) and Ser17 (T298 ECL2, R299 ECL2), with the latter serine also involved in a hydrogen bond interaction with Y205 2.75 at the top of TM2 at the junction with ECL1. Glu21 formed hydrogen bonds with R299 ECL2 and with the far N-terminus of the receptor ECD to provide coordination of these domains, while Glu21 in taspoglutide also formed a polar interaction with Y205 2.75 that further linked the ECD and TMD. In addition to these polar interactions, there were extensive hydrophobic interactions between the receptor and each of the peptides; many of these interactions, such as those involving L141, are conserved across other class B1 GPCRs (Table 2). Four of these conserved residues (Y145 1.40, L201 2.71, M233 3.36 and L384 7.39) were randomly selected for alanine mutagenesis, and each individual mutant dramatically reduced the cAMP accumulation elicited by semaglutide or taspoglutide. With the exception of L384 7.39 A, these mutants had equivalent effects with GLP-1, suggesting that they play equivalent roles in different peptide-induced receptor activation (Figure S5).
As noted above, the C-terminus of the peptide was close to the ECD and ECL1 regions of GLP-1R, and the ECD-focused refinement provided sufficient resolution to model the interactions between them (Figure S4). Generally, the sidechain rotamer placements of the C-terminal halves of the peptides and their interactions with the receptor were similar for both structures (Figures 3, S4A and Table 2) and for our previously reported GLP-1 bound structure. ECL1 had limited direct interactions with the GLP-1 derived peptides, with the exception of W214 ECL1, which exhibited π-π stacking with Trp31 in all three structures (Figures 3, S4A and Table 2). The ECD formed extensive hydrophobic interactions with the C-terminal halves of the peptides, including W39 ECD, E68 ECD, Y69 ECD, Y88 ECD, L89 ECD, P90 ECD and W91 ECD. In addition, hydrogen bonds were formed between Glu21 and Ser31 ECD at the far N-terminus of the receptor, and between the backbone of Val33 and R121 ECD, in all three structures (Figures 2, 3, S4A and Table 2). The importance of these residues in peptide binding and signalling of GLP-1R is supported by previous alanine mutagenesis studies (Wilmen et al. 1997, Day et al. 2011, Underwood et al. 2010). The C-terminus of taspoglutide, beyond residue 34, extended the α-helical secondary structure of the peptide; however, for GLP-1 and semaglutide the helix terminated at residue 34, with the last two residues lacking secondary structure (Figure 3).
In semaglutide, Lys34 of GLP-1 is substituted with Arg; however, neither Arg34 of semaglutide nor Lys34 of GLP-1 or taspoglutide formed interactions with the receptor (Figure S4A and Table 2), in line with previous reports of the limited effect of the Arg34 substitution on GLP-1R binding and signalling (Lau et al. 2015). Unlike GLP-1 and taspoglutide, semaglutide is acylated on Lys26 with a γGlu-2xOEG linker and C18 fatty diacid moiety (Figure S1A) that enables enhanced binding to plasma albumin and an extended half-life in vivo (Lau et al. 2015). While this substitution was unresolved in the consensus map, density was evident for the modified Lys26 within the ECD-focused map.
Two distinct continuous densities could be observed that likely represent the most stable positions of the derivatised lysine (Figure S6A). Of the two, the density positioned down towards the top of TM1 was clearer than the alternative that extended up towards the receptor ECD. Nonetheless, only the Lys26-2xOEG moiety of the linker could be modelled in either orientation, before the density merged into the lower resolution regions of the detergent micelle or the receptor ECD (Figure S6B). Only the downward conformation of the Lys26-2xOEG moiety, which had the greater density, was included in the final model. The structure of the C-terminal half of semaglutide and the receptor ECD closely resembles the crystal structure of the semaglutide-bound GLP-1R ECD (PDB: 4ZGM) (Figure 3A). However, semaglutide Lys26 is not acylated in the ECD crystal structure (PDB: 4ZGM), and instead interacts with E128 ECD (Figure 3A); an equivalent interaction can be observed in the GLP-1 and taspoglutide bound receptor structures that also have an unmodified Lys26 (Figure 3B). In the semaglutide-bound structure, the acylated Lys26 did not form this interaction (Figure 3). Semaglutide maintains high affinity and potency, consistent with previous findings that substitution of Lys26 or mutation of E128 ECD had limited effect on receptor affinity (Lau et al. 2015).
Conformational comparison of different GLP-1 derived peptide-GLP-1R-Gs complexes
The GLP-1 and GLP-1 derived peptide complexes exhibited a similar global conformation for both the receptor and the Gs protein (Figure 4). The only notable distinction between taspoglutide- and GLP-1-bound GLP-1R in the consensus maps occurred within the top of TM2 and the ECL1 region, whereas this region exhibited an identical conformation in the semaglutide and GLP-1 complexes (Figures 4A-B). Interestingly, the TM2/ECL1 conformation of the taspoglutide-GLP-1R structure is very similar to the previously published ExP5-GLP-1R structure (PDB: 6B3J), despite the distinct amino acid sequences of taspoglutide and ExP5. Moreover, both peptides formed similar interactions with W214 ECL1 and H212 ECL1 of GLP-1R. This latter interaction was not observed in the GLP-1- and semaglutide-bound structures, with the H212 ECL1 sidechain located on the opposite side of ECL1 in these structures (Figure 4D).
With the exception of TM2/ECL1, the metastable conformation of the receptor extracellular side, including the ECD and ECLs, was identical between the GLP-1 and GLP-1 derived peptide complexes (Figure 4), and all three receptors exhibited a similar reorganisation of the polar networks below the peptide binding site and at the base of the receptor. These conformational changes facilitate the sharp kink at the centre of TM6 and a large outward movement of the intracellular half of TM6, characteristic of all class B1 GPCR active state structures. Likewise, the rest of the intracellular side of the GLP-1R displayed an identical conformation (Figure 4C) and equivalent interactions with Gs across the GLP-1 and GLP-1 derived peptide bound structures, including those interactions mediated by structural waters. Most of the observed interactions between the receptor and Gs protein were also consistent with other published GLP-1R-Gs structures, including those with small molecule agonists, which further supports a model of ligand-independent common interaction networks of the GLP-1R upon Gs protein coupling. The overall conformation of the G protein itself was also remarkably similar among the three structures (Figure 4E), including the structural waters within each subunit and at their interfaces. Moreover, despite the high mobility of the AHD of the Gα subunit, the metastable conformation resolved in the G protein-focused refinement of the taspoglutide-bound structure was similar to that previously described for the GLP-1 bound structure (Figure 4E).
3D variability analysis reveals distinct dynamic motions in different GLP-1 derived peptide complexes
As described above, the static consensus structures of taspoglutide-, semaglutide- and GLP-1-bound active GLP-1Rs are remarkably similar. For other class B1 GPCRs, we have demonstrated that the active complexes are dynamic and that dynamics can contribute to the pharmacological behaviour of bound ligands (Dong et al. 2020). To understand and visualize the dynamic motions in GLP-1R complexes, 3D variability analysis implemented in cryoSPARC was performed. In 3D variability analysis, the particles of each individual complex were partitioned into the top five principal components (modes) and reconstructed into a continuous series of 20 3D volumes (frame000-frame019), with each mode corresponding to a different type of variability. The top three principal components of both semaglutide- and taspoglutide-bound GLP-1R-Gs complexes were recorded in Video S1, and revealed common twisting and rocking motions of the GLP-1R relative to the Gs protein (Video S1, components 2 and 3), where the extent of motion was comparable to that previously reported for the GLP-1-bound complex.
However, clear differences in the dynamics of each complex were observed, with the most notable changes occurring at the extracellular side of the receptor, as shown in Video S1, component 1. The extracellular end of TM2 and ECL1 of the semaglutide-bound receptor exhibited a robust motion, consistent with the lower resolution of this region compared to the maps with the other peptides (Figures S1B-C). Modelling the backbone of the GLP-1R into the two maps at the extremes of the mode (frame000 and frame019) of component 1 revealed 7 Å and 14 Å movements of the top of TM2 and ECL1, respectively (Figure 5A). This region was also dynamic in GLP-1-bound GLP-1R, but with less movement (limited movement of the top of TM2 and only a 7 Å movement of ECL1). Moreover, differences were observed in the direction of motion between the GLP-1- and semaglutide-bound structures. In the GLP-1-bound GLP-1R, ECL1 moved towards TM1, whereas in the semaglutide-bound structure the movement towards TM1 was linked with a large outward movement (Video S1, Figure 5A). In contrast, the TM2/ECL1 region of taspoglutide-bound GLP-1R was relatively stable compared to that of the semaglutide and GLP-1 complexes (Video S1, Figure 5B). Interestingly, while the extracellular portions of TMs1/6/7 and ECL3 were relatively stable in the complex with GLP-1, these regions underwent a large motion away from the TM core in the semaglutide complex, transitioning 7 Å and 8 Å at the top of TM1 and TM7, respectively (Video S1, Figure 5A). Moreover, the extracellular tips of TM4/5 and ECL2 also moved outward. In parallel, the whole semaglutide peptide co-ordinately moved almost 5 Å towards TMs1/6/7, with the exception of the very end of its N-terminus (Video S1, Figure 5A). A similar, but less dramatic, motion was observed in the taspoglutide-bound structure, with a 3 Å movement of the taspoglutide peptide towards TMs1/6/7, facilitated by a smaller outward motion of the top of TM1, TM6/ECL3/TM7 and TM4/ECL2/TM5 relative to the remainder of the TM bundle (Video S1, Figure 5B).
The extent of motion of the extracellular side of the TM bundle and the peptide N-terminus corresponded to the relative change in the ECD (Video S2, component 4), consistent with the lower resolution of the ECD relative to the remainder of the receptor in both semaglutide- and taspoglutide-bound structures (Figures S1B-C). The ECD of the taspoglutide-bound receptor shifted 12 Å when measured between the Cα of G52 ECD in the models built into the maps at the extremes of component 4 (Figure 6A), compared to a 7 Å motion of the ECD in the GLP-1-bound structure in an equivalent analysis. The ECD of the semaglutide-bound GLP-1R was more dynamic than that of the other peptides (Videos S1 & S2), leading to ambiguous backbone modelling in the extreme map of the dynamic state (frame000); thus the ECD was rigid-body fitted into the frame000 map of component 4 (Figures 6A, S7B). As the metastable position of the ECD, particularly the N-terminal helix, was identical among these structures (Figure 4A), the consensus semaglutide-bound GLP-1R model is shown as a reference to compare the direction of motion of the ECD in the receptor complexes with different peptides. In the semaglutide-bound structure, the ECD shifted between the consensus position and TM1, while it moved towards ECL1 in the GLP-1- and taspoglutide-bound structures, with the angle between the two transitional routes around 90° (Figure 6B).
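Once models have been built into the two extreme frames, the domain shifts quoted here reduce to distances between matched Cα positions; a minimal sketch of that measurement is shown below, with the coordinate arrays standing in for atoms parsed from the two models.

```python
import numpy as np

def ca_displacements(ca_frame000: np.ndarray, ca_frame019: np.ndarray):
    """Per-residue Ca displacement (Angstrom) between two models of the same
    chain, each given as an (n_residues, 3) array in matching residue order."""
    return np.linalg.norm(ca_frame019 - ca_frame000, axis=1)

# Usage sketch: d = ca_displacements(ca_000, ca_019); d[idx_G52] would give
# the ~12 Angstrom ECD shift reported for the taspoglutide-bound receptor.
```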
The G protein heterotrimer (with the exception of the AHD), in the presence of Nb35, was relatively stable, with only slight motions at the protein surface (Video S1). In contrast, the AHD was highly mobile, and 3D classification of the taspoglutide-bound complex resolved it in three major positions (Figure 7B). Although there were more particles with the AHD in the open conformation (51%), the closed form, containing only 14% of the particles, was the best resolved, enabling this to be improved in the G protein-focused refinement and, subsequently, for this region to be modelled at a backbone level (Figure S1C). The distribution of particles in the five classes of the semaglutide complex, however, was very different from that of the taspoglutide-bound complex, with two classes (19% and 28% of particles) occupying the open position, and one class of 17% of particles in a location similar to the middle position of the AHD in the taspoglutide-bound complex. The other two classes were highly varied and appeared to cross positions between different classes (Figure 7A). Moreover, none of these classes was well resolved. A morph between the major conformations was recorded in Video S2.
DISCUSSION
Taspoglutide has 93% identity with endogenous GLP-1 (7-36)NH2, with replacement of the amino acids at positions 8 and 35 by Aib to improve DPP-4 and serine protease resistance; the latter substitution was also designed to increase the C-terminal helicity and thereby the receptor binding affinity (Sebokova et al. 2010). In line with this, continuous helical secondary structure could be resolved for the far C-terminus of taspoglutide, whereas the C-terminus (Gly35-Arg36) of GLP-1 and semaglutide lacks secondary structure (Figure 3C). Despite a clinical efficacy for glycaemic control and weight loss comparable to approved GLP-1R agonists, the development of taspoglutide was terminated in late phase 3 clinical trials due to severe gastrointestinal side effects and systemic allergic reactions (Rosenstock et al. 2013). In contrast, once-weekly semaglutide, approved in 2017, was designed to maintain structural similarity to endogenous GLP-1 but with reduced DPP-4 metabolism and enhanced plasma albumin binding to decrease clearance (Lau et al. 2015). This is achieved by the Aib8 substitution, and replacement of Lys34 with Arg to enable site-specific acylation of Lys26 (Lau et al. 2015).
Comparison of the GLP-1- and semaglutide-bound GLP-1R structures revealed that the Arg34 substitution did not modify the secondary structure of the peptide C-terminus (Figure 3C).
While density for the modified Lys26 was not resolved in initial maps, the linker could be resolved by focused refinement of the ECD. The most prominent density for the linker was oriented towards the detergent micelle, suggesting that the lipid component extends into the micelle and likely interacts with the lipid bilayer in native membranes, stabilising the downward position of the lysine and linker.
Despite these minor differences in peptide structure, semaglutide and taspoglutide share high sequence identity with GLP-1, and they exhibited similar potency to GLP-1 for cAMP accumulation in CHO cells that overexpress the GLP-1R (Figure S5). Thus, it is not surprising that the peptide binding mode and the global conformations of the receptor and Gs protein were very similar in their active structures (Figures 2-4). Indeed, in the consensus models, the only notable difference in receptor structure occurred in ECL1 (S206-S219), which adopted different conformations between the taspoglutide- and semaglutide-bound complexes, with the latter similar to that of the GLP-1-bound structure. Nonetheless, only limited direct interactions occurred between ECL1 and the peptides (Figure 3 and Table S2). This is consistent with earlier alanine mutagenesis studies, where individual mutation of S206-S219 had only limited effect on GLP-1-induced receptor affinity and cAMP signalling, albeit that further structure-function studies of this region for semaglutide and, in particular, taspoglutide are required to understand its importance for their function.
Semaglutide has become the gold standard for GLP-1R agonists, demonstrating more robust effects in the control of glycaemia and body weight than other GLP-1R agonists, with a comparable safety profile (Tsoukas et al. 2017, Ahmann et al. 2018, O'Neil et al. 2018, Pratley et al. 2018). Moreover, semaglutide has greater efficacy in reducing cardiovascular risk than the other currently marketed peptides (Marso et al. 2016) and holds promise for the treatment of non-alcoholic steatohepatitis (NASH), for which it is in phase 2 clinical trials (NCT02970942).
In contrast, taspoglutide did not reduce cardiovascular risk (Henry et al. 2012). While only limited pharmacological data are currently available for taspoglutide, both semaglutide and taspoglutide are reported to have distinct recycling and trafficking profiles relative to GLP-1 (Fletcher et al. 2019). Consequently, there must be distinct mechanisms that contribute to these pharmacological differences beyond the metastable consensus structures, which exhibit almost identical patterns of interactions between the peptides and the receptor. As accumulating data suggest that GPCR dynamics may influence function (Dong et al. 2020), we further analysed the conformational dynamics of the semaglutide- and taspoglutide-bound active complexes. The ability of cryo-EM to capture spectra of conformations present during vitrification allows insight into the conformational dynamics of proteins (Lau et al. 2018, Murata and Wolf 2018), and was critical in the identification of distinctions in the behaviour of the different GLP-1R peptide complexes. The potential for differences in the mobility of the complexes could be inferred from the relatively lower resolution of the ECD, the extracellular portion of the TM domain and the AHD of Gαs, suggesting that these domains were more dynamic.
Therefore, we hypothesized that the conformational dynamics of GLP-1R complexes, rather than the consensus metastable interactions, could contribute to their distinct pharmacological profiles.
Parsing out the principal components contributing to the different conformational ensembles using 3D variability analysis revealed that the extracellular side of TMs1/6/7 and ECL3 underwent a large movement away from the TM core in the semaglutide structure (Figure 5A), while this region was relatively stable in the GLP-1-bound structure. Active GLP-1R structures in complex with the biased agonists CHU-128, TT-OAD2 and ExP5 have different conformations of this region relative to that of GLP-1-bound GLP-1R, and previous mutagenesis studies have shown that this region is crucial for biased agonism. In parallel with the conformational change in TM1/6/7/ECL3, there was outward motion of TM2/ECL1/TM3 and TM4/ECL2/TM5, leading to enlargement of the binding cavity and a 5 Å shift of the semaglutide peptide towards TMs1/6/7, resulting in the loss of interactions with ECL2 (Figure 5A). Extensive mutagenesis studies of the GLP-1R have suggested that the ECL2 interactions are essential for all peptide-induced receptor activation (Koole et al. 2012, Dods and Donnelly 2015).
While the outward motions were also observed in the taspoglutide-bound complex, the extent of motion was lower. We speculate that the "open" state might represent an intermediate conformation that occurs during peptide-receptor association/dissociation.
The ECD in the taspoglutide-bound structure was also less dynamic than that of the semaglutide-bound receptor, and this was linked to stronger peptide interactions with ECL1 that limit the motion of the peptide, as well as to the dynamics of ECL1 (Videos S1 & S2). The ECD in both structures underwent a twisting motion consistent with the predicted motion of the ECD as it moves from an inactive state to agonist-bound states; however, this motion was greater in the semaglutide-bound complex, leading to a shift in the position of the N-terminal helix of the ECD from the consensus position towards TM1 (Figure 6B). In contrast, the more restricted ECD motion in the taspoglutide-bound GLP-1R moved the N-terminal helix towards ECL1. The direction of motion for the taspoglutide complex was more consistent with that seen for the GLP-1-bound GLP-1R ECD, and also that of TT-OAD2; ~90° different from the semaglutide complex (Figure 6B). Previous work has established a critical role for the GLP-1R ECD in peptide-induced cAMP signalling (Yin et al. 2016); however, how ECD dynamics influence cell signalling is still unknown.
Nonetheless, recent work on CGRP/adrenomedullin receptors has provided evidence that the ECD dynamics of these receptors play a key role in peptide selectivity. Intriguingly, although equivalent 3D variability analysis was not performed, multiple 3D classes that differed in the location of the ECD were resolved for the active complex of the human parathyroid hormone receptor-1 (Zhao et al. 2019). In that study, the data were suggestive of separation of the peptide C-terminus and the ECD in one of the classes, but with the peptide N-terminus still engaged with the TMD core. Whether an equivalent process also occurs for the GLP-1R is unclear, but the loss of resolution where larger motions were observed could be consistent with partial disengagement, and the dynamics of the complexes are likely to play an important role in peptide dissociation.
The AHD of the Gα subunit has been poorly resolved in most cryo-EM structures of GPCR-G protein complexes. However, previous studies have demonstrated ligand-dependent differences in the conformational sampling of Gs that could be linked to GTP binding and G protein turnover. As such, the differences in the observed conformational dynamics of Gs in our structures could provide insight into efficacy differences between ligands.
In conclusion, we have solved 2.5 Å cryo-EM structures of GLP-1R-Gs complexes with two peptides, taspoglutide and semaglutide, that had distinct outcomes in clinical trials. These revealed highly similar metastable peptide binding modes and conformations of the active receptor-Gs complex that were also similar to those of the native GLP-1 peptide. However, each GLP-1R complex displayed unique peptide-dependent dynamics, suggesting that conformational dynamics may be critical for the pharmacological differences between the peptides. While much more structural data, including complexes with different transducers and complexes in lipidic environments, will be required to fully understand the link between structure and function, our structures and the dynamic information captured by cryo-EM provide mechanistic insights into ligand binding, receptor activation and G protein coupling of class B1 GPCRs.
Declarations of interest
The authors declare no conflict of interest.

Figure 5. a, Semaglutide-GLP-1R-Gs complex. The distance between the tops of TMs1/7 in the two extreme maps was 6.7 Å and 8.4 Å when measured at the Cα of E139 1.34 and G377 7.32, respectively. Motions of TM2 and ECL1 were observed spanning 7.5 Å and 13.6 Å when measured at the Cα of M204 2.74 and D215 ECL1, respectively. The peptide underwent a 4.7 Å movement measured between the Cα of Val33 (frame 000) and Ala30 (frame 019). b, Taspoglutide-GLP-1R-Gs complex. The motions of the tops of TMs1/7 were 3.6 Å and 5.4 Å when measured at the Cα of E139 1.34 and G377 7.32, respectively. The peptide had a 3.2 Å movement measured between the Cα of Ala30 (frame 000) and Ile29 (frame 019). Colours are highlighted on the figure panels. Also see Figure S7 and Videos S1 and S2.

Figure 7. Also see Figure S7 and Video S1. The other two classes differed between complexes and had less well-defined AHDs (grey and yellow). a, Semaglutide-bound complex; two classes have an "open" conformation, and one an "intermediate" conformation. b, Taspoglutide-bound complex; the AHD adopted three major conformations. The percentage of particles contributing to each class is shown above the density maps, labelled in the same colour. Also see Video S2.
CONTACT FOR REAGENT AND RESOURCE SHARING
Further information and requests for resources and reagents should be directed to and will be fulfilled by the Lead Contact, Denise Wootten (denise.wootten@monash.edu).
EXPERIMENTAL MODELS
Protein expression for biochemical analysis and purification was performed in Spodoptera frugiperda (Sf9) and Trichoplusia ni (Tni) insect cells, maintained in ESF 921 serum-free media (Expression Systems) at 30°C. CHOFlpIn cells used for mammalian cell studies were maintained in DMEM supplemented with 5% FBS at 37°C in 5% CO2. Escherichia coli (E. coli) strain BL21, used to express Nb35, was cultured in Luria-Bertani (LB) liquid medium (10 g tryptone, 10 g NaCl and 5 g yeast extract per litre) with continuous shaking (180 rpm), or on LB agar plates (LB medium with 15 g agar per litre), at 37°C.
Constructs
The human GLP-1R was modified to replace the native signal peptide with that of haemagglutinin (HA) to improve receptor expression, and to contain an N-terminal Flag tag epitope and a C-terminal 8xHis tag. A 3C protease cleavage site (LEVLFQGP) was inserted between each tag and the receptor. The construct was generated in both insect and mammalian cell expression vectors, and the modifications did not affect the receptor pharmacological profile. Dominant negative Gαs (DNGαs) includes the mutations S54N, G226A, E268A, N271K, K274D, R280K, T284D, I285T and A366S of Gαs, which have been described previously to reduce nucleotide binding affinity and enhance the stability of the GLP-1R-Gs heterotrimer complex. The 8xHis-tagged Gβ1-γ2 construct and 8xHis-tagged Nb35 were provided by B. Kobilka.
Insect cell expression
Human GLP-1R, human DNGαs and human Gβ1-γ2 constructs were co-expressed in Tni insect cells using the baculovirus expression system as previously described. Briefly, Tni insect cells were grown in ESF 921 serum-free media to a density of 3.5 million cells/mL and infected with GLP-1R, DNGαs and Gβ1-γ2 baculoviruses at a multiplicity of infection (MOI) ratio of 3:3:1. Cultures were harvested by centrifugation 48 hr post-infection and cell pellets were stored at -80°C.
The resin was packed into a glass column and washed with 20 column volumes of 20 mM HEPES pH 7.4, 100 mM NaCl, 2 mM MgCl2, 5 mM CaCl2, 1 µM peptide, 0.01% (w/v) LMNG and 0.0006% (w/v) CHS, followed by elution with buffer containing 5 mM EGTA and 0.1 mg/ml FLAG peptide. The complex was then concentrated using an Amicon Ultra centrifugal filter (MWCO 100 kDa) and subjected to size-exclusion chromatography (SEC) on a Superdex 200 Increase 10/300 column (GE Healthcare) pre-equilibrated with 20 mM HEPES pH 7.4, 100 mM NaCl, 2 mM MgCl2, 1 µM peptide, 0.01% (w/v) LMNG and 0.0006% (w/v) CHS, to separate the complex from contaminants. Eluted fractions consisting of the receptor-G protein complex were pooled and concentrated to 2-4 mg/mL. The complex samples were flash frozen in liquid nitrogen and stored at -80°C.
SDS-PAGE and Western blot analysis
Samples from important steps during purification were collected and analysed by SDS-PAGE and western blot. A TGX™ precast gel (BioRad) was used to separate the proteins within samples at 200 V for 30 min. Gels were then either stained with Instant Blue (Sigma Aldrich) or immediately transferred to a PVDF membrane (BioRad) at 100 V for 1 hr. The proteins on the PVDF membrane were probed with two primary antibodies simultaneously: rabbit anti-Gs C-18 antibody (cat. no. sc-383, Santa Cruz) against the Gαs subunit and mouse poly-His antibody (cat. no. 34660, QIAGEN) against the His tag. The membrane was washed and incubated with secondary antibodies, followed by probing with a FLAG-FITC antibody (prepared in the lab) against the Flag tag on GLP-1R. The membranes were imaged using a Typhoon 5 imaging system (Amersham).
Negative staining and data processing
The complex samples were diluted to 0.006 mg/mL in 20 mM HEPES pH 7.4, 100 mM NaCl, 2 mM MgCl2 and 1 µM peptide, and applied to continuous carbon grids (EMS). The grids were stained with 0.8% (w/v) uranyl formate solution and imaged using a Tecnai™ T12 TEM at 120 kV. Around 50 images of each complex were collected with a magnified pixel size of 2.06 Å, and ~20,000 particles of each complex were auto-picked, extracted and 2D classified using RELION-3.0-beta.
Vitrified sample preparation and data collection
Samples (3 µL) were applied to glow-discharged Quantifoil R1. grids and vitrified; data were collected with the experimental parameters listed in Table S1 using a 9-position beam-image shift acquisition pattern implemented through custom scripts in SerialEM (Schorb et al. 2019).
Data processing
As Figure S2 shows, 7,488 and 8,759 movies of the semaglutide- and taspoglutide-GLP-1R-DNGs complexes, respectively, were motion corrected using MotionCor2 (Zheng et al. 2017) and subjected to contrast transfer function (CTF) estimation using Gctf (Zhang 2016). Particles were picked from the corrected micrographs using crYOLO (Wagner et al. 2019) for the semaglutide dataset, and using the RELION (version 3.0.7) (Zivanov et al. 2018) reference-based picker with a 20 Å low-pass filtered 3D map reference for the taspoglutide dataset. Picked particles were extracted and 2D classified using RELION (version 3.0.7). The selected 2D particles were used to generate an initial 3D model based on the Stochastic Gradient Descent (SGD) algorithm, which was subsequently applied to 3D classification. Particles from the best-looking class were subjected to Bayesian particle polishing, 2D classification, CTF refinement and cycles of 3D auto-refinement in RELION. 886,738 and 625,241 particles for the semaglutide- and taspoglutide-GLP-1R complexes were used to generate final maps by 3D auto-refinement, sharpened with B-factors of -55 Å² and -70 Å², respectively. Local resolution was determined using RELION with the half-reconstructions as input maps. The map density of the detergent micelle and the Gαs helical domain was averaged out during the final map reconstruction for clarity.
To improve the local map quality, the refined particles were subjected to a further 3D classification with a loose mask on the ECD in RELION. The best resolved classes, 233K and 346K particles for the semaglutide and taspoglutide complexes respectively, were further refined to generate ECD-focused maps. The refined particle stack was also 3D classified into 5 classes using a broad mask on the AHD, without alignment. Each particle class (244K, 204K, 168K, 144K or 104K particles for the semaglutide complex; 317K, 85K, 84K, 79K or 61K particles for the taspoglutide complex) was 3D auto-refined, and the best resolved class of the taspoglutide complex (158K particles) was further refined and post-processed with a G protein mask to generate a final map of the AHD.
Atomic model refinement
The model of GLP-1-GLP-1R-DNGs (PDB: 6X18) was used as the initial template and fitted into the cryo-EM density maps with the MDFF routine in namd2 (Chan et al. 2012). The fitted models of the GLP-1R transmembrane domain (TMD), Gs protein and Nb35 were further refined by manual model building in COOT (Emsley et al. 2010) and real-space refinement, as implemented in the Phenix software (Adams et al. 2010). The peptide C-terminus, ECD and ECLs were modelled manually without ambiguity with the aid of the ECD-focused map. Density for the linker (K130 ECD-S136 ECD) and ICL3 (L399 ICL3-T343 ICL3) was not clear and these residues were omitted from the final model. The AHD of the taspoglutide complex was modelled according to the AHD-focused map, with the side chains of 31 residues omitted. The AHD of the semaglutide complex could not be modelled due to the low resolution of this region.
The final models were subjected to global refinement and comprehensive validation. The cryo-EM data collection, refinement and validation statistics are reported in Table S1.
Model residue interaction analysis
Interactions in the PDB files between the bound peptide and the receptor were analysed using the "Dimplot" module within the LigPlot+ program (v2.2) (Laskowski and Swindells 2011).
Hydrogen bonds were additionally analysed using the UCSF Chimera package (Pettersen et al. 2004), with relaxed distance and angle criteria (0.4 Å and 20° tolerance, respectively).
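To make the relaxed criteria concrete, the sketch below shows one plausible way such a geometric hydrogen-bond filter can be implemented. The baseline cutoffs (donor-acceptor distance and D-H...A angle) are illustrative assumptions, not Chimera's internal values; only the 0.4 Å / 20° tolerances are taken from the text.

```python
import numpy as np

# Illustrative baseline geometry; only the tolerances come from the methods.
BASE_DIST = 3.3      # assumed donor-acceptor distance cutoff (Angstrom)
BASE_ANGLE = 120.0   # assumed minimum D-H...A angle (degrees)
DIST_TOL, ANGLE_TOL = 0.4, 20.0  # relaxed tolerances quoted in the text

def is_hbond(donor, hydrogen, acceptor):
    """Geometric hydrogen-bond test on three 3D coordinates (numpy arrays)."""
    d_da = np.linalg.norm(acceptor - donor)
    # Angle at the hydrogen between the H->D and H->A vectors.
    v1, v2 = donor - hydrogen, acceptor - hydrogen
    cos_a = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return d_da <= BASE_DIST + DIST_TOL and angle >= BASE_ANGLE - ANGLE_TOL

# Example: a near-linear contact at 3.5 A passes under the relaxed criteria.
print(is_hbond(np.array([0.0, 0, 0]), np.array([1.0, 0, 0]), np.array([3.5, 0, 0])))
```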
Cryo-EM dynamics analysis
3D variability analysis implemented in cryoSPARC (v2.9) (Punjani et al. 2017) was performed to understand and visualize the dynamics in GLP-1R complexes, as previously described for analysis of the dynamics of adrenomedullin receptors.
The particle stacks of the semaglutide- and taspoglutide-GLP-1R-Gs complexes from the RELION consensus refinements were imported into the cryoSPARC environment. 3D refinement was performed using a low-pass filtered RELION consensus map as an initial model and a generous default mask created in cryoSPARC. The 3D variability of these GLP-1R complexes was analysed across 5 modes, and 20 volume frames per mode were generated in cryoSPARC (Punjani et al. 2017). Output files were visualized as volume series in UCSF ChimeraX and captured as movies (Goddard et al. 2018).
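Conceptually, 3D variability analysis resembles a principal component analysis over per-particle volume variation; the toy sketch below illustrates that idea on a stack of aligned 3D volumes and generates a series of frames along each mode. It is a didactic stand-in, not the cryoSPARC algorithm (which operates on raw particle images with CTF and pose handled internally), and all array names are placeholders.

```python
import numpy as np

def variability_modes(volumes: np.ndarray, n_modes: int = 5, n_frames: int = 20):
    """PCA over a stack of aligned 3D volumes, shape (n_vols, nx, ny, nz)."""
    n_vols = volumes.shape[0]
    flat = volumes.reshape(n_vols, -1).astype(np.float64)
    mean = flat.mean(axis=0)
    centered = flat - mean
    # Economy-size SVD: right singular vectors are the variability modes.
    _, svals, vt = np.linalg.svd(centered, full_matrices=False)
    frames = []
    for k in range(n_modes):
        amp = svals[k] / np.sqrt(n_vols)        # 1-sigma amplitude of mode k
        # Sweep the mode coefficient from -2 sigma to +2 sigma in 20 steps.
        coeffs = np.linspace(-2 * amp, 2 * amp, n_frames)
        frames.append([(mean + c * vt[k]).reshape(volumes.shape[1:])
                       for c in coeffs])
    return frames   # frames[k][i] plays the role of "frame00i" of component k
```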
The backbones of both peptide-bound GLP-1Rs were rigid-body fitted into the two maps at the extremes of the mode (frame000 and frame019) of component 1 and further refined by manual model building in COOT. The ECD of the taspoglutide-bound receptor was modelled by rigid-body fitting and manual model building into the two extreme maps (frame000 and frame019) of component 4. The consensus ECD of the semaglutide-bound receptor was rigid-body fitted into the frame000 map of component 4.
Stable cell line generation
The wild-type (WT) and mutant GLP-1R constructs were integrated into FlpIn-CHO cells using the FlpIn Gateway technology system (Invitrogen). Stable CHOFlpIn expression cell lines were selected using 600 μg/ml hygromycin B, and maintained in DMEM supplemented with 5% (v/v) FBS (Invitrogen) at 37°C in 5% CO2.

cAMP accumulation assay

The cAMP accumulation assay was performed as described previously (Hager et al. 2017).
CHOFlpIn WT or mutant GLP-1R cells were seeded at a density of 30,000 cells per well into 96-well plates and incubated overnight at 37°C in 5% CO2. All values were converted to cAMP concentration using a cAMP standard curve performed in parallel, and data were subsequently normalized to the response to 100 μM forskolin in each cell line, and then to the WT response for each peptide agonist.
Data were analysed using a 3-parameter logistic fit in Prism and assessed for differences in fitted parameters from the parental construct at 95% confidence intervals. Differences in globally fitted curves were also assessed using an extra sum-of-squares F test at p<0.05, with post-hoc assessment of individual fitted parameters where curves were statistically different.
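For readers without Prism, a 3-parameter logistic concentration-response fit of the kind described can be reproduced with SciPy; the sketch below is a generic illustration with synthetic data, and the parameter names (bottom, top, log_ec50) follow the usual convention rather than any file from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def three_param_logistic(log_conc, bottom, top, log_ec50):
    """Hill slope fixed at 1, as in a standard 3-parameter fit."""
    return bottom + (top - bottom) / (1.0 + 10.0 ** (log_ec50 - log_conc))

# Synthetic concentration-response data (log10 M agonist vs. % forskolin).
log_conc = np.linspace(-12, -6, 10)
response = three_param_logistic(log_conc, 2, 85, -9.5)
response += np.random.default_rng(0).normal(0, 3, log_conc.size)

popt, pcov = curve_fit(three_param_logistic, log_conc, response,
                       p0=[0, 100, -9])
bottom, top, log_ec50 = popt
print(f"pEC50 = {-log_ec50:.2f}, Emax = {top:.1f}% forskolin")
```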
Data availability
The atomic coordinates and the cryo-EM density maps generated during this study are available at the Protein Data Bank (https://www.rcsb.org) and the Electron Microscopy Data Bank (https://www.ebi.ac.uk/pdbe/emdb) under accession numbers XXXX and XXXX, and EMDB entry IDs EMD-XXXXX and EMD-XXXXX, for the semaglutide and taspoglutide complexes, respectively.
"year": 2021,
"sha1": "8078d0e05a3db7537b1d82ebe0422e2cbe700246",
"oa_license": "CCBY",
"oa_url": "http://www.cell.com/article/S2211124721007725/pdf",
"oa_status": "GREEN",
"pdf_src": "BioRxiv",
"pdf_hash": "8078d0e05a3db7537b1d82ebe0422e2cbe700246",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Comparison of Proliferative and Multilineage Differentiation Potential of Sheep Mesenchymal Stem Cells Derived from Bone Marrow, Liver, and Adipose Tissue
Background: Despite major progress in our general knowledge related to the application of adult stem cells, finding alternative sources for bone marrow Mesenchymal Stem Cells (MSCs) has remained a challenge. In this study, the successful isolation, multilineage differentiation, and proliferation potentials of sheep MSCs derived from bone marrow, adipose tissue, and liver were widely investigated. Methods: The primary cell cultures were prepared from tissue samples obtained from a sheep 30-35 day fetus. Passage-3 cells were plated either at varying cell densities or at different serum concentrations for a week. The Population Doubling Time (PDT), growth curves, and Colony Forming Unit (CFU) counts of the MSCs were determined. The stemness and trilineage differentiation potential of the MSCs were analyzed using molecular and cytochemical staining approaches. The data were analyzed by one-way ANOVA using SigmaStat (ver. 2). Results: The highest PDT and lowest CFU were observed in the adipose tissue group compared with the other groups (p<0.001). Comparing different serum concentrations (5, 10, 15, and 20%), irrespective of cell source, the highest proliferation rate was achieved in the presence of 20% serum (p<0.001). Additionally, there was an inverse relation between cell seeding density at culture initiation and proliferation rate, except for L-MSCs at a 300-cell seeding density. Conclusion: All three sources of fetal sheep MSCs had identical trilineage differentiation potential. The proliferative capacity of liver and bone marrow derived MSCs was similar at different cell seeding densities, except for the higher fold increase in B-MSCs at a density of 2700 cells/cm². Moreover, the adipose tissue derived MSCs had the lowest proliferative indices.
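As a side note on the growth metrics named in the abstract, population doubling time is conventionally computed from cell counts at two time points; the sketch below shows that calculation with made-up numbers, since the counting schedule used in this study is not detailed in this section.

```python
import math

def population_doubling_time(n0: float, nt: float, hours: float) -> float:
    """PDT = t * ln(2) / ln(Nt / N0), the standard exponential-growth formula."""
    return hours * math.log(2) / math.log(nt / n0)

# Hypothetical counts: 1e4 cells expanding to 8e4 cells over 96 hr of culture.
print(f"PDT = {population_doubling_time(1e4, 8e4, 96):.1f} hr")  # -> 32.0 hr
```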
Introduction
Mesenchymal Stem Cells (MSCs) are a relatively homogeneous sub-population of mononuclear cells and comprise a rare population of multipotent progenitors. These cells are capable of both supporting haematopoiesis and differentiating into different tissues originating from mesoderm, ranging from bone and cartilage to cardiac muscle 1,2.
The main criteria required for a cell to be regarded as a mesenchymal stem cell are its adhesion in monolayer culture under in vitro conditions, maintenance of its undifferentiated characteristics during extended passaging, and its differentiation potential into chondrocytes, osteocytes and adipocytes in vitro [3-5].
These adult stem cells are attractive candidates for cell-based therapeutic strategies, substantially because of their easy isolation, purification and amplification, their ability to be bio-preserved with minimal loss of potency, their multipotential differentiation, and their amenability to genetic manipulation, without adverse reactions to allogeneic versus autologous MSC transplants 5,6.
In the ensuing decades, extensive research has gone into unlocking the therapeutic potential of MSCs. Bone marrow stroma is the most common source of these cells for clinical use and is designated as the gold standard 7,18. This source of MSCs was first identified in the 1960s by Ernest A. McCulloch and James E. Till 19. Further experiments in the 1970s and 80s by Friedenstein et al expanded upon the potential of MSCs by demonstrating their capacity for self-renewal and multilineage differentiation 20,21.
Although bone marrow provides a universal source of MSCs, certain shortcomings of obtaining MSCs from it, including pain, morbidity, low cell number upon harvest, a high degree of viral contamination and a decrease in proliferative/differentiation capacity with age, have led alternative sources of MSCs to be sought and subjected to intensive investigation 14,21.
Among the different sources of MSCs, adipose tissue, like bone marrow, is derived from the embryonic mesenchyme and possesses abundant and easily accessible MSCs that can be harvested by a less invasive method 22 and grow easily under standard tissue culture conditions 23.
Recent advances in cosmetic surgery add to its advantage, with huge amounts of fatty tissue available. Moreover, it has a further advantage when the morbidity associated with large-volume bone marrow harvests is taken into consideration. Its multilineage differentiation potential was first identified by Zuk et al 22.
One alternative source is liver tissue. Different types of fetal liver (stem) cells have been identified during development, and their advantageous growth potential and bipotential differentiation capacity towards liver or biliary cells have been shown [24-26]. Moreover, there is evidence indicating the higher proliferative capacity and immunosuppressive effect of fetal MSCs compared to adult MSCs 27. In this context, there has been increasing interest in recent years in the development of cell-based therapeutic strategies as a novel approach in tissue engineering for the treatment of liver diseases 26. In addition, MSCs appear to share similar characteristics across species, which has facilitated the application of MSCs in translational studies using animal models. The aim of the present study was to compare the growth characteristics and multilineage differentiation potential of sheep fetal MSCs derived from bone marrow (B-MSCs), liver (L-MSCs) and adipose tissue (A-MSCs).
Materials and Methods
Except where otherwise indicated, all chemicals were obtained from Sigma (St. Louis, MO, USA). An ovine 30-35 day fetus was obtained from the slaughterhouse and transported to the laboratory on ice in Dulbecco's PBS (DPBS) with penicillin/streptomycin.
Bone marrow cell culture
Bone marrow was collected by flushing femurs and tibias with DMEM (Dulbecco's Modified Eagle's Medium) diluted with phosphate buffered saline (PBS, 1:2) and supplemented with 100 IU/ml penicillin and 100 IU/ml streptomycin. The mononuclear cell fraction was harvested by Ficoll separation of the marrow cells (20 min, 600 g). After separation of the cloudy corona and dilution with PBS, the cells were centrifuged at 200 g for 10 min and washed three times in 4-5 ml PBS. The cells were then incubated in complete medium composed of DMEM, 10% FCS, NEaa, NaHCO3 (3.7 mg/ml), L-glutamine and penicillin/streptomycin at a cell density of 5×10⁶ cells/ml in 5% CO2 at 37°C. The adherent elongated cells, MSCs, exhibited a homogeneous fibroblast-like morphology with spindle- or triangular-shaped cell bodies and large, elliptical nuclei, growing outward in a "swirling fibroblast-like" pattern. The non-adherent cells were removed after 48 hr with a medium change. After 3-4 days, cultures at 80-90% confluency were trypsinized using 0.05% trypsin/1 mM EDTA and then passaged at a 1:2 ratio into fresh 25 cm² culture flasks. Subculture was repeated until passage 3, when sufficient cells were available for the next stage of the experiment.
Adipose tissue cell culture
The adipose tissue from the lumbar paravertebral regions was separated and collected in 15 ml sterile tubes containing PBS supplemented with BSA (20 mg), 100 IU/ml penicillin (Sigma, USA) and 100 IU/ml streptomycin (Sigma, USA). After washing twice with PBS, under a laminar flow hood, the specimen was minced into small pieces and freed of fibrous tissue.
The specimen was subjected to enzymatic digestion using collagenase type IV (0.6 mg) at 37°C for 120 min. At the end of this time, the floating cells were separated from the stromal vascular fraction by centrifugation at 200 g for 10 min. The pellet (stromal vascular fraction) was washed twice and filtered through a 150 μm nylon mesh to remove undigested tissue. Mononuclear cells were harvested by Ficoll separation to obtain the mononuclear cell fraction.
After separation of the cloudy corona and dilution with PBS, the cells were centrifuged at 200 g for 10 min and washed three times in 4-5 ml PBS. The cells were suspended in proliferation medium consisting of DMEM (Dulbecco's Modified Eagle Medium, Sigma, USA) containing 10% FCS (fetal calf serum, Gibco, Germany), NEaa, NaHCO3 (3.7 mg/ml), L-glutamine and penicillin/streptomycin (100 U/ml and 100 mg/ml, respectively) (Sigma, USA), and plated at 10⁶ cells/ml in 25 cm² culture flasks. The cultures were incubated in an atmosphere of 5% CO2 at 37°C. At 45-48 hr after culture initiation, the medium was discarded and the cells were washed with PBS and fed with fresh medium. Medium changes were performed every 3 days until the cultures became confluent. At this time, the cultures were trypsinized using 0.05% trypsin/1 mM EDTA and passaged at a 1:2 ratio into fresh 25 cm² culture flasks. Subculture was repeated until passage 3, when sufficient cells were available for the next stage of the experiment.
Liver cell culture
The liver was carefully dissected from the fetus and placed in a 15 ml sterile tube containing Hank's balanced salt solution without calcium and magnesium, supplemented with 100 IU/ml penicillin and 100 IU/ml streptomycin, with pH adjusted to 7.3, and washed twice with PBS. Under a laminar flow hood, the specimen was mechanically minced into small pieces and then subjected to enzymatic digestion using collagenase IV (0.6 mg/ml) at 37°C for 75 min. At the end of this period, the floating cells were separated by centrifugation at 200 g for 10 min. The pellet was then washed twice with PBS and filtered through a 150 μm nylon mesh to remove undigested tissue.
The mononuclear cell fraction was harvested by Ficoll separation (20 min, 600 g). After separation of the cloudy corona and dilution with PBS, the suspension was centrifuged at 200 g for 10 min and washed three times in 4-5 ml PBS. The cells were then suspended in proliferation medium consisting of DMEM containing 10% FCS, NEAA, NaHCO₃ (3.7 mg/ml), L-glutamine and penicillin/streptomycin, and plated at 10⁶ cells/ml in 25 cm² culture flasks. The flasks were incubated in an atmosphere of 5% CO₂ at 37°C. After 45-48 hr of culture, the medium was removed and the cells were washed with PBS and fed with fresh medium. Medium changes were performed every 3 days until the cells became confluent. At this time, the cultures were trypsinized using 0.05% trypsin/1 mM EDTA and passaged at a 1:2 ratio into fresh 25 cm² culture flasks. Subcultures were repeated until passage 3, when sufficient cells were available for the next stage of the experiment.
Verification of MSCs
In addition to identification of MSCs based on their morphologic and phenotypic characteristics, their multilineage differentiation capacity into bone, fat, and cartilage was evaluated. Moreover, the stemness of the MSCs and the expression of one gene related to each cell lineage were confirmed by a molecular approach.
Osteogenic potential
Osteogenesis of MSCs was induced in vitro by treating a monolayer culture with a pro-osteogenic cocktail. In brief, MSCs at passage 3-4 were seeded at 3×10⁴ cells per well in 6-well plates containing proliferation medium and allowed to attain 70-80% confluency. The medium was then replaced by differentiation medium composed of DMEM supplemented with 10% FCS, 50 μg/ml ascorbate-2-phosphate (Sigma, USA), 10 nM dexamethasone and 10 mM β-glycerophosphate. The cells were kept in differentiation medium for 28 days, with medium changes twice weekly. At the end of this period, the number and size of the mineralizing nodules were maximal. To evaluate the mineralized matrix, cells were stained with 2% Alizarin Red S solution. Briefly, the differentiated cells were washed twice with PBS and fixed with 10% formalin for 10 min at room temperature. The cells were then washed thoroughly with PBS and stained with 2% Alizarin Red S solution (pH 4-4.1, with 0.5% NH₄OH) for 2-5 min. The mineralized matrix was identified by the presence of red foci in the stained specimen.
Adipogenic potential
To promote adipogenic differentiation, MSCs at passage 4 were plated at a concentration of 3×10⁴ cells/ml in 6-well culture plates until 70-80% confluency. The proliferation medium was replaced with adipogenic differentiation medium consisting of DMEM supplemented with 10% FCS, 50 μg/ml ascorbate-2-phosphate, 100 nM dexamethasone and 50 μg/ml indomethacin. The cultures were kept for 21 days, during which the medium was changed twice weekly. The first signs of adipocyte differentiation became evident 2-3 weeks after culture, when lipid droplets appeared in the differentiating cells. Eventually, the lipid-rich vacuoles within the cells coalesced and filled them. Accumulation of lipid in these vacuoles was assayed histologically by Oil Red O staining. Briefly, the cells were fixed in 10% formalin for 10 min, washed with PBS and stained with 0.3% Oil Red O solution for 10 min at room temperature. The intracellular lipid-rich vacuoles were stained as red foci.
Chondrogenic potential
Chondrogenesis of MSCs in vitro mimics cartilage development in vivo. To induce chondrogenic differentiation, passage-4 MSCs at a concentration of 3×10⁴ cells/ml were plated in 6-well culture plates until 70-80% confluency. The proliferation medium was replaced with chondrogenic medium consisting of high-glucose DMEM supplemented with 0.1 μM dexamethasone, 10 ng/ml TGF-β1, 50 μg/ml ascorbic acid, 50 mg/ml ITS+ premix (Becton Dickinson; 6.25 μg/ml insulin, 6.25 μg/ml transferrin, 6.25 ng/ml selenious acid) and 10% FCS, at 37°C for 3 weeks. Medium changes were carried out twice weekly and chondrogenesis was assessed at weekly intervals. To detect glycosaminoglycans within the extracellular matrix, the cells were stained with toluidine blue. Briefly, the differentiated cells were fixed with 10% formalin for 10 min at room temperature.
After washing, the cells were exposed to toluidine blue for 30 s at room temperature. Acid and sulfated mucopolysaccharides within the extracellular matrix were stained as violet foci.
Molecular verification of MSCs
RT-PCR analysis was performed to assess the expression of osteocyte-, adipocyte-, and chondrocyte-related genes in the differentiated cell lineages (one gene per lineage), as well as two genes related to the stemness status of MSCs. Total RNA was extracted using an RNA extraction kit (Rima zol; CinnaGen, Tehran, Iran) according to the manufacturer's instructions.
Before RT, the extracted RNA samples were treated with RNase-free DNase I (EN0521; Fermentas, Opelstrasse 9, Germany) to ensure that the RNA used for cDNA synthesis was free of DNA contamination. The RNA was reverse-transcribed to cDNA using 1 mg of extracted RNA, random hexamer primers for ovine genes, and M-MuLV Reverse Transcriptase RNase H- (Vivantis Technologies Sdn. Bhd., Selangor D.E., Malaysia). PCR reactions were performed on an Eppendorf Mastercycler using the primer sequences listed in Table 1. GAPDH served as the housekeeping gene. PCR products were analysed on a 1% agarose gel, stained with ethidium bromide and visualized with a Uvitec gel documentation system.
Growth characteristics
Clonogenic assay (colony-forming unit-fibroblast, CFU-F): To assess the capacity and efficiency of self-renewal, passage-3 MSCs obtained from bone marrow, adipose, and liver tissues were trypsinized, counted using a hemocytometer, plated at 100 cells per 10 cm Petri dish and allowed to grow for 9 days. At the end of this period, the plates were stained with 1% crystal violet in 100% methanol to visualize the fibroblastic colonies (more than 20 cells each) produced in culture. The Petri dishes were then examined under a light microscope to determine the number of colonies. The CFU-F efficiency was calculated according to the following formula: CFU-F efficiency = (counted CFU-F / cells originally seeded) × 100. Routinely, five CFU-F assays were performed for each isolated cell population.
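For illustration, the CFU-F efficiency formula above can be expressed in code. The following is a minimal Python sketch; the replicate colony counts in it are hypothetical placeholders, not data from this study.

```python
# Minimal sketch of the CFU-F efficiency calculation described above.
# Colony counts are hypothetical placeholders, not values from this study.

def cfu_f_efficiency(colonies_counted: int, cells_seeded: int) -> float:
    """CFU-F efficiency = (counted CFU-F / cells originally seeded) x 100."""
    return colonies_counted / cells_seeded * 100

# Five replicate dishes per cell population, 100 cells seeded per dish (as in the protocol).
replicate_colony_counts = [18, 22, 20, 19, 21]  # hypothetical counts
efficiencies = [cfu_f_efficiency(n, 100) for n in replicate_colony_counts]
print("Per-replicate efficiency (%):", efficiencies)
print("Mean CFU-F efficiency (%):", sum(efficiencies) / len(efficiencies))
```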
Population doubling time (PDT)
Passage-3 MSCs were plated at a density of 10⁴ cells/cm² in 25 cm² culture flasks and cultured until 100% confluency. At this time, the cells were trypsinized and counted with a hemocytometer. The PDT was calculated for each type of MSCs according to the following equation: PDT = culture time (CT) / population doubling number (PDN). To determine PDN, the formula PDN = 3.31 × log10(N_f/N_i) was used 17 . In this equation, N_i and N_f are the numbers of initiating and harvested cells, respectively. Routinely, population doubling time assays were performed in triplicate for each type of MSCs.
Equivalently, with t = culture time and N_h and N_i the numbers of harvested and initiating cells, respectively: PDT = (t × log 2) / (log N_h - log N_i).
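The two formulations are algebraically equivalent, since the factor 3.31 approximates 1/log10(2) ≈ 3.32. A minimal Python sketch, using hypothetical cell counts rather than data from this study, verifies this:

```python
import math

# Both PDT formulations from the text; the cell counts below are hypothetical.

def pdt_via_pdn(culture_time_h: float, n_initial: float, n_final: float) -> float:
    """PDT = CT / PDN, where PDN = 3.31 * log10(N_f / N_i)."""
    pdn = 3.31 * math.log10(n_final / n_initial)
    return culture_time_h / pdn

def pdt_direct(culture_time_h: float, n_initial: float, n_final: float) -> float:
    """Equivalent form: PDT = (t * log 2) / (log N_h - log N_i)."""
    return culture_time_h * math.log10(2) / (math.log10(n_final) - math.log10(n_initial))

# Example: 2.5e5 cells seeded, 2.0e6 cells harvested after 96 hr of culture.
print(pdt_via_pdn(96, 2.5e5, 2.0e6))  # ~32.1 hr
print(pdt_direct(96, 2.5e5, 2.0e6))   # ~32.0 hr (difference due to rounding 3.31)
```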
Cell seeding density
Passage-3 MSCs were plated at densities of 100, 300, 900, 2700, and 8100 cells/cm² in 6-well culture plates and cultured in DMEM supplemented with 10% FBS, 100 IU/ml penicillin and 100 mg/ml streptomycin. Five days after plating, the cells were trypsinized, lifted and counted with a hemocytometer. The fold increase was then calculated for each seeding density.
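The fold-increase metric used here (and again in the serum-concentration experiment below) is simply the harvested cell number divided by the seeded cell number. A minimal Python sketch follows; the well area and all cell counts are assumptions for illustration, not data from this study.

```python
# Minimal sketch of the fold-increase calculation for the seeding-density experiment.
# WELL_AREA_CM2 and all cell counts are hypothetical placeholders.

WELL_AREA_CM2 = 9.6  # assumed growth area of one well of a 6-well plate

def fold_increase(cells_harvested: float, cells_seeded: float) -> float:
    """Fold increase = harvested cell number / seeded cell number."""
    return cells_harvested / cells_seeded

# Hypothetical day-5 harvest counts for each seeding density (cells/cm^2).
harvest_counts = {100: 28_000, 300: 110_000, 900: 180_000, 2700: 260_000, 8100: 390_000}

for density, harvested in harvest_counts.items():
    seeded = density * WELL_AREA_CM2
    print(f"{density:>4} cells/cm^2 -> fold increase {fold_increase(harvested, seeded):.1f}")
```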
Serum concentration
MSC expansion is strongly dependent on the presence of FBS in the medium. To determine the optimal serum concentration, passage-3 cells were plated at a density of 16×10³ cells/cm² in 24-well culture plates in DMEM supplemented with FBS at concentrations of 5%, 10%, 15% and 20%. When one of the wells in each group reached confluence (day 7), all cells were trypsinized and counted with a hemocytometer. The fold increase in cell number was determined for each culture group, and the FBS concentrations were compared statistically.
Growth curve
Cultured cells typically grow in a characteristic pattern in which three phases can be recognized: lag, log and plateau. In the present investigation, a growth curve was plotted for the MSCs derived from bone marrow, adipose tissue, and liver in order to better compare the growth kinetics of the cells. For this purpose, passage-3 cells derived from each tissue were plated at 3×10⁴ cells/well in 24-well culture plates and allowed to become confluent. On a regular daily basis, some wells were trypsinized and the cell number was determined by hemocytometer count. Growth curves were plotted from these data.
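As an illustration of how such a curve can be plotted from daily hemocytometer counts, here is a minimal Python sketch using matplotlib; the daily counts are hypothetical placeholders chosen only to show the lag, log and plateau phases.

```python
import matplotlib.pyplot as plt

# Hypothetical daily hemocytometer counts (cells/well) showing lag, log and plateau.
days = list(range(9))
cell_counts = [30_000, 32_000, 45_000, 90_000, 180_000,
               320_000, 430_000, 470_000, 480_000]

plt.plot(days, cell_counts, marker="o")
plt.yscale("log")  # a log scale renders the exponential (log) phase as a straight line
plt.xlabel("Days in culture")
plt.ylabel("Cells per well")
plt.title("Hypothetical MSC growth curve")
plt.show()
```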
Results
MSCs growth characteristics
MSCs were successfully isolated and expanded in vitro from all three sources. In primary cultures, the cells grew rapidly; the medium change to remove nonadherent cells was carried out 48 hr after culture initiation, and 100% confluency was achieved on day 4 for liver and bone marrow MSCs and on day 5 for A-MSCs (Figure 1). The MSCs kept their phenotypically homogeneous morphology as well as their multiplication characteristics over sequential passages.
Assessment of PDT revealed no difference between bone marrow and liver derived MSCs when the cells were plated at 10⁴ cells/cm² in culture medium supplemented with 10% FBS (Table 2).
Concerning the clonogenic capacity of MSCs, no difference was observed between bone marrow and liver derived MSCs in the number of fibroblastic colonies (each more than 20 cells) formed 9 days after seeding 100 cells in a 10 cm Petri dish (Table 2, Figure 2).
In the liver and bone marrow MSC lines, the maximum proliferation of the cells (fold increase) was achieved when the seeding density was adjusted to 300 cells/cm² in the presence of 10% FBS (p<0.001). Indeed, as the seeding density increased, the fold increase decreased, except at 300 cells/cm² (Table 3). There was no significant difference in fold increase at the different seeding densities between bone marrow and liver derived MSCs, except at 2700 cells/cm², where the fold increase of bone marrow derived MSCs was higher than that of liver MSCs (p<0.01).
Cells plated with 20% FBS showed a significantly larger fold increase in cell number than those grown at lower serum concentrations (Table 4). According to the plotted curves, cells from all sources started proliferating immediately after plating. The cultures reached plateau approximately 5-9 days after initiation, depending on the cell source (Figure 3).
Multilineage cell differentiation
In osteogenic medium, the morphology of MSCs derived from the three sources changed from spindle-shaped to cuboidal, and mineralized foci were detected as red spotted areas stained with Alizarin Red (Figure 4). Induced chondrogenic differentiation was detected by the presence of glycosaminoglycans within the extracellular matrix, stained with toluidine blue as violet foci (Figure 5). MSCs differentiated into adipocytes underwent morphological changes and produced vacuoles containing lipid droplets, detected with Oil Red O staining as red areas (Figure 6).
MSCs molecular verification
At the molecular level, RT-PCR further confirmed the osteogenic, adipogenic, and chondrogenic differentiation potential of MSCs derived from the different sources through the expression of specific genes related to the differentiated cell lineages (one gene per lineage).
The stemness status of the MSCs isolated from the three sources was confirmed by the expression of two related genes (Figure 7).
Discussion
Physiological properties of mesenchymal stem cells, including straightforward manipulation, high proliferative capacity, immunomodulation, multilineage differentiation potential, and tropism for sites of injury, render them a potentially powerful candidate cell type for regenerative medicine as well as for the study of cellular differentiation 5,28 . The plasticity, self-renewal, and multilineage potential of MSCs have generated growing interest in their use in a constantly expanding variety of experimental regenerative therapies and for transplantation purposes 29,30 . Nevertheless, further studies in suitable animal models are needed to translate the potential of MSCs into clinical applications. The discovery of mesenchymal stem cells is credited to Alexander Friedenstein and associates, who over 46 years ago demonstrated that pieces of bone marrow transplanted under the renal capsule of mice formed a heterotopic osseous tissue that was self-maintaining, self-renewing, and capable of supporting host hematopoiesis 20 . Furthermore, Friedenstein showed that the osseous-forming activity of bone marrow resided in the fibroblastoid cell fraction isolated by preferential attachment to tissue culture plastic. Nowadays, bone marrow is the traditional source of human MSCs (hMSCs) for basic research and therapeutic purposes because of its routine and safe harvesting procedure 20,31 .
According to previous investigations, MSCs are present at low frequencies in bone marrow samples 1,32 . Therefore, finding alternative sources of mesenchymal stem cells for research and therapeutic purposes is necessary. In the present study, adherent spindle-shaped cells derived from fetal ovine liver, adipose tissue and bone marrow were collected and examined in terms of their growth characteristics, culture requirements and tripotent differentiation potential, which is proposed as a criterion of the MSC identity of the cells in question.
These cells showed a varying morphology, from the mostly observed spindle-shaped and elongated cells towards more cuboidal fibroblast-like cells with shorter cytoplasmic extensions (Figure 1). More specifically, Sekiya and colleagues demonstrated that hMSCs undergo a time-dependent morphological transition from thin (small), spindle-shaped cells (considered stem cells or early progenitors) to wider (larger) spindle-shaped cells (resembling more mature cells) when plated at 1 to 1000 cells/cm² 33 . Moreover, Jung et al showed that the small, spindle-shaped cells proliferate more rapidly and have a higher level of multipotentiality than the slowly replicating large cells, which have lost most of their multipotentiality 34 .
The morphology and size of hMSCs may also depend on culture conditions (e.g., growth media, culture surface). For instance, hMSCs cultured in bFGF-supplemented media were smaller and proliferated more rapidly than those cultured without bFGF 35 . Culture surfaces (e.g., those treated with Matrigel) might also affect the morphology 36 .
The ability of cells to re-form colonies, a primitive measure of progenitor cell activity, was determined using a colony-forming assay. Considering the growth characteristics of the three sources of MSCs, the highest PDT and the lowest colony-forming efficiency were observed in A-MSCs. It can therefore be inferred that whenever fast propagation of MSCs is required, adipose tissue is not a good candidate compared with liver and bone marrow. However, the ease of access and the amount of tissue available for isolation and purification of MSCs are additional factors that should be considered in choosing the source of MSCs (Table 1).
The colony numbers and PDT of B-MSCs and L-MSCs were comparable to the corresponding values of MSCs derived from different sources in other species [37][38][39][40] . Human and porcine bone marrow MSCs comprise approximately 2-3% of the total nucleated cell fraction in the bone marrow, with PDTs of 30-50 hr and 50-55 hr, respectively 38,41,42 . The B-MSC PDT in rat varies between 24 hr and 62 hr 39,43 . In pubertal sheep, the B-MSC PDT was 24.94 hr when plated at 100 cells/cm² 32 , though in another report it was about 50 hr 4 , which differs slightly from the result of our study (31 hr).
The PDT of fetal L-MSCs was slightly lower than that reported in another study (28 hr vs. 36 hr) 44 . It seems that, apart from the tissue source of MSCs, the site from which the tissue is obtained can influence the PDT. In this context, the PDT of epididymal and epicardial adipose tissue derived stem cells has been reported as 69±16 hr and 45±9.6 hr, respectively 45 .
There are several approaches to improving conventional tissue culture conditions in order to obtain a higher number of cells and to prevent loss of their differentiation capacity. Studies on the effect of serum source 46,47 and on the use of growth and differentiation factors, such as bone morphogenetic proteins and fibroblast growth factors, have received much attention 48,49 .
FBS contains a high content of growth factors as well as the nutritional and physiochemical compounds required for cell maintenance and growth 34 . Therefore, cell propagation is largely dependent on the presence of FBS in the culture medium 1,40 . FBS-based medium is the common standard for propagation of hMSCs in basic research and clinical studies 34,46 . Consequently, any experimental work with MSCs requires in vitro expansion of the cells with the most appropriate concentration of serum.
According to previous investigations, the number and proportion of MSCs in different tissues is quite low; therefore, their expansion is necessary before clinical studies can be performed 32 . To this end, we cultivated the three sources of ovine MSCs in the presence of different concentrations of FBS to determine the lowest serum concentration with the highest mitogenic effect. This is of utmost importance, especially in cell therapy strategies where rapid expansion of cells is desired. Under our study conditions, there was a positive relationship between serum concentration and the proliferative capacity of MSCs. The MSCs exhibited the highest proliferation when provided with a medium supplemented with 20% FBS. This finding contrasts with other studies in which the maximum fold increase in ovine and goat MSCs was achieved at a 15% serum concentration 32,40 . There was, however, no significant difference between 15% and 20% serum in rat and mouse MSC propagation 50,51 .
Concerning the effect of serum on cell propagation, one should bear in mind that the serum concentration may also influence the quality, functionality, immunomodulatory properties, and differentiation potential of MSCs in vitro and in vivo, especially in tissue engineering.
The other factor affecting the rate of MSC proliferation is the cell seeding density at culture initiation. The cell proliferation rate is of utmost importance, especially in cell therapy strategies where rapid expansion of cells is desired, and it can be used to optimize the culture conditions for maximum proliferation 45 . Based on our findings, there was an inverse correlation between MSC seeding density and proliferation, manifested as a reduction in fold increase with increasing seeding density at culture initiation, except for L-MSCs at a seeding density of 300 cells/cm² (Table 3). Moreover, in B-MSCs the fold increase at 300 cells/cm², though not significant, was higher than at 100 cells/cm². This finding is only partly in agreement with reports indicating that a density of 100 cells/cm² is superior for B-MSC proliferation in sheep 32 , goat 40 , mouse 51 , and rat 50 .
Despite the higher fold increase at lower densities, and considering the finite lifespan of MSCs, the question of how the higher number of population doublings in the lower-density groups may influence the quality and differentiation potential of MSCs remains to be investigated. It is likewise controversial whether cells proliferated from low seeding densities retain proliferation and differentiation potential similar to cells seeded at higher densities. Concerning the multilineage differentiation potential of the three sources of MSCs, their multilineage differentiation characteristics were confirmed by their differentiation into osteoblasts, adipocytes, and chondrocytes under appropriate culture conditions. Additionally, the differentiation status of the MSCs was further confirmed by mRNA expression analysis. No difference was observed in multilineage differentiation potential or mRNA expression between the different sources of MSCs.
Conclusion
In conclusion, under our study conditions all three sources of fetal sheep MSCs had the same multilineage differentiation potential. Among the three sources, bone marrow and adipose tissue derived MSCs had the highest and the lowest proliferative potential, respectively. In all three sources there was a negative relationship between seeding density at culture initiation and proliferative capacity, except for liver and bone marrow derived MSCs at a density of 300 cells/cm². Considering both the PDT and the proliferative potential, where rapid expansion of cells is desired, liver derived MSCs are a good alternative to bone marrow derived MSCs.
"year": 2013,
"sha1": "c9e46bbef85fa537efd01cc3f8bff5dd64e9c455",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "c9e46bbef85fa537efd01cc3f8bff5dd64e9c455",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Ekaterina Protassova, Maria Yelenevskaya, Johanna Virkkula
OLD AND NEW HOMES OF THE RUSSIAN LANGUAGE IN EUROPE
In a world in which transnational networking has become the norm and communication choices are made in real time, often under pressure, ease of interaction wins. Based on a critical study of documents, interviews, participant observation, and linguistic landscape analysis, this study discusses the situation of the Russian language in several Slavic countries (e.g., Slovenia, Bulgaria, Montenegro) and in Greece. Loyalty to the country of origin, or simply affinity with its culture, increases the solidarity of people speaking the same language, regardless of whether they have a good command of it or use it with difficulty. Multidirectional tendencies in education can either lower or raise the level of teaching and language use. A multitude of new language-contact situations have emerged, giving rise to centrifugal tendencies in the development of Russian.
I. Introduction
Historians, sociologists and writers of fiction have devoted many a book to the social and economic upheavals and human dramas accompanying the collapse of empires. What remains on the periphery of scholars' purview is the changes in languages that these events trigger and which may themselves cause political and social conflicts. The disintegration of the Soviet Union, sometimes referred to as the "last empire" 1 , was no exception in this respect. Most of the newly formed states rejected the dominance of the Russian language in the public domain, which had been the cornerstone of Soviet language policy since the 1930s. These changes were documented in legal acts. In each of the 15 internationally recognized states and six self-proclaimed separatist polities formed on the territory of the former Soviet Union (FSU), a clause about language is included in the constitution. In all the recognized states except Belarus, Kazakhstan and Kyrgyzstan, Russian lost the status of an official language. 2 Moreover, in some countries, e.g., Estonia, Turkmenistan, and Kazakhstan, various amendments to the language laws were made later to further elevate the prestige of the titular language and reinforce its role in the political, economic and social life of the country 3 . In Ukraine and Moldova, legislators are currently working on new initiatives determining the functions of the titular and minority languages. No wonder that young states put so much emphasis on determining the status and functions of the languages spoken by the population. Language legislation is a core component of a nation's political development, reflecting the aspirations of the elites. At the same time, a new political reality is shaped by the enforcement of language legislation 4 . Whether Russian is dubbed a minority language or a language of international communication, or is not mentioned at all in the language laws of the new states, 5 its functions have clearly been curtailed, its prestige has dropped and motivation to learn it has decreased, at least in some sections of the population. The change proved dramatic for the Russian-speaking populations, since the majority of them were monolingual. Building their lives anew, Russian speakers faced the dilemma of becoming proficient in the titular languages or leaving their native places and joining millions of post-Soviet migrants. Today, almost thirty years after the disintegration of the Soviet Union, we witness a paradoxical situation: the total number of proficient speakers of Russian has dropped and is estimated at around 265 million, 6 but the geography of Russian-language use has greatly expanded, with Russian-speaking enclaves found on all continents.
II. Russian and its variations in the metropolis
Although for a long time Russian was perceived as a monocentric language par excellence, and use of the standard literary language was essential for securing a good place on the social ladder, it is hardly conceivable that a language spoken across huge territories would be completely unified. Indeed, Russian linguists have been documenting the dialects and subdialects of the Russian empire since the mid-18th century. In the Soviet period, dialectologists continued fieldwork and analysis of the data, including experiments in the repertoire of dialectological methods. 7 In the 1940s and 1950s, atlases of Russian dialects were prepared. 8 At the same time, Russian as it was spoken in the Soviet republics was not researched. Yet its local varieties had begun to develop already in the times of the Russian Empire, and this process intensified in the Soviet period. Learning Russian at school was compulsory, but not everyone managed to master high-level literacy in Russian, learn its standard grammar, or distinguish between its functional styles. National varieties were influenced by the indigenous languages of the republics and differed from the dominant standard variety on many counts. The border regions of Russia and her neighbors were also interesting zones of deviation from standard Russian. Wars and political conflicts led to exchanges of population or to forced migration triggered by economic deprivation. Consequently, in some border zones one can encounter Russian-speaking villages using archaic forms and/or code-mixing Russian with the local idiom.
While Soviet linguists were aware of the importance of studying dialects, the overall attitude in Soviet society to them, and to the national varieties as they existed in the Soviet republics, was quite skeptical and even patronizing. Thanks to the fast pace of urbanization and the growing prestige of literacy and education, the use of dialects decreased dramatically and became limited to the elderly in rural areas. Dialects came to be associated with an archaic culture and socio-economic backwardness. Dialectisms became part of the jokelore deriding the uneducated and unsophisticated. Equally, specific features of Russian pronunciation and grammatical deviations from standard Russian typical of L2 speakers of Russian residing in the Soviet republics and the autonomous republics of the Russian Federation were an indispensable part of Soviet ethnic jokes. 9 Notably, the pronunciation of Soviet leaders, many of whom had traces of southern dialects in their speech, was mockingly imitated by the intelligentsia as a sign of the partocrats' poor education.
At the beginning of the post-Soviet period, when the Russian language underwent fast changes, the fashion reversed. Shedding the confines of what is "normative", journalists, bloggers and rank-and-file internet users started discussing differences between the local and the standard in the speech of their environment, arguing about etymology, compiling glossaries and tests on the knowledge of regionalisms, and "crowd-creating" comic lists explaining differences between the words used in the capital and in other parts of the country. A case in point is the differences between some lexemes one hears in St. Petersburg and Moscow, which often come up in Internet discussions and are successfully used by commercial companies as advertising gimmicks. 10 Another example is a glossary of 150 words collected by the journalists of the central newspaper Komsomol'skaja Pravda on the basis of materials published by its regional branches. In the introduction to the article the author writes: "Planning a trip in Russia, study this short phrasebook. Fine details of translating "from Russian to Russian" in some areas of our Fatherland might puzzle you greatly". 11 The Russian internet abounds in posts and subsequent discussions about regiolects. 12 Reflections about speech habits and increased language awareness are typical of folk linguists. Although many participants have little linguistic knowledge that could help them distinguish between regiolects, sociolects and idiolects, they are sensitive to speech varieties, reflecting a local culture that was not obliterated by the overwhelming standardization of the Soviet period. These observations resonate with Romaine's idea that it is more appropriate to think of a standard language as an idea rather than a reality, as a set of abstract forms to which actual usage may adhere to various degrees. 13 On the whole, the attitude of contemporary Russian linguists to regiolects is positive, and they are studied as part of the linguistic landscape, but the versions of the Russian language spoken in the diaspora are often treated as contaminations. Equally, the idea that "the Great and Mighty Russian language" has legitimate varieties in other countries is emotionally rejected by many educated Russians. As Muhr aptly remarks, language communities opposing their status of pluricentricity share a centralist and elitist notion of standard forms, and it takes at least two generations to adapt to the idea that several norms may coexist. 15 Proponents of the theory that Russian is a monocentric language view borrowings from contact languages solely as a sign of language attrition, and thus complex processes of linguistic and cultural hybridity are mistaken for a loss of Russian identity. In fact, continuing to be disdainful of diasporic versions of the Russian language ignores the fact that languages are dynamic entities, constantly malleable, constantly segmentable and segmented. They are marked by their internal potential for multiplication and differential developments generated by their users and uses and functionalized in context. Even the language of the communities of "Old Believers", known for their isolated way of life and great efforts to maintain Russian for nearly two centuries, is influenced by the languages of the host countries. 16
III. The Russian World: United or Fragmented by the Language?
The role of language is a cornerstone in the ideology of the "Russian World". Following up on ideas expressed in the early 19th century, its theoreticians (experts in the diasporas) conceived of it as a multi-ethnic, supra-national phenomenon based on shared language, culture and memories. They posit that this imagined community includes not only those who live in and outside the nation, émigrés of different waves and their descendants, but also all those who have affinities with Russia and its culture 17 . Institutions promoting the maintenance of the Russian language outside the nation are sponsored by the government. These are the foundation "Russian World", set up in 2007, and the Federal Agency for CIS Affairs, Compatriots Abroad and International Humanitarian Cooperation, Rossotrudnichestvo, founded in 2008. The attitude to these organizations in the diasporas has been ambivalent since their foundation, and suspicions grew stronger after the annexation of the Crimea in 2014. 18 Some analysts admit that the Russian-Ukrainian conflict also accounts for a drop in diasporans' trust in the Russian media. 19 One reason may be that the concept of the "Russian World" broadens the goal of consolidating ties with the diaspora by linking it to a transcendent mission of the Russian people to defend and disseminate concrete values, challenging the democratic values of the West. 20 Another is that "soft" power may easily transform into "hard" power. 21 Support and imposition of the standard version of the language, as it is maintained in Russia, is viewed by Russia's present-day elite as a geopolitical necessity 22 . The imposition of the standard goes hand in hand with purism. Many leading Russian linguists are concerned about massive borrowings from English and about slang and "low style" penetrating media discourse and movies, those very sources that have a powerful influence on the speech habits of lay people. Thus, addressing members of the International Association of Teachers of the Russian Language and Literature, its late president, Lyudmila Verbitskaya, quoted the Russian writer Alexei Tolstoy: "Treating the language carelessly is equal to sloppy, imprecise and incorrect thinking". And she added, "It should be prestigious for the entire Russian World to speak Russian correctly". 23 Linguistic purism is known to be a potent tool in the politics of inclusion and exclusion. 24 But for a country which wants to promote its values in the diaspora, this can act as a boomerang: in diasporic communities, young people in particular have strong ties with the host cultures. As heritage speakers, they are unlikely to be willing to maintain the language of their parents' mother country if it does not incorporate the realities of their own life.
IV. Russian in the diaspora: Some common features
Relying on the criteria that make it possible to classify languages as pluricentric, 25 one can observe that institutional support for the teaching of Russian has considerably diminished, and the role of standard Russian has also dropped. The format of this essay does not allow us to discuss the situation in the rest of the newly formed countries. The sociolinguistic situation of Russian in the Baltic States has been analyzed in multiple studies. 26 MAPRYAL and the Russian World Foundation pursue policies meant to boost Russian speakers' affinity with Russia and her culture, irrespective of their ethnic belonging, place of origin and domicile. These institutions perceive attempts to preserve and solidify a unified communicative space as a prerequisite of the peaceful co-existence of different ethnicities, of state construction, and of the normal functioning of social institutions. They understand that, to be effective, language policies should involve research.
The Federal Agency for the Commonwealth of Independent States' Affairs, Compatriots Living Abroad, and International Humanitarian Cooperation (Rossotrudnichestvo), which has operated under the jurisdiction of the Ministry of Foreign Affairs of the Russian Federation since 2008, published a document called "Consolidation of the Russian Language" (rs.gov.ru/en/activities/9). This paper includes diverse statistics intended to illustrate the role of Russian in culture and knowledge production. It claims that, in terms of translation, Russian occupies fourth position among the languages from which texts are translated and seventh position among those into which various literatures are translated. What is becoming increasingly important is that it is the second most often used language on the Internet. Russia and Belarus use it as the state language; in Kazakhstan, Kyrgyzstan, Tajikistan and Uzbekistan it is used for a variety of purposes and in different domains, making it the de facto official language. Many international organizations, such as the UN, the SCO, the WHO, UNESCO, the OSCE, and others, use Russian as a working language.
Today, Russian Centers of Science and Culture function in 58 countries, organizing various activities, among them teaching Russian at different levels and for different purposes. Students learn to communicate in Russian in the public sphere, when dealing with administrative issues, conducting business, doing banking and making investments. They are also taught Russian culture and literature, family traditions, and cuisine. They learn to speak about travelling, hobbies and various issues of private life. A new and fast-expanding sphere of Russian-language instruction is heritage-language teaching to the children of expats and children from mixed marriages. About 15,000 students of various categories come to study in Russia annually (russia.study). Rossotrudnichestvo supplies Russian schools and instructors abroad with teaching materials and provides methodological guidance. The document cited earlier states that "support and promotion of the Russian language abroad is one of the most important instruments of expanding international cultural-humanitarian cooperation of Russia with other countries". Russia views educational services as a way to earn money and influence her diaspora. The legal basis of the concept includes the "Measures to Implement the Foreign Policy of the Russian Federation", the Foreign Policy Concept of the Russian Federation, the concept of the long-term socio-economic development of the Russian Federation for the period until 2020, and the generally recognized principles and norms of international law and the international treaties of the Russian Federation governing the activities of federal bodies of state power in the sphere of international humanitarian ties, including education. This concept complements and develops the main policy directions of the Russian Federation in the field of international cultural and humanitarian cooperation, approved by the President of the Russian Federation on December 18, 2010, and provides support to the so-called compatriots living abroad, including protection of their rights (among them, the right to study in Russian). This makes the governments of the respective countries fear the soft power of the Russian language. Apart from fee-charging schools at the embassies, no Russian school governed by the Russian Federation has managed to comply with the regulations of any host country. Instead, numerous private Russian schools, courses and study groups have proliferated in all the countries where Russian speakers reside. Russian businesses and Russian schools are in contact with each other. The export of educational services also includes branches of Russian universities, Russian and Slavic universities in the countries of the Near Abroad, courses of language and culture organized by Rossotrudnichestvo and the Pushkin Centers, periodic grants from the Russkiy Mir Foundation, and free lectures and seminars for those who teach Russian abroad. Testing of language proficiency is charged for, as are speech-therapy consultations. The Russian authorities often donate books and textbooks produced in Russia, which is part of promoting the ideology among young learners. The positive image of Russia is meant to attract potential learners to study at Russian universities, and each country has a quota for sending its citizens to get higher education in Russia. 27 Finally, there are big immigrant enclaves in Canada, Finland, Germany, Greece, Israel, the U.S.A. and the countries of Eastern Europe.
In Finland and Israel, Russian has become the third most spoken language, and immigrant communities have created many cultural institutions supported by the state. Notably, Russian immigrants in these countries, as well as in Germany and Greece, mainly belong to the category of "returning diaspora". In Greece and Israel a large percentage emigrated from Ukraine, and in Germany from Kazakhstan. The Russian spoken by these people when they migrated deviated from the standard Russian of Russia. Contacts with the titular languages of the host countries added new features to their speech. These differences are most noticeable in prosody and lexis. Russian spoken by immigrants includes a large number of borrowings, which can be classified as follows:
• vocabulary of administration and legalese (these words have entered the ethnolects of Russian speakers residing in the new states on the territory of the FSU in which Russian does not have the status of an official language);
• cultural borrowings (names of holidays, foods, rituals, clothes, crafts, etc.; in the Russian spoken in the FSU, some of these terms were absorbed much earlier, since the language-contact situation began as early as the period of the Russian Empire);
• local toponyms;
• words expressing emotions;
• local slang.
Due to its highly developed system of affixes, newly borrowed words do not stay exoticisms for long: Russian ethnolects in the diaspora quickly "domesticate" them. Many acquire diminutive, endearing or pejorative suffixes and form derivatives. Experimental research has shown that changes in the diasporans' lexicon are reflected on the cognitive level and emerge in verbal associations that differ from those in the metropolis. 28 Russian ethnolects also differ from standard Russian in their pragmalinguistic features. They absorb local forms of politeness, often appearing as calques, and forms of address. One of the most distinctive features is the widespread abandonment of the second-person plural pronoun "Vy", used to address one person as a marker of politeness and social hierarchy.
V. Russian in Southern and Central Europe
We will now take the reader to those places in Europe which are seldom discussed in the literature devoted to the functioning of the Russian language outside the nation. This section provides a comparative analysis of immigrant groups diverse in terms of settlement patterns, length of residence, and degree of acculturation. We will look into their status in the host societies and their attitudes to the language of their home countries. We will examine the cultural institutions they have created and the role they play in the economy of their countries. Russian policy in Southern Europe used to be differentiated on the state level; some countries were treated as close allies, while others remained rather distant. 29 A variety of religious issues also played a role: historically, the Orthodox countries supported each other and displayed solidarity in days of trial. Today, after decades of turbulence, the Balkans and Greece have become an attractive tourist destination. Residents of Russia coming there for a vacation no longer opt for package tours but choose to travel independently, and the hosts' ability to speak Russian is viewed as a boon. Many post-Soviet émigrés settled in the Balkans. They choose various methods of integration and make different decisions concerning native-language maintenance in their families. Russia and Greece have had a long history of exchanging populations. Neither the émigrés of the post-revolutionary wave nor those of the post-Soviet one had to start from scratch; they could benefit from the cultural institutions created by their predecessors. Despite significant differences between the 'White' and post-Soviet immigration waves in terms of demographic features and motives for migration, their patterns of community building in Greece were quite similar.
Besides Russian citizens of various ethnic origins, the Balkans have become home to many Russian-speaking citizens of Kazakhstan and Ukraine. Settling down, the newcomers join Russian-speaking communities but also form their own. Like Russian émigrés, they open schools to facilitate language and culture maintenance in the second generation.
Exploring the experiences of Russian immigrants in Greece, we will demonstrate how Greece, a purely mono-national state accustomed to emigration but lacking experience in hosting immigrants, greeted the waves of Russian "late home-comers". Despite societal pressure to adapt and assimilate, Russian-speaking immigrants of different waves succeeded in preserving the Russian language and traditions and transferring them to new generations. Notably, Russophones did not remain on the periphery of Greek society but came to play a significant role in various domains, primarily in science and culture.
The Orthodox Slavs in Southern Europe, especially Serbs but also Montenegrins, regard Russians as a brotherly nation with a long history of helping Serbs in need. In the first half of the 18th century, when there was a significant exodus of the Orthodox population from the Ottoman lands to Austro-Hungarian Vojvodina, an important cultural import was that of teachers from Russia, the most famous of whom were Maksim Suvorov and Emanuil Kozačinski. 30 Later, the Serbian kingdom and the Montenegrin rulers enjoyed the support of the Russian Empire; this support was mostly moral, but at times also political and economic.
Yugoslavia came into existence as the Kingdom of Serbs, Croats and Slovenes after World War I. An Orthodox country using Cyrillic alongside the Roman alphabet, it welcomed the White Emigration. Alexander I of the Serbian royal house of Karađorđević favored the Russians who had helped his country and tried to make a new home for them. He sponsored the establishment of Russian schools of the old type, especially praising their success in teaching mathematics. He allowed Russians to receive military education, and he welcomed Russian cultural life, among other forms theater events. Russian professors were permitted to teach at the universities; thus, at the University of Ljubljana, six out of eighteen professors were of Russian origin. The first Russian Matica (association) was founded by A.D. Bilimovich in 1924 in Slovenia; afterwards, similar organizations appeared in Serbia and Croatia, aiming to help Russian culture thrive and to reinforce Russian national identity away from the Fatherland. The émigrés brought up their children in the spirit of Russian educational traditions. They organized lectures, concerts and theatre performances. They published newspapers, books and journals, and put together a library that received all the new publications released in the USSR. They took part in creating Yugoslav opera and ballet, and they contributed to the development of tertiary education and educational cinema. Serbs were disappointed to see that many were not enthusiastic about mastering Serbian, which they perceived as broken Russian; others adopted the local way of life, preferred Serbian schools to those created by their compatriots and welcomed their children's evolving multilingualism. Every year, Matica's members and friends went to visit the Russian chapel of St. Vladimir, erected in the Slovene Alps during World War I by Russian prisoners of war. In the 1930s, Russian youth founded the National Union of the New Generation (later NTS, Narodno-trudovoj sojuz [National Alliance of Russian Solidarists]), which was committed to fighting communism. After World War II, many displaced persons had to leave Europe with fake documents or changed their country of residence. 31 One can find biographies of a considerable number of White Russian emigrants to Yugoslavia in Wikipedia, and some of them have English versions.
While paying tribute to the role of the White Russian immigration in its culture and economy, 32 Serbia has an ambivalent attitude to contemporary Russia. Honoring Russia is sometimes difficult to combine with aspirations to join the European Union. The Russian presence is more visible on the official than on the personal level. In the last decade, some Russians have tried to establish businesses, buy property or study in Serbia (see serbialife.ru). At the same time, there have been some waves of Serbian migration to Russia. The reasons to stay in Serbia are a pleasant climate, reasonably low prices, the ease of obtaining a residence permit, a language that is quite comprehensible, the same religion, and the positive attitudes of the population towards Russianness. In the linguistic landscape, an observer notices some markers of Russian presence, such as the monument to General Wrangel in Sremski Karlovci, the White Army cemetery in Belgrade, and the Hotel Moskva, part of the Palace Rossiya built in 1908, all of them reminders of the common past. Among the new markers of Russian presence, one can notice advertisements in Russian suggesting that tourists buy furs, and the Russian Railways company, which Serbians most probably perceive as an international company.
Slovenia has received most of its recent Russian-speaking immigrants in the 21st century because it has the most humane immigration legislation in the EU. Newcomers arrive predominantly from Ukraine and Russia. The reasons for immigration may be political and economic uncertainty in the country of birth, a lack of resources, poor working conditions, or the consequences of climate change and pollution. Émigrés are attracted by the European lifestyle secured by a constitutional state. They hope for quality education and a bright future for their children and a dignified old age for themselves. They enjoy the unpolluted environment, the Alps and the sea, and reasonable housing prices. The road infrastructure is well developed, cars are inexpensive, and the police are "normal". Having left "the sixth largest part of the earth", they like living in a small country. The brochure "Dobro pozhalovat' v Sloveniju! [Welcome to Slovenia!]" and the website dialogslovenia.com entice newcomers by mentioning the climate, security, the culinary and wine culture, medical services, free schooling, Slavic roots, civic conditions for entrepreneurship, the proximity of European attractions in adjacent Italy, Austria and Croatia, and the possibility of travelling to Great Britain and the U.S.A. They admit that while living in Slovenia is comfortable, it is not easy to find a well-paid job. Many owners of capital accumulated in Russia travel to spend it in Slovenia, surrounded by compatriots. Russian speakers frequent the Centre for Russian Culture and Science (ruskicenter.si). The country offers favorable conditions for creating businesses, which entitles owners to obtain a residence permit.
Russian businessmen consider small hotels to be reasonable investments, because Slovenia has developed into an attractive tourist destination. Materials published for tourists in Russian are translations, usually made by competent speakers of both languages; yet they are not perfect. In the Russian-language brochure "Turisticheskij spravochnik" [Tourist Guide], posted at visitljubljana.com/ru/posetiteley, errors in Russian stem from interlingual homophones differing in meaning. Thus, ogovorki means "slips of the tongue" in Russian but "conditions" or "terms" in Slovenian, so the use of this word in the phrase intended to mean "booking terms" puzzles the Russian reader. Reguljarnyj osmotr, Russian for "regular inspection", is used instead of ezhednevnye tury 'daily tours'; dejatel'nosti 'business, public or occupational activities' instead of razvlechenija, aktivnost' 'leisure activities, things to do', etc.
Russian-speaking parents are invited to live and study in the country without the family having to discard their native language (ruskasola.si); the law on education guarantees the right to maintain minority and immigrant languages. There is a full-day Russian school affiliated with the Russian Embassy. Complementary education for children and adolescents (aged 3-17) is conducted in the framework of the school "Vesjolye rebjata [Joyful Children]" in Ljubljana, Novo Mesto, Koper and Radovljica in ordinary school buildings, and the grades are included in the matriculation certificate. A school pupil receives three lessons per week (105 lessons per school year), while pre-primary children receive only two lessons per week. The school offers a variety of subjects: Russian language and literature, communication, creative writing, logic, culture, music and civilization. All students are provided with free teaching materials from Russia, and all the teachers obtained their professional education in Russia. The Russian-language Olympics contest, New Year celebrations, Maslenica (Pancake Week, the winter carnival) and Pushkin's birthday are traditional festive events. In the school journal "Kljuchik" [Little Key], published by the students once a year, we read that some children come from bilingual families and speak Russian with their mothers and grandmothers. Some speak Ukrainian at home, and Russian at school and in their leisure time. One of the students writes that she was born in Russia and couldn't "simply throw out the Russian language", as half of her life and all her childhood memories are connected with it. Her friends still live there, so she intends to keep learning Russian for a long time and pledges never to forget it. Among the pupils there are adopted children continuing to learn their heritage language. Clearly, parents trust the school, and the school reciprocates by doing its best.
The international club of Slavic compatriots maintains a center for mutual help and support, "Ruslo". Its mission is to facilitate logistics, help prepare various documents, and provide legal services. The name of the center is an interlingual pun combining the Russian "river bed" with the Slovenian "canal, track"; it also plays on the sound similarity of this word with "russkii". The Russian school "Stupen'ki" ["Steps"] functions under the auspices of the centre. Visitors to the Orthodox church see announcements and greetings in Russian.
Alexandra Derganc, Professor Emerita in the Russian Language Department at Ljubljana University, gave us an interview on September 21, 2017. She was born in 1948 in Maribor. Her father was Russian, from Kireevka in the Orel region; her mother was half German and half Slovene. Her grandfather joined the White Army and ended up in Constantinople (Istanbul), where he met English industrialists who invited him to work for them in Slovenia. His wife and children joined him some years later with the help of the Red Cross. A chemical engineer by profession, Alexandra's grandfather worked at a factory, and her grandmother gave French lessons, or, as the interviewee put it in archaic Russian, davala chasy, literally 'gave hours'. That was their life. Her grandmother learned the Slovene language rather well, but her grandfather govoril vsju zhizn' kakuju-to smes' 'all his life spoke some mixture'. At home the grandparents spoke Russian, and her father went to a Russian school and later to a Russian high school in Beograd. Slovene was not his mother tongue, although both Slovene and German were spoken in his family. As a child, Alexandra could understand Russian but couldn't speak it; she studied Russian and English at the university.
Ljubljana University was founded in 1919, and R. Nachtigall, who had studied in Graz, became the first professor of Slavic languages at the new university. After WWII, many people studied Russian and it was taught at school, but in 1968 its popularity dropped, and since 1980 it has not been in the school curricula. Enrollment in Russian courses was lowest in 1979, the year when Soviet troops invaded Afghanistan. With Perestroika, interest started growing, and now about 100-150 students learn Russian. Some high schools offer Russian as a foreign language again, but most of the students are beginners. A new phenomenon in the education system is a growing number of heritage speakers, who need a different type of instruction from students learning Russian as a foreign language. At the University of Koper, Russian is taught for practical use in a variety of contexts.
Montenegro has recently become a major destination of Russian emigration. Most newcomers have invested in summer houses, and they have opened boarding schools and camps for Russian-speaking children. Some families have second homes elsewhere. Montenegro has earned a reputation as a haven for Russian dissidents, and Russians' interests go beyond peaceful dwelling near the sea. 33 Russian speakers in Montenegro maintain the website rudiaspora.me. The Adriatic College (adriaticcollege.com) is a polylingual school in Budva for children aged 3 to 17, with a curriculum compatible with European, Russian and Montenegrin standards. The most popular media resource is "Russkij vestnik -Chernogorija" (rusvestnik.me). Russian tourists form the second largest group of the country's visitors. In 2017, only tourists from neighboring Serbia accounted for more arrivals, whereas Russian tourists had more overnight stays, topping the list with 26.7% of all overnight stays in Montenegro. 34 Compared to less than 5% of tourist arrivals in Serbia, Croatia, Slovenia and Bosnia-Herzegovina, and slightly less than 10% in Bulgaria, the appeal of Montenegro to tourists from Russia is self-evident. 35 In July 2018, travelling with (blonde) children in Kotor and its surroundings, we were addressed in Russian everywhere and constantly heard Russian spoken by fellow tourists. Chatting with the owner of a chain of local hamburger restaurants, we found that the number of Russian tourists was decreasing and Turkish tourists might be the next big thing - but Montenegro, he felt, would not be attractive to Turkish tourists because of the prices.
In the linguistic landscape of Montenegro, texts in Cyrillic are primarily Russian. In addition to restaurant menus, advertisements from real estate and tourist agencies and various service businesses appear in Russian, predominantly in the tourist zones. Some of these firms belong to Russian speakers from the FSU. Many older Montenegrins speak Russian, as they learned it at school. In the speech of a tourist guide who uses Russian on an everyday basis, the foreign accent is hardly audible, yet i and y and soft and hard consonants are confused, and word stress is not always right (vísjat for visját, rimljáne for rímljane, ózernyj for ozjórnyj, korólevstvo for korolévstvo, dochkámi for dóchkami). Sometimes the alternation of sounds was wrong (postavljat for postavjat 'will deliver'), and sometimes case endings in nouns were mistaken (cena sutki for cena za sutki 'day price', za etix sto evro for za eti sto evro 'for these 100 euro', po 19-m veke for do 19-go veka 'until the 19th century', govorit' etim jazykom for govorit' na etom jazyke 'speak this language', ego nasledoval for emu nasledoval 'inherited from him'); other deviations included the absence of reflexives (proguljat' instead of proguljat'sja 'hike', nauchat for nauchatsja 'will learn', poselili for poselilis' 'settled down', torgovat' for torgovat'sja 'bargain'), constructions like uznaem, esli postroili for uznaem, postroili li 'we'll know whether they have built' and est' i takix ljudej for est' i takie ljudi 'there are such people', and Montenegrin lexis (mapa for karta 'map', velilepnyj for velikolepnyj 'beautiful').
Russians have lived in Bulgaria for more than 200 years. This period embraces church migration (Old Believers and post-revolutionary émigrés), political refugees in the late 19th century, and soldiers who remained after the country's liberation from the Ottoman Empire. White émigrés in the 1920s-1940s included General Wrangel's army of tens of thousands of soldiers. There were also Bulgarian returnees with their Russian families after WWII. Every big city has its own history of relationships with Russia and Russians. Bulgarian-Soviet friendship and diverse contacts led to numerous mixed marriages, and the Union of the Soviet Citizens in Bulgaria was founded.
In the first half of the 20th century, men dominated the immigration influx, but in the second half women outnumbered men. After the collapse of the Soviet Union, the gender composition of the immigration waves became balanced. Many Russian immigrants contributed to the development of Bulgarian science and technology. Russian schools operated here before and after WWII. Russian ballet, theater, painting, education, medicine, and journalism had a significant impact upon the Bulgarian way of life. Russian cemeteries, archives, museums and legations are places where the memory of those people is preserved. Twenty thousand Bulgarians studied in Soviet tertiary educational institutions, and about two thousand more have studied in Russia since 1992; these numbers do not include alumni of the military schools. 36 These young professionals returned to Bulgaria, often together with their Russian-speaking family members.
The Russian-speaking diaspora today combines members or descendants of all the immigration waves. In the 1990s, new organizations came into existence; some of them were, and others still are, involved in publishing periodicals, among them the Russian club Raduga. The new amendment of the Law on Foreigners 38 stipulates that young volunteers coming to work in Bulgaria may receive a residence permit for one year. Researchers involved in projects at research organizations of the European Union may live in Bulgaria with their families; students and seasonal workers are also granted a special status.
A lot of Russians buy a second home in Bulgaria; the peak of these acquisitions was less than ten years ago. 39 Among those who choose Bulgaria as their permanent domicile we find people of different age groups and different incomes. Seniors form a significant group; many of them own businesses in Bulgaria or in Russia and invest in the Bulgarian economy. One district of Pomorie is called "Little Moscow", and a Russian school opened there fairly recently. The Orthodox religion, historical ties, membership in the European Union, an amiable climate, reasonable prices, the possibility of maintaining Russian as a home language for children (see rurech.bg, shkolaburgas.bg), and the proximity of the languages and cultures of the mother and host countries help newcomers to integrate. Mixed marriages were common in socialist times, and this trend in family-making continues, which is exceptional for a country with one of the lowest levels of mixed marriages in Europe. Most Bulgarians approve of Russian immigration. Festivals, concerts and exhibitions organized by the Russians are frequented by the hosts, since many Bulgarians are still proficient in Russian. During the entire socialist period, from 1944, Russian was studied as a mandatory school subject. Today many universities still have Russian departments, and the linguistic journal "Bolgarskaja rusistika" [Bulgarian Russistics] is published regularly (bgrusistika.com). Russians in Bulgaria help each other cope with legal, psychological and economic problems. 40
Those who have lived there for a while mention that their Russian is influenced by Bulgarian. It starts with talking about the documents needed for domicile in Bulgaria: it is easier to adapt legalese to Russian morphology than to translate it into Russian. Names of foods, in particular the vegetables and fruit forming a substantial part of the local diet, are also quickly integrated into speech. Names of shops are borrowed as well: sladkarnica replaces konditerskaja [confectionery], xlebarnica is used for bulochnaja [bakery], and mesarnica for mjasnoj magazin [butchery]. An interesting phenomenon is the use of Bulgarian suffixes and stresses in common lexis: prijatelka for prijatelnica 'female friend'. 41 Some use Latin-based lexis in Russian in the same way as they use it in Bulgarian: lokacija for mestopolozhenie [location], vakacija for kanikuly [vacation], restrikcija for ogranichenie [restriction]. Notably, in Russian these words do not belong to the everyday vocabulary. In the Russian language of those who grew up bilingual, the influence of the language of the host society is deeper. 42 Greece stands out among other immigrant-receiving countries due to its complex migratory relations with Russia. These relations have an intricate history; they are multifaceted and multilayered. Speaking of mass migration, we can name as many as four waves in the twentieth century alone: twice Greeks moved to Russia, and twice Russians (or rather Russian speakers) migrated to Greece. It all began after the fall of Constantinople, the capital of the Byzantine Empire, to the Ottomans. In short, Russia and Greece have had a long history of exchanging populations. Neither the post-revolution nor the post-Soviet migrants had to start community-building from scratch, although the differences between these two "emigration tsunamis" were striking. Despite significant differences between the "White" and post-Soviet immigration waves in terms of demographic features and motives for migration, their patterns of community building in Greece were quite similar. The first thing natives of the Russian Empire and, more than seventy years later, children of the Soviet empire did was to create interest groups and voluntary associations, launch schools and establish newspapers-all in an attempt not to get lost in an alien environment but to "retrieve" space where they would be able to create and cultivate their mini-homeland, just like their ancestors, the Greeks who once escaped to Russia, did and whose experience of emigration was well known to their descendants. 43 Greece is a largely mono-ethnic state, and Greeks, accustomed to migration but lacking the experience of hosting immigrants, greeted the waves of Russian "late home-comers". Despite the societal pressure to adapt and assimilate, Russian-speaking immigrants of different waves strove to maintain their language and communities. Even those who are not proficient in Greek use abundant Greek communicative tags (e.g., ohi 'no', endaksi 'OK', ela 'let's', ti kanis? 'how are you?', siga-siga 'little-by-little', and congratulations). The Russian of second-generation immigrants has absorbed Greek lexis and syntax more extensively than that of their parents. 44 Greeks usually have a positive attitude towards Russia, Russians and the Russian government. 45 This creates favorable conditions and motivation for both groups to learn the language and traditions of each other.
In the countries known today as the Czech Republic and Slovakia, the first 'White' wave of Russian emigration left a huge imprint on the culture between the two wars (cf. the Prague Linguistic Circle). Returning White and Red Czechs (the writer Jaroslav Hašek among them) built bridges between the cultures too. The contribution of these people was forgotten after 1945. In socialist times, ties between Czechoslovakia and the USSR were both official and informal, especially among the intelligentsia. The Soviet invasion in 1968 destroyed many relationships, while others-between the dissidents and the radical communists-grew. After perestroika, and especially in the 21st century, the new waves of migration have been multi-ethnic and multicultural. One can find people from different corners of the FSU; many are Ukrainians, but they join the group of Russian speakers. These newcomers are education-, start-up- and business-oriented. 46 The recent diplomatic wars between the Czech Republic and the Russian Federation demonstrate that, among the other advantages of being in Central Europe, this location is favorable for espionage. Nowadays, in both countries, Russian speakers form communities and have their clubs, schools, stores, websites, etc. Prague is one of the main tourist destinations of Russian speakers in Europe. In 2015, the Russian language received minority status in Slovakia.
Teaching Russian as a foreign language started after WWII and covered the whole country. Nowadays, tens of thousands are still learning it, and the quality of research remains high. 47 Chalupa 48 has reflected on the practical use of the Russian language in the past and today, and on the motivation of Czechs to learn it. He has also hypothesized about its future. Numerous comments published in response to his article reveal that the matter is of interest to the public. Eva Kollarova, a famous Slovak specialist in Russian, edits an influential journal, "Russkij jazyk v tsentre Evropy" [Russian Language in the Centre of Europe], providing discussion space for teachers and students of Russian; more journals on Slavistics are published. Many errors in the Russian speech of Slovaks are caused by differences in government, gender and number, by differences in the meanings of cognates, by paronymic contaminations, etc. These deviations from the metropolitan standard are the sources for the emergence of a Russian ethnolect in Slovakia. 49 Among research projects dedicated to Slavic language contacts, a study carried out in Slovakia by Tsifrak 50 is of special interest. She discusses a new variant of the Russian language as used by Russian émigrés. Russian and Slovak are genetically related, so it is not so difficult for Russian speakers to understand Slovak, and as time goes on, the two languages merge into one system in the speakers' minds. Tsifrak notes that the dwellers of the post-Soviet space are accustomed to mixing cultures and languages, but habitual code-mixing may produce an unexpected effect, sometimes changing the sense of what was intended and sometimes creating a comic effect. Thus, ovocie in Slovak is close to the Russian овощи [vegetables], but it denotes fruit, while vegetables are zelenina, perceived by Russian speakers as 'edible greenery'. Words and phrases frequently used at work, in shops, restaurants and other public places form a linguistic cocktail in the heads of bilinguals who do not acquire the language of the host country in the classroom but in a situation of uncontrolled language immersion. Such expressions are well remembered and form the basis of Russian macaronic expressions: наступить на автобус (nastupit na autobus in Slovak) instead of сесть в автобус, or дам себе чай (Slovak dam si čaj) instead of выпью чая. In their new language, immigrants often find words attractive due to their emotional depth or succinctness, like obdivovat, which simultaneously stands for wonder, appreciate and marvel. They are amused to discover words that have phonetic similarity with familiar Russian ones but a different meaning. These pairs confirm Tsifrak's observations: rodina is not 'homeland' as in Russian but 'family'. Pohoda, with its stress on the 1st syllable, is only slightly different in form from the Russian pogoda, but it means 'super, o.k.', while the Russian word means 'weather'. Zakusky is a sort of dessert, not an appetizer as in Russian. Such interlingual quasi-homophones are a source of amusement for language learners and form an essential part of émigré folklore. Runet still gives many links to these pairs. 51 Notably, as émigrés improve their proficiency in the language of the host country, these words stop being funny and lose associations with their Russian counterparts.
Conclusion
Research on Russian as a pluricentric language is still in its infancy.
Russian in the nation has changed dramatically since the disintegration of the Soviet Union. Under the influence of language policies favoring the hegemony of titular languages and limiting the functions of Russian in the public sphere, ethnolects on the territory of the FSU also underwent changes, absorbing lexis that was not needed in Soviet times. New ethnolects began to develop in the countries where big communities of ex-Soviets settled down; this is not always welcomed by the majority society. Attitudes towards learning Russian depend mostly on Russia's politics, economy, tourist flows and some other socio-economic factors. Demand for proficient Russian speakers is dynamic and may come up unexpectedly. One proof of this is that universities in the predominantly Russian-speaking cities of Narva (Estonia) and Daugavpils (Latvia) have recently found a new source of income: teaching Russian to American service personnel. Due to mass emigration from the countries of the FSU, the last three decades have seen the emergence of big groups of heritage speakers of Russian. Some of these speakers can barely use the language and are limited to everyday family conversations, but others, attending bilingual kindergartens and complementary afternoon schools created by the immigrants of the last waves, are engaged in various educational activities which lead to the acquisition of academic literacy skills in Russian. Although in this respect they fail to be on a par with their peers in the metropolis, Russian schools greatly expand the linguistic repertoire of young diasporans and help them develop some metalinguistic knowledge of the Russian language. Together with the educational and cultural institutions and the conventional and electronic media created by émigrés, the development of tourism and the transnational connections of Russian speakers facilitate Russian-language maintenance in the diaspora. Yet the deviations of regiolects from the language of the metropolis are varied and are becoming stronger with the years. Documentation of the new diasporic regiolects is only beginning and is an important task for linguists.
The driving force for learning and maintaining Russian among people living outside the nation is the commodification of the language. While many first-generation immigrants have retained strong symbolic ties with the language and culture of the mother country, these ties are becoming much less significant for the second generation. This is equally true for Russian speakers in the FSU brought up in the post-Soviet decades.
The transnational ties of Russian speakers are another factor. They are multi-directional and multipurpose, ranging from business and professional connections to friendships and family relationships. Thanks to these ties, many businesses flourish, and scientific and social projects are implemented. Abroad, many speakers of Slavic and Baltic languages flock together with Russian speakers, feeling closer to them than to the host society.
Orientation to the norm as it exists and is imposed by Russia has weakened. In the absence of codification, deviations in the diaspora have increased. The norm in Russia has also eroded: some liberation of the language has occurred, and new linguistic developments sometimes originate in the diaspora and only later reach Russia. Little has yet been done to document local deviations from standard Russian. Material should be collected from oral interviews, participant observation and ethnographic diaries, as well as from analysis of the linguistic landscape and of local conventional and electronic media in Russian.
Delayed nasoseptal flap reuse in patients with revision endoscopic endonasal anterior skull base surgery
Key Clinical Message
The reuse of the nasoseptal flap represents a favorable option for skull base reconstruction in revision endoscopic anterior skull base surgery. This study demonstrated that a detached nasoseptal flap can remain viable for several days even if not immediately reattached.
A nasoseptal flap (NSF) was used for the initial skull base reconstruction. This patient recovered well after surgery, and routine office-based follow-up was performed. One year later, the patient complained of a change in visual acuity; however, no definite recurrence was found on magnetic resonance imaging (MRI). The following year (two years postoperatively), the patient suffered from intermittent headaches, and several recurrent lesions were found on MRI. Therefore, a revision EEA for recurrent tumor removal was planned and performed 29 months after the initial surgery.
At the beginning of the surgery, a rhinologist performed the takedown of the previously applied NSF from the edge of the flap margin to the pedicle, positioned it in the choana (Figure 1), and further drilled the previous operation site to create a surgical corridor. Subsequently, the neurosurgeon tried to access the tumor; however, the anterior communicating artery ruptured during bone work and massive bleeding was encountered (Figure 2). Four-vessel angiography and coil embolization were performed by an endovascular neurosurgeon (Figure 3), and the rhinologist then attempted to reconstruct the skull base defect; however, the reconstruction failed because of massive bleeding despite the coil embolization. Nasopore® (Stryker) was applied after the first surgery and removed in the operating room at the initiation of the second surgery. Due to the presence of nasal packing and the absence of separation between the nasal cavity and the skull base, we administered antibiotics, including ceftriaxone, vancomycin, and metronidazole. Four days later, the rhinologist performed a reoperation for skull base reconstruction, and the defect was reconstructed layer by layer using Spongostan® (Ferrosan A/S), Hemopatch® (Baxter Deutschland GmbH), Surgicel® (Ethicon SARL), and the previously used NSF that had been positioned in the choana. After reconstruction, the patient did not show any CSF leakage, and the surgical site healed well. The patient underwent radiotherapy from postoperative Day 69 to Day 112, receiving a total dose of 5940 cGy. The NSF remained viable until 9 months after surgery, despite radiotherapy (Figure 4).
DISCUSSION
The invention of the neurovascular pedicled NSF allows surgeons to perform aggressive endoscopic skull base surgery. It has also reduced morbidity for patients with skull base tumors, because operations that would previously have been performed through an open approach can be completed endoscopically. The use of the NSF dramatically reduced the postoperative CSF leakage rate compared with traditional techniques, and Hadad et al. reported that postoperative CSF leakage was less than 5%. 4 Furthermore, the survival rate of the NSF is quite high: the flap necrosis rate was reported to be 0%-1.3% in a systematic review. 7 However, the survival rate is significantly influenced by several factors, especially in patients with diabetes mellitus, cardiovascular problems, advanced age, postoperative infections, and prior radiotherapy in the paranasal region. 8 The reuse of an NSF in revision endoscopic skull base surgery was previously reported by Zanation et al., who found that CSF leakage was prevented in 87.5% of patients who underwent revision surgery. 9 In that study, however, the NSF was reattached during the same operation as the takedown. To date, no case of the delayed reuse of an NSF several days after takedown has been reported. Our case suggests that the NSF can survive even if left unattached for several days, provided its pedicle is well maintained without injury. Additionally, in our case, the reattached NSF survived despite postoperative radiotherapy. Radiation-induced vascular damage has been demonstrated previously, so flap viability might also be affected by radiotherapy. Therefore, we regard this case as clinically significant.
In this study, we did not assess the viability of the NSF following surgery. Several techniques have been described for this purpose. [11][12] Among these techniques, our skull base center typically employs immediate postoperative MRI to assess tumor removal status and NSF viability. However, in this case, we were unable to perform an immediate postoperative MRI evaluation due to the patient's postoperative care in the intensive care unit.
Even if the NSF is positioned below the choana for several days, it may not be safe from the risk of infection. However, this risk can be reduced with copious povidone-iodine and normal saline irrigation. Nevertheless, using the NSF from the other side could be an alternative; however, considering the possibility of cartilage necrosis caused by using both NSFs, and the accompanying saddle nose deformity, the reuse of the original NSF is a good option to reduce the patient's morbidity.
Figure 1. Takedown of the previously applied nasoseptal flap (NSF) and its positioning in the choana.
Figure 2. Massive bleeding was encountered during surgery.
Figure 3. Anterior communicating artery pseudoaneurysm found on four-vessel angiography (A) and coil embolization of the pseudoaneurysm (B).
Figure 4. Nasoseptal flap (NSF) sustained at 3, 6, and 9 months after surgery.
Utilisation of Reproductive Health Services among Adolescents in Ghana: Analysis of the 2007 and 2017 Ghana Maternal Health Surveys
Early pubertal development induces early sexual activity among adolescents. In Ghana, despite high sexual activity among adolescents, sexual and reproductive health (SRH) services are underutilised, primarily due to SRH stigma and a lack of SRH knowledge and information. This study examined the use of SRH services among adolescents aged 15–19 years in Ghana over a ten-year period. The study utilised data from the 2007 and 2017 Ghana Maternal Health Surveys (GMHSs). Responses from 2056 and 4909 adolescent females captured in the 2007 and 2017 GMHSs, respectively, were used. The results showed declining utilisation of SRH services among adolescents, from 28.3% in 2007 to 22.5% in 2017. The odds of using family planning among sexually active adolescents increased from 2007 (AOR = 0.32, 95% CI: 0.135–0.77, p < 0.001) to 2017 (AOR = 68.62, 95% CI: 36.104–130.404, p < 0.001). With increasing age at first sex, adolescents were less likely to use a family planning method in 2007 (AOR = 0.94, 95% CI: 0.89–0.99, p < 0.001), but this improved in 2017 (AOR = 1.26, 95% CI: 1.220–1.293, p < 0.001). Despite this, knowledge of sources for family planning was found to predict lower utilisation in both 2007 (AOR = 0.15, 95% CI: 0.081–0.283, p < 0.0001) and 2017 (AOR = 0.206, 95% CI: 0.099–0.426, p < 0.001). The findings show that even though knowledge of family planning methods predicted low utilisation, knowledge of sources, age at first sex, and educational level positively predicted the utilisation of SRH services from 2007 to 2017. Opportunities for enhancing both the clinical environment and health provider attitudes exist and should be explored for improving SRH outcomes among sexually active adolescents in Ghana.
Introduction
The world is experiencing the largest cohort of adolescents in history [1], with a significant proportion of the global population being between the ages of 10 and 19 years [2,3]. Despite the awareness that maintaining sexual health in adolescence contributes essentially to reproductive health and well-being in later life [4,5], challenges remain in ensuring access to it. Socio-cultural and gender norms continue to affect both sexes as they navigate their transition to adulthood. Currently, 88% of the 1.2 billion adolescents worldwide live in developing countries, where universal access to SRH is yet to be realised; these adolescents face a higher unmet need for SRH services, as well as a higher burden of unplanned pregnancies and sexually transmitted infections (STIs), than their peers in the developed world [3,6].
Many adolescents engaging in sex for the first time hardly use any form of protection [6,[16][17][18] due to casual, impulsive, and unplanned sexual activity among them [19,20].
Adolescents who had an early sexual debut are likely to have multiple partners, thereby increasing their risk of contracting STIs and the risk of unplanned pregnancy [1].
The need for adequate attention towards adolescents' SRH remains critical. Efforts to attain quality SRH are constrained by inadequate access and inequitable distribution of SRH services, resulting in the poor utilisation of SRH services among young people in sub-Saharan African countries. Prior studies show that adolescents across the world face barriers such as long waiting hours, negative provider attitudes, unnecessary restrictions, lack of privacy and confidentiality, socio-cultural norms, and stigma when accessing health services [14,21]. There remains a need for relevant data to understand and substantiate the needed interventions [22][23][24].
In Ghana, adolescent health is a priority health issue [25,26]. Adolescent- and youth-friendly health services have been identified as a strategy for improving adolescent access to and utilisation of SRH services in the country. Despite their suitability and the progress towards improved sexual and reproductive health and rights (SRHR), outcomes among adolescents are not as expected [27]. The Government of Ghana, through the Ministry of Health and related departments and agencies, has developed several adolescent- and youth-related policy documents and standards, such as the Adolescent Reproductive Health Policy, National HIV/AIDS and STIs Policy, National Health Policy, the Children's Act (1998), and the Juvenile Justice Act (2003), among others, to support adolescent health. In addition, Ghana ratified several international conventions that promote and protect the well-being of adolescents and youth, such as the United Nations Convention on the Rights of the Child, while the Ghana Health Service (GHS) promotes youth-friendly services. These initiatives, however well planned, have also not yielded the desired outcome [15]: Ghanaian adolescents still underutilise SRH services, mainly due to stigma around premarital sex [5,[28][29][30][31], while over 750,000 adolescents become pregnant annually [31].
The challenges of adolescent inaccessibility of SRH services in Ghana have been attributed to barriers such as the cost of services, lack of awareness about where to obtain contraceptives and STI treatment, misconceptions about side effects of contraceptives, lack of confidentiality and privacy [32,33], and negative provider attitudes [14,15,[26][27][28][29][30][31]. Currently, urbanisation, changes in social norms, and shifting trends in marriage and sexual activity reflect the world in which adolescents are growing up [34].
Although there have been many studies on adolescent SRH in Ghana, very few have focused on trends in the utilisation of SRH services among female adolescents nationwide. This study utilises secondary data collected from nationally representative cross-sectional surveys to examine the use of sexual and reproductive health services by adolescents (15-19 years) in Ghana from 2007 to 2017. The findings seek to inform efforts by the government of Ghana to improve and expand access to adolescent SRH services and to respond appropriately to common and new barriers to attaining optimum SRH.
Study Type
The study was a secondary data analysis of the 2007 and 2017 Ghana Maternal Health Surveys (GMHSs). These were nationally representative surveys among women of reproductive age (15-49 years), designed to produce representative estimates of maternal mortality indicators for the country and for each of the three geographical zones, namely the Coastal, Middlebelt, and Northern sectors. The Ghana Maternal Health Survey (GMHS) is a household-based survey that utilises a two-stage sample design. In the 2007 Maternal Health Survey, the first stage involved the selection of samples from a master sampling frame constructed from Enumeration Areas (EAs) of the Ghana Population and Housing Census 2000. The 2017 Maternal Health Survey sampling frame was likewise based on the Enumeration Areas of the 2010 Ghana Population and Housing Census. The second stage involved the systematic sampling of the households listed in each cluster to ensure an adequate number of completed individual interviews was obtained [35]. The Survey collected data through an interviewer-administered structured questionnaire based on the DHS programme model. Three questionnaires were utilised for the GMHS, as follows: the household, women's, and verbal autopsy questionnaires. All women aged 15-49 years were eligible to be interviewed from each selected household. For this study, responses from the women's questionnaire were utilised.
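To make the two-stage logic concrete, the sketch below simulates the design in Python: clusters (EAs) are drawn first, then households are sampled systematically within each selected cluster. This is a minimal illustration only; the GMHS additionally uses probability-proportional-to-size selection and survey weights, which are omitted here, and all variable names are hypothetical.

```python
import random

def two_stage_sample(enumeration_areas, n_clusters, hh_per_cluster, seed=1):
    """Stage 1: draw EAs (clusters); stage 2: systematic sample of the
    household listing within each selected EA."""
    rng = random.Random(seed)
    selected = rng.sample(enumeration_areas, n_clusters)      # stage 1
    households = []
    for ea in selected:                                       # stage 2
        listing = ea["households"]
        step = max(len(listing) // hh_per_cluster, 1)         # sampling interval
        start = rng.randrange(step)                           # random start
        households.extend(listing[start::step][:hh_per_cluster])
    return households
```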
Data Extraction
For this study, only female adolescents aged 15-19 years were included in the analysis. The Ghana Maternal Health Survey data for 2007 had 10,370 respondents aged 15-49 years, while that of 2017 had 25,062 respondents. Based on the criterion of adolescent females, the number of respondents selected was 2056 for 2007 and 4909 for 2017.
Inclusion Criteria
All female adolescents from the age of 15 to 19 years.
Exclusion Criteria
All female adolescents aged 15-19 years with incomplete responses were excluded from the analysis.
Measures
The outcome variable was the utilisation of SRH services, defined as the use of family planning and abortion services.
Utilisation of family planning was a direct yes/no question, "Are you currently using any method?", coded as Yes = 1 and No = 0, respectively.
Family planning methods were classified as modern or traditional methods and recoded. Modern methods included pills, injectables, implants, male condoms, female condoms, intrauterine devices (IUDs), and emergency contraception. Traditional methods included the withdrawal method, the rhythm method, and abstinence.
Utilisation of abortion services was measured by whether respondents had used safe or unsafe facilities, in response to the question "What was the source of the last step to end pregnancy?". This was recorded as facility type. Based on the criteria set by the Ghana Comprehensive Abortion Care Services Protocol (2012), all hospitals, clinics, and health centres, both public and private, were classified as safe, while private pharmacies, chemical and drug stores, and respondents' homes were classified as unsafe facilities. Utilisation was thus recorded as a binary variable.
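A minimal sketch of this binary recode is shown below; the facility labels are illustrative stand-ins, not the survey's actual response codes.

```python
SAFE_KEYWORDS = ("hospital", "clinic", "health centre")   # public or private

def facility_is_safe(source: str) -> int:
    """1 = safe facility, 0 = unsafe (pharmacy, chemical/drug store, home),
    per the Ghana Comprehensive Abortion Care Services Protocol (2012)."""
    s = source.strip().lower()
    return int(any(keyword in s for keyword in SAFE_KEYWORDS))

print(facility_is_safe("Private clinic"), facility_is_safe("Chemical store"))  # 1 0
```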
The provider of the last step to end pregnancy was recorded as provider type. Providers such as doctors, midwives, and nurses were classified as trained; all others, such as community health workers, pharmacists, chemical sellers, traditional practitioners, relatives, and friends, were classified as untrained providers. These classifications were all based on the Ghana Comprehensive Abortion Care Standards and Protocols (2012). Independent variables were age, knowledge level, education, sexual activity, and age at first sex.
Knowledge level was a composite variable derived from the responses (yes or no) to the following four questions: "Have you heard of a family planning method?", "Do you know a source of family planning?", "Have you ever heard about abortion?", and "Do you know where to get an abortion?".
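As a sketch, the composite can be built by summing the four yes/no items. The paper does not state the exact scoring rule, so the 0-4 sum and the variable names below are assumptions.

```python
KNOWLEDGE_ITEMS = (
    "heard_fp_method",        # "Have you heard of a family planning method?"
    "knows_fp_source",        # "Do you know a source of family planning?"
    "heard_abortion",         # "Have you ever heard about abortion?"
    "knows_abortion_source",  # "Do you know where to get an abortion?"
)

def knowledge_level(responses: dict) -> int:
    """Sum the yes (=1) responses across the four items into a 0-4 score."""
    return sum(int(responses.get(item, 0)) for item in KNOWLEDGE_ITEMS)
```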
Sexual activity was binary (yes = 1 or no = 0), following the question "Have you ever had sex?". Age at first sex was a follow-up question to sexual activity, asking the respondent, "At what age did you first engage in sex?" Responses were captured in single years of age.
Data Analysis
Descriptive statistics were used to describe the characteristics of the study participants and are presented as percentages in frequency tables. Bivariate analysis was carried out using Pearson's chi-squared test to assess the relationship between the independent variables and the utilisation of SRH services. Multivariate logistic regression was used to examine the strength of these relationships. Odds ratios and the associated 95% confidence intervals were used to assess the strength of association, and a p-value of 0.05 was used to determine statistical significance. All statistical analyses were conducted using Stata SE version 15.
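The analysis was run in Stata SE 15; the Python sketch below reproduces the same computation on synthetic data to show where the reported quantities come from: logistic coefficients are exponentiated to give adjusted odds ratios (AOR = e^β) with 95% confidence intervals. All column names are assumed.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({                      # synthetic stand-in for the survey data
    "uses_fp": rng.binomial(1, 0.25, 500),
    "ever_had_sex": rng.binomial(1, 0.5, 500),
    "age_first_sex": rng.integers(13, 20, 500),
    "educ_level": rng.integers(0, 4, 500),
})

fit = smf.logit("uses_fp ~ ever_had_sex + age_first_sex + C(educ_level)", data=df).fit()
ci = fit.conf_int()
table = pd.DataFrame({
    "AOR": np.exp(fit.params),           # adjusted odds ratio = exp(coefficient)
    "CI_low": np.exp(ci[0]),
    "CI_high": np.exp(ci[1]),
    "p": fit.pvalues,
})
print(table.round(3))
```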
Background Characteristics of Respondents
Table 1 presents the background characteristics of the respondents. The mean age of the respondents was 16.8 ± 1.4 years. More than half (52.1%) of respondents in 2007 were from urban areas, while 47.9% were rural dwellers. However, in 2017, more than half (54.1%) of respondents were from rural areas. In 2007, 90% of the respondents were in-school adolescents, with about three-quarters (75.7%) in basic school (Primary or Junior High) and 14.7% in Senior High School; the remaining 10.0% had no formal education. Out-of-school respondents declined from 10% in 2007 to 5.5% in 2017, and respondents who were in either Senior High or Technical School increased from 14.7% to 21.7%.
Age at first sex was estimated among the respondents. In 2017, the percentage of respondents who first had sex at thirteen years or below was 10.4%, an increase from 9.6% in 2007.
Adolescents' Knowledge about Sexual and Reproductive Health
In 2007, about 44
Utilisation of Sexual and Reproductive Health Services
In 2007, the proportion of adolescents using a family planning method was 28.3%, which declined to 22.5% in 2017. Approximately 43.1% patronised the services of a trained provider in 2007, whilst 2017 recorded a decline to 35.2%. In 2007, 43.1% of adolescents accessed services in safe facilities, compared to 32.8% in 2017 (Table 3).
Background Characteristics and Family Planning
In the bivariate analysis, the variables ever had sex and age at first sex both showed a significant relationship with the utilisation of SRH services in 2007 (p < 0.05). In 2017, the significant variables were age, highest educational level, and ever had sex (p < 0.05) (Table 4).
The study also found that knowledge about family planning methods, knowledge of the source of family planning, hearing about abortion, and knowing where to obtain an abortion had significant relationships with the utilisation of family planning methods in 2017; however, none of these variables were found to have a significant relationship with the utilisation of family planning in 2007 (Table 5).
Independent Variables and Abortion Utilisation
The bivariate analysis did not show a significant relationship between the independent variables and abortion utilisation in the 2007 and 2017 studies (Table 6).
Factors Associated with Family Planning Utilisation
In 2007, sexually active adolescents were 68% less likely to use family planning (AOR = 0.32, 95% CI: 0.135–0.77, p < 0.001) compared to adolescents who were not sexually active. This changed to a roughly 69-fold increase in the odds of family planning method use in 2017 (AOR = 68.62, 95% CI: 36.104–130.404, p < 0.001). With increasing age at first sex, adolescents were less likely to use a family planning method in 2007 (AOR = 0.94, 95% CI: 0.89–0.99, p < 0.001); this changed to an increase in the odds of using a method with increasing age at first sex in 2017 (AOR = 1.26, 95% CI: 1.220–1.293, p < 0.001). Additionally, compared to adolescents whose highest level of education was primary level, senior-secondary-educated adolescents were more likely to use a family planning method (AOR = 1.822, 95% CI: 1.257–2.643, p = 0.002) (Table 7). Adjusting for ever hearing of abortion, knowing a source for abortion, and knowledge of the legal status of abortion in Ghana, we found that adolescents who knew about sources of family planning were still 85% less likely to use family planning methods (AOR = 0.15, 95% CI: 0.081–0.283, p < 0.001) in 2007, compared to those who did not know about sources of family planning methods. This slightly improved in 2017 (AOR = 0.206, 95% CI: 0.099–0.426, p < 0.001) (Table 8).
Discussions
The findings of this study show that the utilisation of family planning services remains challenging, as the proportion of adolescents using FP declined from 2007 to 2017, despite the interventions and efforts undertaken by programs in the country. This low utilisation has frequently been reported among adolescents in Ghana [8,[36][37][38]. The study also found that, over the period under review, adolescents who knew a source of FP were less likely to have utilised FP services, even though 2017 showed a slight improvement compared to 2007. This finding critically highlights the little progress made in responding to the underutilisation of SRH services among adolescents [38]. Challenges in increasing the utilisation of SRH services remain within the Ghanaian healthcare system, and the negative health implications continue to be recorded. It is critical to identify novel opportunities to encourage utilisation, in order to respond appropriately to this challenge in Ghana [25,39].
This study revealed that adolescents' knowledge about family planning services was low in 2007 but had increased by 2017. These findings, though consistent with findings from other SSA countries and multi-country studies [16,22,40], could be a reflection of adolescents' negative attitudes towards SRH due to cultural barriers, lack of confidentiality at health facilities, fear of side effects, inadequate peer support, and poor provider attitudes, as is evident from both global and local studies [2,9,19,41]. The lack of comprehensive information on reproductive health issues and services makes adolescent girls vulnerable to unsafe reproductive health behaviour [34,42]. It is plausible that the improvements identified among Ghanaian adolescents in 2017 can be attributed to the interventions towards improved adolescent SRH services [15,25].
This study, however, found high knowledge of abortion services among respondents: the proportion of respondents with knowledge about abortion services increased from 2007 to 90.4% in 2017. A similar pattern was observed for knowledge of a source of abortion services, indicating that adolescents have increased access to information on this issue [34]. Although this study did not assess adolescents' information sources, it is important to explore these sources and assess their validity, since many adolescents are unable to seek accurate information from the right source [8,14].
Regarding abortion, 43.1% of respondents in 2007 patronised safe facilities, yet in 2017 this reduced to 32.8%. Similarly, utilisation of the services of a trained professional reduced from 43.1% in 2007 to 35.2% in 2017, a disturbing trend suggestive of the increased use of untrained workers. The increase in unsafe abortion, despite the increased awareness and campaigns on safe abortion, could be due to barriers such as the cost of services, proximity to services, and lack of confidentiality and privacy [2,[26][27][28][29]34,43,44]. Most adolescents feel uncomfortable accessing the various components of SRH services in facilities [13,32,40,45].
The recorded increase in the odds of modern method use from 2007 to 2017 is noteworthy. Many studies identify condoms as the preferred modern method for adolescents [8,11,[36][37][38][46][47][48]. It is plausible that among adolescents who are exposed to information on family planning and contraceptive use, many opt for the condom because it is cheaper and easily accessible and serves as dual protection [8,10,38], compared to other methods such as injectables, implants, and intrauterine devices (IUDs). The lower utilisation of methods that serve only a pregnancy-prevention role, but require visits to health facilities with their associated negative health worker attitudes, is understandable: such requirements inherently act as a deterrent to adolescents in need [19,28,37].
The multivariate analysis showed a significant association between age at first sex and the utilisation of family planning services. This finding, however, is inconsistent with findings from other studies, which revealed that adolescents who initiate sex early are not likely to use family planning methods, exposing them to the risk of pregnancy [8,40]. It is plausible that the shift in attitude towards accepting the use of family planning methods is a product of the multiple interventions introduced to improve utilisation in Ghana [26]. Evidence suggests that older adolescents are likely to be sexually active and, therefore, more likely to utilise SRH services [24,28,29]. With knowledge as a precursor to action, they may have more access to SRH services.
This study, though nationally representative, is not without limitations. First, as the study relied on secondary data, the analysis was limited to the variables that had been collected. Second, the study is cross-sectional, and hence causal inferences cannot be made. Finally, the study relied on self-reported measures from adolescents, which are subject to recall bias.
Conclusions
This study identified the factors attributable to the decline in the utilisation of SRH services over the period of the 2007 and 2017 Ghana Maternal Health Surveys. The results highlight multiple important issues, such as the declining use of family planning and contraceptive methods amidst increased knowledge of family planning methods, sources of family planning methods, abortion services, and sources of abortion among adolescents. Our results highlight the important role that socio-cultural concerns play in influencing adolescent attitudes toward the utilisation of SRH services. The underutilisation of these services may occur even while knowledge of the services is high.
Table 1. Background characteristics of adolescents.
Table 2. Percentage of adolescents who demonstrated knowledge about SRH.
Table 3. Utilisation of SRH by adolescents.
Table 4. Background characteristics and utilisation of family planning.
Table 5. Knowledge of adolescents about SRH and its utilisation.
Table 6. Factors influencing utilisation of abortion by adolescents.
Table 7. Association between background characteristics and utilisation of family planning method.
Table 8. Association between knowledge and utilisation of SRH.
The effect of cigarette type on anthropometrics and weight of PLWH
Introduction
Relying on evidence that smokers tend to weigh less than nonsmokers, tobacco companies have used several strategies to take advantage of, and manipulate, people's concern with weight. 1,2 Their goal became the addition of active agents to target "non-smokers who are more concerned with losing weight than with contracting respiratory or blood circulatory illnesses.
[…] (Bates no. 2056159412). 2 While initially focusing on appetite suppressants, companies later began considering "Specific Appetite Inducers". 2 Documents indicated that they were experimenting with "special herbs or medications in a cigarette form as appetite stimulants or possibly for tension release". 2 The list included tartaric acid, 2-acetylpyridine, catecholamine, menthol, mariolide, propylene glycol and reserpine. They were aware that appetite can be strongly influenced by both the aromatic and the taste characteristics of the compounds, and inquired about menthol. 3 Notably, experts indicated that the evidence on menthol's effects was inconclusive. Yet menthol's other characteristics (e.g., odor, taste, and flavor) led to its inclusion and marketing. 2,4 Although several decades have passed since the introduction of mentholated cigarettes to the market, the burgeoning literature is notable for the scarcity of studies examining the plausible effect of mentholated cigarettes on weight. 5 A recent insurgence of concern over the safety of this additive, and its potential relationship with weight gain, has prompted further research. 6 Equally important, the fear of weight gain often impacts readiness to quit smoking in high-risk populations, including people living with HIV. 7,8 These segments of the population have (a) an excessively high prevalence of smoking, (b) frequent use of mentholated cigarettes, and (c) greater body image concerns (e.g., shape, form, and size). 7,9 Body dissatisfaction has been associated with lower interest in and fewer attempts at quitting smoking, and with poor antiretroviral adherence. 8,10 Our study aims to address this gap by examining the relationships between the use of menthol-flavored cigarettes and body composition, using data from our ongoing, randomized clinical trial. Understanding how these additives are associated with weight may help determine the cumulative health risk that current smokers may have as a function of the type of cigarette used. The parent trial enrolls HIV+ smokers who were motivated to quit. The trial has been ongoing since 2016 and is taking place in Miami, Florida. The aim is to recruit a total of 500 participants. The trial was powered to detect differences in biochemically verified 7-day point prevalence abstinence at 3, 6, and 12 months of follow-up (80% power, type I error rate of α = 5%, assuming a 20% loss to follow-up).
Current analyses spanned from June 2016 to December 2017, for a total of 18 months of enrollment and follow-up.
Adults were eligible if HIV status, smoking status, and willingness to quit were confirmed. For safety reasons, subjects were excluded if they had any contraindication to nicotine patches or gums, were involved in other smoking and/or drug cessation or weight control programs, or had comorbid conditions that limited their safe participation, such as the presence of psychotic or disabling psychiatric disorders. Written study materials, informed consent forms, and the study protocol were approved by Western IRB (WIRB). All procedures occurred at the University of Miami's Clinical Translational Research Site.
Smoking surveys
After giving informed consent, participants provided an exhaled breath carbon monoxide sample (Vitalograph; Lenexa, KS) for biochemical verification of smoking status. Then, subjects completed several standardized surveys to profile smoking history, including the number of cigarettes smoked per day and the history of tobacco use (cigarettes vs. cigars). These data, along with age of initiation and the total number of years smoking, enable estimation of cumulative exposure. To measure nicotine dependence, we selected the Fagerström Test for Nicotine Dependence (FTND) due to the strong literature evidencing its validity and reliability, and its availability in English and Spanish. 11 Participants were also asked about exposure to secondhand smoke (SHS), symptoms of lung disease, and personal/family history of respiratory conditions. Upon completion of the baseline visit, participants' expired breath carbon monoxide samples were verified.
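The paper does not name the exact exposure metric, but cumulative exposure derived from cigarettes/day, age of initiation, and years smoked is conventionally expressed in pack-years; a minimal sketch:

```python
def pack_years(cigarettes_per_day: float, years_smoked: float) -> float:
    """One pack-year = smoking 20 cigarettes (one pack) a day for one year."""
    return (cigarettes_per_day / 20.0) * years_smoked

# e.g., half a pack a day since age 21 for a 51-year-old: (10/20) * 30
print(pack_years(10, 30))  # 15.0
```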
Use of menthol-flavored cigarettes
This variable was based on self-reports (yes/no) and the participant's preferred brands. Smokers were dichotomized as menthol users or non-menthol users.
Anthropometric measures and nutritional intake
Participants' anthropometric measures were obtained at each visit. Body weight (to the nearest 0.1 kg) and height were measured after removal of shoes and outerwear using a calibrated balance. Weight and height were used to calculate body mass index (BMI = weight [lbs] / height [inches]² × 703). Participants were classified as thin if BMI was less than 18.5 kg/m², eutrophic if BMI was 18.5 to 24.9 kg/m², overweight if BMI was 25 to 29.9 kg/m², and obese if BMI was 30 kg/m² or higher. 12 Studies suggest that the waist-to-hip ratio (WHR) is more accurate than BMI for predicting the risks of cardiovascular disease and premature death, so we obtained those measures. 13 As per national guidelines, the waist circumference measurement was made at the top of the iliac crest after an overnight fast. 14,15 The hip circumference measurement was obtained around the widest portion of the buttocks.
In accordance with national guidelines, abdominal obesity was defined as a waist circumference greater than 102 centimeters (40 inches) in males and 88 centimeters (35 inches) in females. 16
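A sketch of these anthropometric derivations, with the BMI formula and cut-offs taken directly from the definitions above (the sex labels are illustrative):

```python
def bmi(weight_lbs: float, height_in: float) -> float:
    """BMI = weight [lbs] / height [inches]^2 * 703."""
    return weight_lbs / height_in**2 * 703

def bmi_category(value: float) -> str:
    if value < 18.5:
        return "thin"
    if value < 25:
        return "eutrophic"
    if value < 30:
        return "overweight"
    return "obese"

def waist_hip_ratio(waist_cm: float, hip_cm: float) -> float:
    return waist_cm / hip_cm

def abdominal_obesity(waist_cm: float, sex: str) -> bool:
    """Waist > 102 cm (males) or > 88 cm (females), per national guidelines."""
    return waist_cm > (102 if sex == "male" else 88)

print(bmi_category(bmi(180, 66)))  # 180 lbs at 5'6" -> ~29.0 -> "overweight"
```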
Covariates
Computerized questionnaires were used to obtain dietary intake (24-hour food recall) and socio-demographics; medical information, including history of ART, was obtained from the medical chart. Based on self-reports, subjects were categorized as African American, White non-Hispanic (Caucasian), or Hispanic. Age was stratified into 20-year groups (18 to 39, 40 to 59, or 60 or more years). Annual income was categorized as $0 to $11,000, $11,001 to $20,000, $20,001 to $49,999, or more than $50,000. Education level was assigned a code between 1 and 16, representing each year of schooling from elementary/middle school through college or vocational training.
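Recodes like these strata amount to simple binning; a sketch with hypothetical column names:

```python
import pandas as pd

def recode_covariates(df: pd.DataFrame) -> pd.DataFrame:
    # Age strata: 18-39, 40-59, 60+
    df["age_group"] = pd.cut(df["age"], bins=[17, 39, 59, 120],
                             labels=["18-39", "40-59", "60+"])
    # Annual income bands as described above
    df["income_band"] = pd.cut(df["annual_income"],
                               bins=[-1, 11_000, 20_000, 49_999, float("inf")],
                               labels=["$0-11k", "$11-20k", "$20-50k", ">$50k"])
    return df
```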
Statistical analysis
All statistical analyses were conducted using SPSS 21.0 (SPSS Inc., Chicago, IL, USA). For all analyses, statistical significance was defined as a two-tailed p-value < 0.05. Means, standard deviations and percentages were used to describe the characteristics of the study sample. Analysis of variance (ANOVA) was performed to test for significant differences in the means of the anthropometric measurements across the variables of interest. Regression analyses were used to evaluate predictors of body mass index, and any factor that was significantly associated with BMI was included in the final model.
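The paper's analyses were run in SPSS 21.0; this Python sketch mirrors the two steps (a one-way ANOVA on mean BMI across groups, then a regression on BMI). Column names are assumed.

```python
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

def analyze(df: pd.DataFrame):
    # ANOVA: does mean BMI differ between menthol and non-menthol smokers?
    groups = [g["bmi"].to_numpy() for _, g in df.groupby("menthol")]
    f_stat, p_val = stats.f_oneway(*groups)

    # Regression: predictors of BMI, adjusting for covariates
    model = smf.ols("bmi ~ menthol + C(sex) + C(age_group)", data=df).fit()
    return f_stat, p_val, model.params
```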
Recruitment
A total of 310 HIV-infected smokers were eligible and are currently being followed in a smoking cessation and health outcomes study conducted in Miami, Florida. Reflecting the current trend of the HIV epidemic, the mean age was above 50 years (51.7 years) in both groups; however, ages ranged from 23 to 69 years. The male-to-female ratio was nearly one to one (Table 1). The total sample included 6% Caucasian, 83% African-American, and 11% Hispanic participants, so minorities were strongly represented. Income levels differed significantly between groups, in line with prior reports indicating that smoking rates are significantly higher for persons living below the poverty line. 18
Obesity and overweight prevalence and associated sociodemographics
Only a third of the sample was eutrophic, and females were heavier than males (BMI 31.2 ± 8.6 vs. 26.5 ± 5.5, p = .02). The odds of obesity were higher among individuals older than 60 years of age than among those under 40 (OR = 3.1, 95% CI: 1.2-8.0, p = 0.001). As illustrated in Table 2, overweight and obesity varied greatly between men and women, with women being disproportionately affected. Education status also differed across the groups, with more years of education reported by the obese group. Of interest, neither age, income, nor race/ethnicity differed among the study groups.
Does smoking affect body mass index/weight?
In contrast to the popular perception that smokers are thinner, the overall mean BMI was 28.8 ± 8.8, highlighting the profile of this at-risk population. Of concern, only a third of the sample had a eutrophic weight classification; the remaining subjects were overweight (32%) or obese (37%). Females were heavier than males (31.2 ± 8.6 vs. 26.5 ± 5.5, p = .02). Older individuals (>60 years of age) were also more likely to be obese than those under 40 years of age (OR = 3.1, 95% CI: 1.2-8.0, p = 0.001). Middle-aged subjects were more likely to be overweight than their younger counterparts (18-39 y; OR: 2, p = 0.03). Based on prior findings suggesting that heavy smokers tend to have greater body weight than lighter smokers, we dichotomized the sample into <10 cigarettes/day versus >20 cigarettes/day and then analyzed weight and BMI. 19 We could not identify a significant difference in BMI between these groups.
Weight by type of cigarette at baseline
To better understand how smoking and body weight relate, it is crucial to account for the type of cigarette being used. Despite similar amounts of smoking and similar exercise patterns, PLWH who smoked menthol-flavored cigarettes had significantly higher BMI values than non-menthol smokers (29.2±7.8 vs. 25.2±4.4, p=.02). As depicted in Table 2, additional analyses indicated differences in several anthropometric measures.
Final analysis
The risk of abdominal obesity increased by 40% for menthol users (OR=1.4, 95% CI: 1.0-1.6, p=0.05). The mean waist, hip, and abdominal circumferences of the menthol smokers were significantly higher than those of the non-menthol smokers. In Table 3, multivariate regression analysis confirmed that the type of cigarettes smoked significantly predicted BMI. In the adjusted model, female sex was also a significant predictor of obesity.
Discussion
The analyses uncovered several interesting findings. Although the common belief is that smokers are thinner, over two-thirds of our smokers were either overweight or obese. This is in line with many researchers describing a direct relationship between smoking and weight gain. 20,21 In this regard, our cohort of smokers living with HIV had a sizable proportion of obese and overweight participants. However, our study extends the current literature by focusing on PLWH and by including the type of cigarette in the analyses. The exclusion of this variable may account for some of the previously mentioned discrepancies in recent studies. Additionally, studies with lower rates of mentholated cigarette use may find their samples to be leaner, and the opposite might occur if the population is composed primarily of females and minorities, the primary consumers of mentholated cigarettes. 1,4 The hypothesis that smoking mentholated cigarettes adversely affects weight was confirmed, as smokers of mentholated cigarettes were significantly heavier than non-menthol smokers with HIV. These findings are of concern because obesity and smoking rates are skyrocketing among people living with HIV, and both are significant contributors to morbidity and mortality. 24,25 Interestingly, markers of abdominal obesity such as percent body fat, waist circumference, and hip circumference were also higher in smokers of menthol cigarettes. This entails an even higher health risk of metabolic syndrome, incident cardiovascular disease, and type II diabetes. 26 As female sex was associated with an increased risk of obesity in this study, the results should inform targeted health messages.
The mechanisms underlying our findings have not been elucidated; however, various causal mechanisms may account for the effect of smoking on central obesity. First, smoking stimulates the sympathetic nervous system, leading to an increase in cortisol, a stress hormone. 27,28 Abdominal fat deposition appears to be related to elevated levels of serum cortisol. 29 Second, cigarette smoking is related to increased insulin resistance, which is associated with increases in abdominal fat deposition and diabetes. 30,31 Third, since the waist and hip circumferences are determined by an individual's proportions of android fat and gynoid fat, a hormonal imbalance could be the cause. Smoking has an anti-estrogenic effect, leading to decreased fat metabolism and to fat accumulation. 32,33 Finally, another possible reason is that smokers have distinct lifestyle characteristics, such as a higher likelihood of depressive moods and sleep impairments, which are risk factors for central obesity. 34,35 In contrast to prior studies showing that a clustering of smoking, obesity, and lower socioeconomic status exists, we did not replicate those results in this sample. 36 However, those of lower socioeconomic strata were more likely to be smokers of mentholated cigarettes. 4 We were also unable to confirm prior findings from a national cohort indicating that body leanness increased with the duration of smoking. 37 This research has important clinical and public health implications, as mentholated cigarettes are the only flavored cigarettes that have not been banned from the market. Perhaps they should be, as the data suggest an association with obesity, which, at minimum, warrants more research. These findings need to be analyzed in light of several limitations. This is a cross-sectional analysis, and the data were derived only from people living with HIV. Despite this, a similar pattern has been observed in the general population. 20,21 As with all human data, causality is difficult to establish, but the replication of this observation across several previous studies increases confidence. Our anthropometric measures were carefully obtained following national protocols. 14,15 | 2019-03-18T14:03:30.368Z | 2018-10-02T00:00:00.000 | {
"year": 2018,
"sha1": "f4a638597260dbe9f68eb61561e429b07b2d7ad1",
"oa_license": "CCBYNC",
"oa_url": "https://medcraveonline.com/AOWMC/AOWMC-08-00255.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "57dbdd85030591153741d8b03f142b607b1325fe",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257762278 | pes2o/s2orc | v3-fos-license | Renin-Angiotensin System Inhibitors in Advanced CKD: a #NephJC Editorial on STOP-ACEi
Renin-angiotensin system (RAS) inhibitors have been the preferred antihypertensive agents in the setting of chronic kidney disease (CKD) since the 1990s. Their use has been consistently demonstrated to slow the progression of kidney disease, along with beneficial blood pressure and proteinuria lowering effects. [1][2][3][4][5] Although RAS blockade is a pillar of proteinuric CKD management, questions remain around whether they can be safely continued in patients with advanced CKD and whether they remain nephroprotective in later stages of the disease.
Ongoing RAS inhibitor use has been historically challenging in patients with advanced CKD, particularly those with diabetes, where hyperkalemia may be difficult to manage. 6,7 This commonly leads to RAS inhibitor discontinuation. The recent advent of novel potassium-binding therapies, such as sodium zirconium cyclosilicate and patiromer, has strengthened the armamentarium against hyperkalemia and has helped ease the challenge of RAS inhibitor continuation in this setting. 8 However, data from a 2010 prospective cohort study in the United Kingdom demonstrated a significant increase in estimated glomerular filtration rate (eGFR) after discontinuation of RAS inhibitors in patients with advanced CKD. 9 These authors proposed that the RAS inhibitor discontinuation delayed the onset of kidney replacement therapy (KRT) in that cohort, and they encouraged providers to reconsider RAS inhibitor use in patients with CKD stages 4 and 5. 9 The notion that discontinuation would improve kidney function and allow time to plan for KRT (eg, access planning or preemptive kidney transplantation) was plausible. Conversely, ongoing RAS inhibitor use in advanced CKD is supported by its beneficial effects, including blood pressure control and proteinuria reduction, along with cardioprotection and benefit in reducing mortality, which are of great importance in this population with a high prevalence of cardiovascular (CV) comorbid conditions. 10,11 Given this conflicting risk-benefit balance in advanced CKD, there was true equipoise. The STOP-ACEi trial investigators sought to address whether discontinuing RAS inhibitors in patients with stage 4-5 CKD would slow the CKD progression in this high-risk population. 12
THE STUDY
The STOP-ACEi study was a multicenter, randomized, open-label trial. Participants were adults with stage 4-5 CKD (eGFR < 30 mL/min/1.73 m²) being treated with an angiotensin-converting enzyme inhibitor, an angiotensin-receptor blocker, or both for >6 months and with a progressive decline defined as glomerular filtration rate loss >2 mL/min/1.73 m²/year for 2 years. The participants were randomized 1:1 to either continue RAS inhibitor therapy or to discontinue it and were followed for 3 years with follow-up visits at 3-month intervals. In the continuation group, the choice of RAS inhibitor agent and dosage were left to provider discretion. The study blood pressure target was <140/85 mm Hg in both the continuation and discontinuation groups, and providers were allowed to choose any guideline-recommended antihypertensives to achieve this target. The primary outcome was the eGFR at 3 years using the 4-variable MDRD175 (Modification of Diet in Renal Disease) Study equation, censored at time of KRT initiation. Secondary endpoints included time to development of end-stage kidney disease and a composite including decrease in eGFR > 50%, development of end-stage kidney disease, and initiation of KRT. 12 A total of 17,290 patients were screened at 39 centers in the United Kingdom. Of these, 411 patients were randomized, 205 to the RAS inhibitor continuation group and 206 to the discontinuation group. The cohort was predominantly male (68%) and White (85%) with a median age of 63 years, and only 37% of participants had diabetes. Participants were not limited to those with traditionally heavy proteinuric CKD and included those with hereditary and polycystic kidney diseases (19.7%), renovascular disease (16.5%), tubulointerstitial disease (1.5%), as well as unknown etiologies (17.3%). For the primary outcome, there was no significant difference in the eGFR at 3 years, with a mean eGFR (in mL/min/1.73 m²) of 12.6 ± 0.7 in the continuation group compared to 13.3 ± 0.6 in the discontinuation group (P = 0.42). At 3 years, end-stage kidney disease occurred less often in the continuation group (115 patients, 56%) compared with the discontinuation group (128 patients, 62%). Although not reaching statistical significance, the point estimate of the hazard ratio of 1.28 (95% confidence interval, 0.99-1.65) favors RAS inhibitor continuation in this regard. Of the 490 serious adverse events, 21 were potentially related to the trial group assignment, with no significant between-group differences reported. CV events were included in the adverse events and were notably higher in the discontinuation arm (108 events) compared with the continuation arm (88 events), though no formal statistical comparison was done. There were 6 hyperkalemia events reported overall, of which 2 occurred in the discontinuation arm and 4 occurred in the continuation arm. 12
(#NephJC is a recurring Twitter-based journal club. #NephJC editorials highlight the discussed article and summarize key points from the NephJC TweetChat.)
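Since the primary outcome was computed with the 4-variable MDRD175 equation, a hedged sketch of that equation follows; the coefficients shown are the standard IDMS-traceable published values and should be verified against the trial protocol.

```python
# Hedged sketch of the 4-variable MDRD175 eGFR equation (standard published
# coefficients assumed; serum creatinine in mg/dL).

def egfr_mdrd175(scr_mg_dl: float, age_years: float,
                 female: bool, black: bool) -> float:
    """Estimated GFR in mL/min/1.73 m^2."""
    egfr = 175.0 * (scr_mg_dl ** -1.154) * (age_years ** -0.203)
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# Illustrative example: a 63-year-old non-Black male with SCr 4.0 mg/dL
print(round(egfr_mdrd175(4.0, 63, female=False, black=False), 1))  # ~15.2
```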
THE TWEETCHAT
The 2 NephJC Twitter discussions, held on November 29 and 30, 2022, included a combined 169 participants and 753 tweets. The NephJC tweetchats started with an inquiry about a vexing question for nephrologists regarding RAS inhibitors: should we continue them or not once the eGFR drops lower than 30 mL/min/1.73 m²? A poll at the onset of the chats revealed that only 12.9% of 326 respondents would stop RAS inhibitors at a specific eGFR cutoff, whereas almost 50% would continue RAS inhibitors until the start of dialysis (Fig 1). The respondents, mainly nephrologists, highlighted hyperkalemia rather than eGFR as the main determinant of when they choose to stop RAS inhibitors.
The chat participants emphasized the low numbers of recruited patients with heart failure or diabetes, and they lamented the fact that only 17% had a prior history of CV events (a group that may benefit most from RAS inhibitor therapy). The STOP-ACEi authors' decision to focus on progressive CKD with eGFR decline > 2 mL/min/year likely accounted for many of the roughly 17,000 screened participants being ineligible. Of the 1,210 eligible participants, only 411 were randomized. Patients with higher risk for hyperkalemia or with compelling indications for RAS inhibitor continuation may have been advised by their physicians not to enroll. This could affect the generalizability of the study outcome. Chat participants suggested that the Kidney Failure Risk Equation could have been utilized for CKD progression risk rather than the glomerular filtration rate-only inclusion criterion. Another important point of discussion was the absence of information regarding RAS inhibitor doses and whether they were titrated up to maximum doses. At baseline, one-third of the study participants in both groups were treated with alpha-blockers, and the same proportion were on loop diuretics. In the discontinuation group, RAS inhibitors were replaced with alternative blood pressure agents to achieve specified blood pressure targets; however, no information was provided regarding preferred class or doses.
Tweetchat participants were surprised that the primary endpoint of eGFR at 3 years was not significantly different between the 2 study groups. Moreover, in the discontinuation group, there was no increase in eGFR, unlike the previous study. 9 There was, however, a 28% higher hazard of KRT with discontinuation (hazard ratio 1.28; 95% confidence interval, 0.99-1.65), as well as fewer CV events in the continuation group (88 versus 108 in the discontinuation group). However, this study was not adequately powered to examine effects on CV outcomes, which frustrated some chat participants (Fig 2B). These results were interestingly somewhat concordant with a target trial emulation study from Fu et al, 10 which reported a higher absolute 5-year risk of death (54.5% versus 40.9%) as well as major adverse CV events (59.5% versus 47.6%) with RAS inhibitor discontinuation. However, the same study also reported a lower risk of KRT (27.9% versus 36.1%) with RAS inhibitor discontinuation. This may reflect the inclusion of different populations, because STOP-ACEi only included participants with progressive CKD, unlike the target trial emulation observational study.
Hyperkalemia is a nemesis of all nephrologists who want to continue RAS inhibitors, no matter the eGFR (Fig 1). Curiously, hyperkalemia was registered in only 4 patients in the STOP-ACEi continuation arm (versus 2 in the discontinuation arm). This is likely because of the stringent inclusion and exclusion criteria but may also reflect a selection bias. In an effort to explain why the number of hyperkalemia events was so low, chat participants suggested that following a low-potassium diet may have been recommended, and subsequently the role of dietary potassium was debated (Fig 2A). Although there were no data regarding concomitant potassium binder use, 40% of study participants did receive bicarbonate supplementation. Chat voices emphasized the importance of concomitant RAS inhibitor and flozin (sodium/glucose cotransporter 2 inhibitor) treatment in reducing hyperkalemia. To allay concerns about hyperkalemia, the European Heart Failure Long-Term Registry study discovered that after adjustment for RAS inhibitor discontinuation, hyperkalemia was no longer associated with mortality, suggesting hyperkalemia may primarily be a risk factor for RAS inhibitor discontinuation rather than adverse outcomes. 13
CONCLUSION
Discontinuation of angiotensin-converting enzyme inhibitors or angiotensin-receptor blockers in patients with advanced CKD does not significantly improve kidney function, but it may be associated with increased CV events and a trend toward faster onset of kidney failure. The decision to discontinue RAS inhibitors should not be based on an arbitrary eGFR value of 30 mL/min/1.73 m², but rather it should be individualized after considering proteinuria, blood pressure, and CV comorbid conditions. | 2023-03-27T15:05:05.973Z | 2023-03-01T00:00:00.000 | {
"year": 2023,
"sha1": "6fa9d649b33478f80581e3526cc02266cde0d54f",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "7d446dae57abe152e8f43c8567ed54bad6635609",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
59615615 | pes2o/s2orc | v3-fos-license | In vitro biotechnological advancements in Malabar nut (Adhatoda vasica Nees): Achievements, status and prospects
Adhatoda vasica Nees, belonging to the family Acanthaceae, is a well-known medicinal plant. It is endorsed for its pyrroloquinazoline alkaloids and their derivatives, such as vasicine and vasicinone. Germinating A. vasica seeds is a tedious task; on that account, vegetative propagation is the preferred method for its multiplication. For rapid and large-scale multiplication, germplasm conservation, as well as secondary metabolite production, in vitro culture of A. vasica was preferred over conventional propagation by several researchers; however, some major applications of this tissue culture technique still await extensive research. The present review, for the first time, illustrates all the major achievements associated with in vitro regeneration of A. vasica reported to date and highlights the future prospects.
Introduction
Ardusi (Adhatoda vasica Nees syn. Justicia adhatoda L.), a shrub with an unpleasant smell, is popularly known as Malabar nut or Vasaka (in Sanskrit) [1]. It is an important member of the Acanthaceae family. In Unani and Ayurveda, this shrub is highly treasured owing to its healing properties against asthma, cold, cough and tuberculosis [2]. It acts as an antispasmodic and expectorant as well [3]. A. vasica leaf, shoot and root prevalently possess quinazoline alkaloids like vasicine and vasicinone [4], and a non-crystalline steroid (vasakin), along with several essential oils, fatty acids, glycosides, sterols, and other phenolic components [5]. Due to immoderate exploitation of plant parts for constant phytochemical extraction by pharmaceutical industries, the natural population of A. vasica is under threat. As a consequence, the ever-increasing demand for its plant-part-based secondary metabolites cannot be fulfilled. The seed germination rate of A. vasica is quite poor and clonal propagation is occasional as well [6,7]. Owing to these drawbacks, tissue culture techniques, i.e., direct and indirect organogenesis, have been preferred [7][8][9][10].
Distribution and description
A. vasica is widely spread over India (up to an altitude of 1300 m), parts of Sri Lanka, Bhutan, Pakistan and Afghanistan, and has progressively been introduced to other countries like China, Hong Kong, Taiwan, Cyprus, Ethiopia, etc. It is also found throughout the tropical regions of Southeast Asia [11] and in some parts of Germany and Sweden [12]. A. vasica is a typical evergreen perennial shrub that grows to a height of about 1.2-2.5 m; the leaves are characteristically perfect, elliptic-lanceolate, borne on short petioles and leathery to touch. The leaves carry an unpleasant smell and have a bitter taste. Chloral hydrate preparations of leaves showed oval stomata encircled by two crescent-shaped cells at right angles to the ostiole [13]. The branching habit is opposite and ascending, with white, purple or pink flowers; when the flowers become dry, they turn dull brownish in color. The flowers are white with yellow- or red-barred throats and large bracts. Fruit capsules and seeds are globular in nature [1].
Pharmaceutical/therapeutic importance
Ardusi contains numerous bioactive compounds, for instance, vasicinol, 5-hydroxy vasicine, vasicine, vasicine glycoside, deoxyvasicine, vasicinone, adhavasicinone, vasicolinone, adhatodine, anisotine and vasnetine [14][15][16][17]. Vasicine shows bronchodilatory activity under both in vitro and in vivo conditions, whilst vasicinone exhibited effects on bronchoconstriction in vivo. The simultaneous administration of these two alkaloids was preferred for bronchodilatory activity, both in vitro and in vivo. A combination of vasicine and vasicinone also showed a significant reduction in cardiac depressant effects. Vasicinone produced from the roots prevents shrinkage of the intestine and cardiac depression in guinea pigs, and transient hypotension in cats, thus displaying decent anticholinesterase activity [18]. Vasicine yields ambroxol and bromhexine, which have a pH-dependent growth inhibitory influence on Mycobacterium tuberculosis, suggesting that it may play a significant part in the primary treatment of tuberculosis [19]. Both vasicine and vasicinone have sucrase inhibitory activity, signifying that they can be explored as natural antidiabetic agents [20]. It has been reported that vasicine and its derivatives are excreted through urine [21]. With intramuscular and intravenous administration, 55% of the excreted product was vasicine during the first 18 and 22 h, respectively, whilst on oral administration it was 18% during the first 24 h. The leaves of A. vasica possess anti-ulcer activity, which was tested in rats. The ardusi leaves showed the highest degree of anti-ulcer activity (80%) in the ethanol-induced ulceration model when compared with the pylorus-ligation and aspirin models [22]. The syrup made from A. vasica leaves improved symptoms of dyspepsia as well [23]. A. vasica extracts exhibited antimutagenic activity when cadmium-intoxicated mice were treated with them, showing a marked decline in lipid peroxidation and xanthine oxidase activity [24]. Swiss albino mice exposed to Cobalt-60 radiation developed radiation-induced ailments, with noticeable effects on the histology of the testis; these effects were significantly reduced when A. vasica plant extract was applied. This suggests that ardusi plant extracts have radioprotective effects on the testis [25].
In vitro regeneration
Conventionally, A. vasica is propagated through seed or nodal cuttings. Nevertheless, the frequency of propagation is limited since seed setting is insufficient, seed germination is poor, and clonal propagation via stem cuttings is exclusively season-dependent [7,8]. As an alternative to the conventional methods, in vitro propagation through plant cell, tissue and organ culture becomes a proficient technique for accelerated large-scale production of propagules, for exploring the variability among the propagules, for inducing new attributes of commercial importance, and for developing novel variants via genetic transformation [26,27]. Several in vitro techniques have been applied for direct and indirect regeneration in A. vasica to date. It is now quite essential to compare the reported in vitro techniques and classify them based on their efficacy, in order to select a suitable need-based protocol [28][29][30]. Accordingly, in this review, we have compared the reported methods of micropropagation in A. vasica, for instance direct organogenesis via multiple shoot culture and indirect organogenesis mediated by callus culture, along with some improved technologies like artificial seed development and in vitro production of secondary metabolites.
Explant selection
Appropriate selection and collection of explants is the first and foremost step for a successful in vitro regeneration study. Even though A. vasica is a perennial shrub and collection of explants can be done round the year, the most active growth stage is considered best for retaining the regeneration ability of collected explants. The preferable time of explant collection for in vitro regeneration is considered to be between November and March [8,31], based on certain aspects like ontogenetic or physiological age and position (the part of the plant from where explants are collected) or size of explants. A number of explants, such as whole leaf, leaf disc, petiole, shoot tip, nodal segment, axillary meristems and root, have been utilized for initiation of in vitro direct or indirect regeneration of A. vasica, as summarized in Table 1. Among these explants, the sole use of nodal segments from field-grown plants was the most prevalent in the majority of reports [8][9][10][32][33][34]. Additionally, when the regeneration efficiency of nodal segments was compared with other explants like shoot tip [35][36][37], the nodal segment explants displayed a better response based on multiple shoot initiation and subsequent proliferation. A similar trend was also observed in the case of indirect regeneration, where the nodal segment explants induced a higher frequency of callogenesis in comparison to shoot tip, petiole, and leaf disc explants [38]. To induce cell culture and to obtain maximum cell biomass, Singh et al. [39] unconventionally used root segment explants and attained significant results. On the other hand, Madhukar et al. [40] used leaf explants to develop cell suspension culture via friable callus induction. Even after considering the superior morphogenetic competence of nodal segment explants, leaf explants were preferred for induction and subsequent regeneration of callus [7,31,[41][42][43][44]. In a couple of instances, the specific age of the explant source (mother plant) was mentioned, either as a 2-3 year old plant [35] or a 6-7 year old flowering plant [8]; however, in the majority of reports the age and stage of the mother plants were not mentioned, which is considered to be a major factor during explant selection.
Surface sterilization
The most crucial step for establishment of any in vitro culture is sterilization of the explants to be inoculated in the media, since there persists a high chance of microbial contamination in plant materials collected from the field [45]. There are three key parameters of surface sterilization: the category of disinfectant, its level and the duration of exposure. These parameters should be standardized in such a way that the sterilization eradicates the contaminants without disturbing the regeneration ability of the explants. In the majority of instances, these three parameters depend upon the nature of the explant tissue; softer or juvenile tissue requires exposure to lower levels of disinfectants for a briefer time span in comparison to mature and hard tissues [29,46]. As noted in the published literature (Table 1), the surface sterilization of A. vasica was done by exposing the explants to 1% (v/v) Savlon for 10 min, 80% (v/v) ethanol for 30 sec and 0.1% (w/v) HgCl2 for 7-10 min with 3-5 interim rinses with sterile water (Table 1). However, in many of the reports it was found that prior to ethanol or HgCl2 exposure, the explants were usually treated with 2-3 drops of Teepol for 5-10 min [47], 2 drops of Tween-80 for 15-20 min [35,44], Tween-20 for 5 min [9], or 1% (v/v) Dettol for 10 min [37] as an alternative to Savlon solution. A few other reports used alternative surface sterilants. For example, Madhukar et al. [40] used 1% (w/v) Bavistin® solution for 10 min prior to the treatment with Savlon, ethanol and HgCl2. Use of a 3% (v/v) H2O2 treatment for 2 min before HgCl2 exposure was reported by Panigrahi et al. [10]. In a unique approach, Abhyankar and Reddy [8] used Geneticin solution after treating with HgCl2 to make the explants free from any contamination.
Multiple shoot formation
Following the collection, surface sterilization and preparation, the explants undergo processing for optimization of the in vitro regeneration protocol via standardization of the type and formulation of basal media, vitamins, carbohydrates, levels of solidifying agent, pH and plant growth regulators (PGRs). The influence of these factors on micropropagation of A. vasica has been summarized in Table 1. For multiple shoot initiation and subsequent proliferation (Fig. 1a and b), full-strength Murashige and Skoog [48] (MS) medium was the only choice, as found in all the published reports on A. vasica. Supplementation of PGRs in MS medium varied significantly, as displayed by the reports on shoot multiplication. In several reports, combinations of cytokinin and auxin were preferred. For instance, 0.5-2 mg/l N6-benzyladenine (BA) was used as the cytokinin in combination with 0.05-0.2 mg/l α-naphthalene acetic acid (NAA) as the auxin [9,32,35,49]. As an additional cytokinin source, 1 mg/l 6-furfurylaminopurine (kinetin, or Kn) was used along with an equal concentration of BA for shoot multiplication [8]. Similarly, Roja et al. [36] used 1 mg/l gibberellin A3 (GA3) with 1 mg/l BA to enhance the shoot multiplication frequency of A. vasica. Later on, Lone et al. [37] added 0.5 mg/l NAA and 0.5 mg/l thidiazuron (TDZ) to 2 mg/l BA to improve the regeneration efficiency of BA, wherein 100% of explants produced the maximum (23.3) shoots/explant in 28 days. In contrast, however, there are several reports on the sole use of BA for initiation of high-frequency multiple shoots, wherein very high concentrations of up to 10 mg/l were employed [8,34].
Callus induction and regeneration
Similar to the multiple shoot regeneration, MS medium was the preferred choice for callus induction and its subsequent regeneration too. The only exception was reported by Anand and Bansal [51], who used Gamborg's medium (B5) [52] as the basal medium instead of MS medium to induce callus from leaf explants with a supplementation of 1 mg/l 2,4-D. Apart from the basal medium, the types and concentrations of PGRs played the most significant role during indirect organogenesis of A. vasica. In many instances, either equivalent amounts of auxin/cytokinin or a variable auxin/cytokinin ratio efficiently induced a high frequency of friable calli (Fig. 1e) or organogenic calli (Fig. 1f) (Table 1). For example, an equal amount (1 mg/l) of 2,4-D and Kn in combination resulted in 45% callusing from petiole explants within 4 weeks of inoculation [7]. Rashmi et al. [44] reported induction and proliferation of friable calli in MS medium with 6 mg/l IAA and 6 mg/l Kn from leaf explants, and in 3 mg/l indole-3-butyric acid (IBA) and 3 mg/l BA from nodal segment explants. A comparable result was reported by Mandal and Laxminarayana [31], who obtained 100% callus induction in MS medium supplemented with 0.25 mg/l each of NAA and TDZ. On account of a higher auxin/cytokinin ratio, Dinesh and Parameswaran [53] reported 90% callus induction within 7 days of inoculation on MS medium fortified with 10.7 µM NAA plus 2.2 µM BA. An analogous trend was detected by Bhambhani et al. [7], who reported as high as 70% callusing within 4 weeks of inoculation of leaf explants in MS medium plus 1 mg/l 2,4-D and 0.5 mg/l Kn. Later, Singh and Sharma [54] achieved a high frequency of friable calli in MS medium fortified with 3.5 mg/l NAA and 1.25 mg/l BA. A completely opposite trend was displayed in the report of Sil and Ghosh [38], who obtained maximum callus induction from nodal segments on MS medium supplemented with 2 mg/l BA plus 0.5 mg/l NAA, a higher cytokinin/auxin ratio. An interesting study conducted by Maurya and Singh [43] exhibited the use of a dual auxin/cytokinin combination, unique of its kind, in the form of an amalgamation of 1.5 ppm 2,4-D, 1.5 ppm IAA, 1.5 ppm Kn, and 1.5 ppm BA in MS medium that induced as high as 75% callus with 18.16 g fresh weight. As an exceptional result, the sole use of auxin in the form of 1 mg/l 2,4-D to induce 46% calli from nodal segment explants, which successively induced 60 roots per callus without adventitious shoots, was reported by Panigrahi et al. [10]. Hence, from this result, we infer that large-scale in vitro roots could be achieved from callus while simultaneously suppressing shoot regeneration (Fig. 1g). Earlier, Jayapaul et al. [42] observed similar results of callus induction (76%) with precocious root formation in MS medium, but only after addition of 21.5 µM NAA, 19.7 µM IBA and 9.3 µM Kn. The same authors also reported the only occurrence of somatic embryogenesis (though precocious in nature) following 62% callus induction in MS medium supplemented with 4.5 µM 2,4-D and 2.3 µM Kn. Nevertheless, somatic embryogenesis in A. vasica is yet to be studied in depth.
Root formation
The final phase of in vitro regeneration is the rooting of multiple shoots, following which ex vitro acclimatization and establishment of plantlets in the external environment completes any micropropagation protocol. For in vitro rooting of A. vasica (Fig. 1c and d), the use of MS medium as the basal medium was mentioned in the majority of reports (Table 1). The only exception was the use of Schenk and Hildebrandt [55] (SH) medium by Mandal and Laxminarayana [31], who observed 75% rooting with 9-10 roots/shoot in SH medium fortified with 0.5 mg/l IBA. Even though auxins are the preferred PGRs for in vitro rooting of A. vasica shoots, PGR-free MS medium also proved its root regeneration potential in several instances. According to Amin et al. [49], PGR-free MS medium performed better than MS media supplemented with 0.1-0.5 mg/l of either NAA or IBA for in vitro root induction of A. vasica microcuttings. Following this trend, 100% rooting with 3.5 roots/shoot of 4 cm length in 15 days was reported by Azad et al. [32] in MS medium devoid of any PGR. Nath and Buragohain [47] obtained as many as 9.33 roots per shoot of 0.6 cm length in PGR-free MS medium, and a comparable result was reported by Tejavathi et al. [50] as well. Apart from PGR-free MS medium, the most frequently used auxin was IBA. The minimum level of IBA supplementation was 0.1 mg/l, which initiated 90% rooting [8] or high-frequency rooting with 5.8 roots/shoot of 2.5 cm length in 17 days of inoculation [37]. An increase in the IBA level to 0.5 mg/l resulted in initiation of longer roots (3.5-4 cm) after 3 weeks of culture [9]. However, a two-fold higher concentration of IBA (1 mg/l) resulted in delayed (28 days) and lower-frequency (80%) rooting with fewer (3-4) and shorter (3 cm) roots/shoot [35]. A similar outcome was also evident in the observations of Bimal and Shahnawaz [33], and Khan et al. [34] as well. Supplementation of a lower concentration of NAA (0.25 mg/l) with 1 mg/l IBA was reported to overcome this drawback and initiated as high as 94% rooting with 8.4 roots/shoot of 5.6 cm in length [10]. In addition to auxins (IBA in particular), the use of activated charcoal (AC) was reported to enhance in vitro rooting of A. vasica [38]. According to Gantait and Mandal [56], supplementation of AC offers an additional advantage by eliminating light and providing a reasonable physical environment for the rhizosphere, thereby helping rooting. Nevertheless, such an inductive effect of AC has not been tested in successive reports on A. vasica till date.
Acclimatization
The success of micropropagation eventually relies on the efficient transfer and adaptation of in vitro regenerated plantlets to ex vitro autotrophic environmental conditions with maximum survival [45]. During acclimatization, plantlets multiplied under in vitro conditions are exposed to a suitable growing condition that either assists them to grow rapidly or extirpates them as incompetent for the ex vitro environment. The incompetency is determined based on the inability of the in vitro regenerated plantlets to control water loss and their heterotrophic means of sustenance. That is why the relocation of in vitro regenerated plants to the ex vitro environment necessitates specified conditions (controlled humidity, light intensity and temperature) for effective acclimatization in the field or in a greenhouse [28]. The simplest substrate used for acclimatization of in vitro regenerated plantlets of A. vasica was garden soil, in which 90% survival of plantlets was recorded within 12 weeks of transfer [31]. The next successful and yet simple substrate was a mixture of sand and soil. Nath and Buragohain [47] reported acclimatization of 85% of plants in a sterilized sand and soil mixture (3:1); in a similar medium, Bimal and Shahnawaz [33] successfully acclimatized 80% of plants following their primary acclimatization under laboratory conditions. The effectiveness of a soil and sand (1:1; v/v) mixture during primary acclimatization was also proved by Panigrahi et al. [10], who recorded a survival rate of 95% within 4 weeks of transfer. Later on, they established plantlets in sand, soil and farmyard manure (1:1:1; v/v) for another 4 weeks for secondary acclimatization. Inclusion of common compost, vermicompost or farmyard manure enhanced the rate of acclimatization and increased the survival rate. According to Gantait et al. [57], compost and farmyard manure play a major role in moisture retention of the substrate apart from nutrient supply. It has already been established in several reports that retention of high humidity is a key component for high-frequency acclimatization. Based on this fact, Azad et al. [32] reported 80% post-acclimatization survival on garden soil, sand and compost (2:1:1). A comparable success rate (acclimatization of 80% of plantlets) was also observed by Khalekuzzaman et al. [35] in garden soil, sand and cow dung (1:1:1). Later, a much higher survival of 98.2% of plantlets was achieved in garden soil, sand and vermicompost (1:1:1) within 4 weeks of transfer [37]. In an exclusive experiment, Abhyankar and Reddy [8] used Soilrite as a substrate and intermittently sprayed liquid half-strength MS nutrient solution, which ensured a very high survival rate within 3 weeks of acclimatization.
Secondary metabolite production
The production of secondary metabolites can be fulfilled in a more sustainable approach via in vitro organogenesis as compared to extraction from in vivo or wild plant populations [58]. All parts of the A. vasica plant have medicinal value [21]. The production of pyrroloquinazoline alkaloids has been reported in A. vasica, vasicine and vasicinone being the most significant among them. The first report of vasicine production from leaf-derived callus culture of A. vasica was published by Jayapaul et al. [42]. They observed that the accumulation of vasicine was practically higher in leaf-derived callus induced in MS media fortified with NAA and BA. Later on, high-performance liquid chromatography study of various extracts of A. vasica revealed the presence of higher levels of vasicine than of vasicinone. In particular, the water extracts of this plant contained more vasicine, i.e., 5.98% of dry weight, whereas the amount of vasicinone was 5.2%. Other extracts, like methanolic and petroleum ether extracts, contained 2.8% and 0.187% vasicine on a dry weight basis, respectively [36]. Bhambhani et al. [7] successfully enhanced the production of vasicine in A. vasica by introducing elicitors into the cell culture. Elicitors such as chitosan, yeast extract, sodium salicylate, ascorbic acid, and methyl jasmonate (MeJ) were employed. This resulted in higher yields of vasicine (0.45% and 0.39%, based on dry weight), 3.7- and 3.2-fold higher in comparison to the control cultures (0.121%), when elicited with 20 mM MeJ and 50 mg/l yeast extract, respectively. Furthermore, Rashmi et al. [44] observed that vasicine production was higher under in vitro conditions in A. vasica (callus: 5.15 mg/ml; leaf suspension culture: 4.09 mg/l). A similar trend was observed by Madhukar et al. [40] when they assessed the callus culture of A. vasica via ultra-performance liquid chromatography/quadrupole-time-of-flight mass spectrometry (UPLC/Q TOF MS), wherein a 123.3% increase in vasicine content was observed compared to control plantlets. However, only a single report was documented concerning the production of vasicinone, from both in vivo and in vitro plant parts of A. vasica [59]. In that report, the maximum vasicinone content (6.402% of dry weight) was obtained from in vitro leaf, followed by in vitro shoot (2.007% of dry weight), making way for more efficient production of vasicinone.
Artificial seed production
Artificial seed production is considered a multifaceted technology that has become quite popular among researchers working on in vitro propagation and short/long-term conservation of threatened or endangered medicinal plant germplasms [60,61]. This technology is most suitable for the storage or exchange of precious plant germplasms, since it encapsulates a very small size of plant tissue or organ without disturbing the natural population. In this course, explants like apical or axillary shoot buds, nodal segments or somatic embryos are drenched in sodium alginate solution, and the aliquots with explants are dropped into calcium chloride solution to form the spherical artificial seeds. There are multiple examples of medicinal plants for which this technology has become indispensable where the propagation, storage and exchange of plant materials are concerned [62][63][64]. However, such a convenient technology has not been potentially used in A. vasica yet. There is a lone report of Anand and Bansal [41], who developed artificial seeds of A. vasica. They encapsulated the in vitro shoot buds in hydrogel (4% sodium alginate) with 1.1% (w/v) hydrated calcium chloride solution. The hydrogel was dissolved either in distilled water, in B5 medium alone or with 4.65 µM Kn, or in B5 medium with 4.65 µM Kn plus 50 mg/l phloroglucinol. They observed that encapsulated shoot buds (artificial seeds) retained their maximum morphogenetic competence when prepared with, and inoculated on, B5 medium with 4.65 µM Kn plus 50 mg/l phloroglucinol. The artificial seeds registered a maximum germination frequency of 66.28% and developed into complete plantlets within four weeks of inoculation. However, assessments of variable levels of sodium alginate, calcium chloride and germination medium are yet to be explored significantly. Additionally, no report on storage potential and post-storage phytochemical/molecular analysis exists, which might have generated useful information on A. vasica.
Outlook
Several facets of in vitro regeneration, like explant selection, surface sterilization, multiple shoot culture, callus culture and in vitro rooting of A. vasica, have been discussed aptly in this review. Interestingly, no researchers reported clonal fidelity analysis of regenerated plantlets, which is considered to be an integral part of a successful micropropagation protocol. Progress on in vitro intervention for its secondary metabolite production as well as artificial seed production has been highlighted as well. There are several other key and advanced applications based on in vitro regeneration that have not yet been attempted in this plant. Even though encapsulation of in vitro plant parts has immense utility for short-term storage or germplasm exchange and encapsulation-based cryopreservation, no such attempt has been made so far. Furthermore, a survey of the available literature found no information on genetic transformation of A. vasica either. Techniques of protoplast fusion as well as incorporation of desired genes via protoplast transformation could be aptly used to enhance the quality and quantity of secondary metabolites. However, as this area of genetic transformation has not been touched as of now, there is ample scope for the introduction of Agrobacterium-mediated transformation of root cultures to produce more quinazoline alkaloids. This appraisal provides a sufficient briefing about the ins and outs of in vitro culture, which would aid future Adhatoda researchers in further advanced study. | 2019-02-12T00:10:07.157Z | 2018-03-19T00:00:00.000 | {
"year": 2018,
"sha1": "a3dc762575e0b1563387d6a51ddfd8361af190d8",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.jgeb.2018.03.007",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a3dc762575e0b1563387d6a51ddfd8361af190d8",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
53121406 | pes2o/s2orc | v3-fos-license | Time-evolution of the fine-structure constant in runaway dilaton models
We study the detailed time-evolution of the fine-structure constant α in the string-inspired runaway dilaton class of models of Damour, Piazza and Veneziano [1, 2]. We provide constraints on this scenario via the time-variations of the fine-structure constant α as measured by spectroscopic experiments, and we explore ways to distinguish the dilaton runaway models from other alternatives.
Introduction
One of the main open questions in cosmology is the acceleration of the expansion of the Universe. The simplest model, a constant energy term, the so-called cosmological constant, fits the data very well, but it is unsatisfactory from a theoretical point of view.
An alternative way to model cosmic acceleration as coming from a dynamical energy component is through a scalar field, similar in its mathematical description to the Higgs boson, recently discovered at the Large Hadron Collider. Such scalar fields emerge quite naturally in string theory, which predicts the presence of a scalar partner of the spin-2 graviton, the dilaton. We will focus our analysis on the cosmological consequences of a particular class of string-inspired models, the runaway dilaton scenario of Damour, Piazza and Veneziano [1,2], and assess their testability by future facilities. Specifically, ELT-HIRES [3], an ultra-stable spectrograph for the E-ELT (European Extremely Large Telescope), will have two relevant capabilities: a direct measurement of the cosmic expansion by performing the so-called Sandage-Loeb test [3,4], and tests of the stability of the fine-structure constant (α = e²/ħc) at up to the 10⁻⁸ level.
Runaway Dilaton Cosmology
The Friedmann equation and the evolution equation for the scalar field (Φ) for this class of models take the standard coupled scalar-field form (a hedged sketch of these equations is given below). The total pressure p = Σᵢ pᵢ and the total energy density ρ = Σᵢ ρᵢ sum over all components except the kinetic part of the scalar field; the αᵢ(Φ) are the coupling constants between the dilaton and each component i, so they characterize the effect of the various components of the universe on the dynamics of the field. Notice in particular that the theory does not require the coupling constant to be the same for all components. Experimental constraints impose a tiny coupling to baryonic matter, parametrized by the constant free parameters b_F and c (c is expected to be of order unity and b_F very small). Weak equivalence principle tests lead to a bound on the present value of the coupling, which in turn yields a relation between b_F and Φ₀. Using the basic definition of the deceleration parameter, q = −1 − Ḣ/H², one can also derive a constraint on the present field velocity; here the dots represent derivatives with respect to time, and the prime derivatives with respect to the logarithm of the scale factor, ln(a). By solving the Friedmann equations for these classes of models, one finds the redshift evolution of the dilaton field relative to its present value (Fig. 1) and the evolution of the Hubble parameter for this type of model (Fig. 2), which should be compared to observational data. A comprehensive list of recent measurements is given in [7].
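Since the display equations of this section were lost in extraction, the following LaTeX sketch records only the generic form such a matter-coupled scalar-field system takes; the overall normalizations and the exponential ansatz for the baryonic coupling are our assumptions, and the exact expressions should be taken from refs. [1,2].

```latex
% Hedged reconstruction, not the paper's verbatim equations:
\begin{align}
  3H^2 &= 8\pi G\left(\rho + \tfrac{1}{2}\dot{\Phi}^2\right), \\
  \ddot{\Phi} + 3H\dot{\Phi} &\propto \sum_i \alpha_i(\Phi)\,(\rho_i - 3p_i), \\
  \alpha_b(\Phi) &\simeq b_F\, e^{-c\Phi}
  \quad \text{(assumed runaway-dilaton form of the baryonic coupling)}.
\end{align}
```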
Variations of fundamental constants
Recent observations suggest possible variations of the fine-structure constant in time and/or space [8]. Our analysis is focused on time-variations of α, and some specific measurements are listed in Table 1, including the recent first result of the UVES Large Program for Testing Fundamental Physics [5], which is expected to be the one with the best control of possible systematics. It is then interesting to study the behaviour of α in this class of models; as has been shown in [1], the evolution of α is governed by the evolution of the dilaton field, and integrating the corresponding equation yields the variation of α as a function of redshift. The present drift of α is locally constrained by the Rosenband bound, α̇/α = (−1.6 ± 2.3) × 10⁻¹⁷ yr⁻¹. From here one can then predict the redshift evolution of α, taking Φ₀ and Φ′₀ as free parameters, and perform a χ² analysis using both the H(z) data from [7] and the α data from Table 1 (see Figs. 3 and 4).
Future tests
The drift in the spectroscopic velocity of an object following the Hubble flow can be obtained from the definition of redshift; here c is the speed of light and Δt the time span of observation (the standard expressions are recalled in the sketch below). The precision needed to detect this signal is expected to be reached by future facilities such as the SKA, through intensity-mapping experiments, and ELT-HIRES (see [6] for the phase A study of the instrument), which will offer the unique advantage of observing this drift deep in the matter era (z ∼ 2 → 5) through spectroscopic measurements in the Lyman-α forest. ELT-HIRES is expected to reach a spectroscopic velocity precision parametrized in terms of the signal-to-noise ratio S/N, the number of targets observed N_QSO, and the redshift of the observed targets z_QSO.
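A minimal Python sketch of both the Sandage-Loeb signal and the velocity-precision scaling follows; the drift relations (ż = (1+z)H₀ − H(z) and Δv = cΔz/(1+z)) are standard, while the σ_v parametrization is the commonly quoted Liske et al. form, which we assume is the one intended here and which should be checked against [6].

```python
# Sketch of the Sandage-Loeb drift (flat LCDM shown for comparison) and an
# assumed ELT-HIRES-style velocity precision; not taken verbatim from the paper.
import numpy as np

C_KMS = 299792.458      # speed of light [km/s]
H0 = 67.4               # [km/s/Mpc], value used in the text
KM_PER_MPC = 3.0857e19
YR_S = 3.1557e7

def drift_velocity_lcdm(z, delta_t_yr, om=0.3):
    """Delta v = c*Delta z/(1+z), with zdot = (1+z)*H0 - H(z)."""
    Hz = H0 * np.sqrt(om * (1 + z) ** 3 + (1 - om))
    zdot = ((1 + z) * H0 - Hz) / KM_PER_MPC        # [1/s]
    delta_z = zdot * delta_t_yr * YR_S
    return C_KMS * delta_z / (1 + z) * 1e5         # [cm/s]

def sigma_v_cm_s(snr=3000, n_qso=40, z_qso=4.0):
    """Assumed Liske-style scaling: 1.35*(2370/SNR)*sqrt(30/N)*(5/(1+z))^1.8 cm/s."""
    return 1.35 * (2370 / snr) * np.sqrt(30 / n_qso) * (5 / (1 + z_qso)) ** 1.8

print(drift_velocity_lcdm(4.0, 30))  # about -15 cm/s over 30 yr at z = 4
print(sigma_v_cm_s())                # forecast uncertainty for these settings
```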
Using the same values of the parameters as previously (b_F fixed at 10⁻⁸, H₀ = 67.4 ± 1.4 km s⁻¹ Mpc⁻¹), one can then compute the behaviour of the redshift drift as a function of redshift for the runaway dilaton models, compare it to ΛCDM models, and verify whether or not one will be precise enough to put constraints on the parameter space of the model with ELT-HIRES. Fig. 5 compares the redshift drift of the standard model of cosmology (red) to the runaway dilaton one, the error bars being the expected accuracy that ELT-HIRES can provide. Figure 5. Redshift drift signal for the runaway dilaton class of models (blue), with model parameters in ranges allowed by observations, compared to the signal expected in ΛCDM (red) and the forecasted uncertainties for an observational time of Δt = 30 yrs, a signal-to-noise ratio of S/N = 3000 for 40 uniformly spaced systems divided into 4 bins. | 2019-08-16T16:05:32.669Z | 2014-12-12T00:00:00.000 | {
"year": 2014,
"sha1": "63af0de409635e0d55cb33dccceaa0b55c58ff70",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/566/1/012006/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "4fa296e6f986e10b23b3bf41e209661ddbfe5550",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
81983745 | pes2o/s2orc | v3-fos-license | The effects of high impact exercise intervention on bone mineral density, physical fitness, and quality of life in postmenopausal women with osteopenia
Abstract Osteoporosis and osteopenia prevail in postmenopausal women and predispose to osteoporotic fractures that increase mortality, morbidity, and the cost of social care. Here, we investigated the effect of 24 weeks of aerobic dancing on the bone mineral density, physical fitness, and health-related quality of life (HRQoL) of postmenopausal women with osteopenia. A total of 80 participants (control [CON]: 40; exercise [EX]: 40) were included in the final analysis. The EX group underwent a 24-week aerobic dance intervention. Bone mineral density (BMD), physical fitness, and the SF-36 questionnaire were assessed at baseline and 24 weeks. The BMD change in the femoral neck at 24 weeks was significantly different between the 2 groups (CON: −1.3 ± 2.7%, EX: 3.1 ± 4.6%, P = .001). Grip strength, sidestep, and the physical functional domain of HRQoL in the EX group were significantly improved compared to the CON group. The results suggest that a 24-week aerobic dance intervention could lower the incidence of bone fracture by increasing BMD and decreasing fall risk in postmenopausal women.
Introduction
Osteoporosis, a serious global health problem second only to cardiovascular disease, is characterized by bone loss and continuous destruction of the bone microstructure, and it prevails in postmenopausal women. [1,2] It leads to bone fragility and increases the risk of fractures. [3] Osteoporotic fractures increase mortality, morbidity, chronic pain, and the cost of social care, and thereby decrease health-related quality of life (HRQoL). [1,2,4,5] About 33% of women over the age of 50 have osteoporotic fractures, which result from falls. [6,7] Further, 35% to 45% of people aged 65 or older fall at least once a year, and falls increase in frequency and severity in older adults. Therefore, preventing falls and consequent osteoporotic fractures is particularly important in postmenopausal women. [3,6,7] In addition to pharmaceutical intervention for osteoporosis, nonpharmaceutical approaches such as physical activity were recently employed with the goal of decreasing bone loss as well as increasing muscle strength.
Aerobic dance is a high-energy exercise that improves cardiovascular endurance, consisting of impact, movement, balance, and agility. [8] It is a safe exercise with a relatively low incidence of injuries, [9] and it can improve physical fitness and reduce the risk of falling in older (≥72 years) women. [10] Moreover, dancing exercise with mild impact lasting for 12 months was reported to have a positive effect on bone mineral density (BMD). [11][12][13] Hence, aerobic dance seemed a reasonable intervention in postmenopausal women with osteopenia because of its benefits for physical fitness, especially agility and balance, as well as BMD. However, little light has been shed on the effect of a 24-week high-impact aerobic dance program on postmenopausal women with osteopenia. Therefore, the goal of this study was to investigate the effect of a 24-week aerobic dance on the BMD, physical fitness and HRQoL of postmenopausal women with osteopenia (T-score: −1 to −2.5). We hypothesized that a 24-week aerobic dance intervention would improve BMD, physical fitness, and the HRQoL of postmenopausal women with osteopenia.
Participants
Between August 2011 and August 2013, participants were enrolled from rural communities in southern Taiwan. The inclusion criteria were physically independent postmenopausal women with a diagnosis of osteopenia confirmed by dual-energy X-ray absorptiometry (DXA) (lumbar spine (L2-4), T-score of −1.0 to −2.5). The minimum and maximum ages of participants were 45 and 85 years, respectively. The exclusion criteria were women undergoing hormone-replacement therapy, with cognitive impairment, diabetes mellitus, bone fracture history, any medical conditions or taking any medications predisposing to poor bone quality, or any medical conditions that contraindicated administering the fitness assessment.
A twenty-four-week aerobic dance course was provided to this targeted population. Participants were included in the exercise (EX) group when they completed the aerobic dance course, while others who only received medication were enrolled as the control (CON) group. All participants were given 600 mg of calcium (oral) and 800 international units of vitamin D3 (oral) per day. No dietary control was applied to the participants during the intervention. All subjects gave their informed consent for inclusion before they participated in the study. The retrospective study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Ethics Committee and Institutional Review Board of the Chang Gung Memorial Hospital (IRB 99-3951B) and was also registered on ClinicalTrials.gov (ID: NCT02936336).
2.2. Intervention 2.2.1. The aerobic dance program. The aerobic dance intervention was done 3 times a week, on nonconsecutive days, for 24 weeks. Each 60-minute class began with 10 minutes of mild warm-up activities consisting of calisthenics and stretching, which were followed by 35 minutes of aerobic dance exercise as the core of the class. The choreography of the dance exercise consisted of the A step, V step, tap point, grapevine, march, leg curl, walking, and so on, and the class concluded with 10 to 15 minutes of cool-down activities. The intensity of the dance was set at 50% to 70% of each participant's target heart rate, monitored with POLAR FT40 monitors (Polar Electro Oy, Kempele, Finland). The steps were rhythmical, at 118 to 130 beats per minute, and accompanied by music. The program was held at night in the 4 senior citizens' community centers closest to the participants' residences. The same 3 researchers, assisted by a varying number of helpers depending on availability, supervised the participants.
Outcome assessments
All assessments in both groups were done at baseline (pretraining) and at 24 weeks (upon completing the 24-week aerobic dance program) in the Sports Medicine Center by the same experienced investigator, who was blinded to the participant allocation.
2.3.1. Anthropometry. Height and weight were measured using an automatic height and weight measurement instrument (HW-3030, Super-view, Taoyuan, Taiwan). Body mass index (BMI) (kg/m²) was calculated as follows: BMI = weight/height². The measurement was done twice and averaged to minimize bias.
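As a direct transcription of the formula above (weight in kilograms, height in meters):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index in kg/m^2."""
    return weight_kg / height_m ** 2

print(round(bmi(60.0, 1.55), 1))  # 25.0 kg/m^2, an illustrative example
```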
2.3.3. Physical fitness. Fitness assessments were done as previously described. [14] They included muscular strength (grip strength), balance (closed-eye foot balance), cardiorespiratory endurance (step test), flexibility (sitting trunk flexion), muscle endurance (sit-ups), power (Sargent jump), and agility (reaction time and sidestep) and done at the Sports Medicine Center, xxxx Hospital using the HELMAS Physical Fitness Management System (O2run, Co, Ltd, Seoul, Korea).
2.3.4. HRQoL. The Short-Form Health Survey questionnaire (SF-36) is commonly used to evaluate participants' HRQoL in clinical practice. The questionnaire contains 8 health domains: physical function, role limitation due to physical problems, bodily pain, general health, vitality, social functioning, role limitation due to emotional problems, and mental health. The 8 domains can be used to provide physical and mental component summary scores.
Sample size
We assumed a mean BMD difference of 1.5% between the CON and EX groups. [15,16] We calculated that 40 patients were required per group to achieve a power of 0.9 at a 5% significance level, and we estimated that 25% of the participants would be lost to follow-up. Therefore, the proposed sample size was 50 patients in each group.
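The calculation can be reproduced approximately with a standard two-sample power analysis, as in the hedged Python sketch below. The paper reports the assumed difference (1.5%), power (0.9), and significance level (0.05), but not the standard deviation; the SD used here (about 2.1 percentage points) is an assumption chosen so that the result is close to the reported 40 per group.

```python
# Hedged reconstruction of the sample size calculation. The standard
# deviation is NOT reported in the paper; 2.1 percentage points is an
# assumption that roughly reproduces the reported n of 40 per group.
from statsmodels.stats.power import TTestIndPower

diff, sd = 1.5, 2.1                    # % BMD change: assumed difference, SD
n = TTestIndPower().solve_power(effect_size=diff / sd,
                                alpha=0.05, power=0.9)
print(round(n))                        # roughly 40 completers per group

# Inflating enrollment for the anticipated 25% loss to follow-up:
print(round(n * 1.25))                 # roughly 50 enrolled per group
```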
Blinding
An independent assessor blinded to the grouping and patients' demographic data performed the outcome assessments.
Statistical analysis
SPSS 17 for Windows (SPSS, Chicago, IL) was used for all analyses. All continuous data are presented as means ± standard deviation. Normality was assessed using the Shapiro-Wilk test. Independent t tests were used to assess the differences between the EX and CON groups. Paired sample t tests were used to analyze changes from pre-training within the groups. The Mann-Whitney U test was used to assess physical fitness differences between the EX and CON groups. The Wilcoxon signed-rank test was used to analyze physical fitness changes from pre-training within the groups. Significance was set at P < .05.
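For readers replicating the analysis outside SPSS, the pipeline maps directly onto SciPy, as in the minimal sketch below; the arrays are simulated stand-ins for the study's outcome data.

```python
# Minimal sketch of the statistical pipeline with SciPy instead of SPSS.
# The arrays are simulated stand-ins, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ex_pre = rng.normal(0.63, 0.10, 40)     # e.g., femoral neck BMD, EX baseline
ex_post = rng.normal(0.64, 0.10, 40)    # EX at 24 weeks
con_post = rng.normal(0.64, 0.10, 40)   # CON at 24 weeks

print(stats.shapiro(ex_post))              # normality (Shapiro-Wilk)
print(stats.ttest_ind(ex_post, con_post))  # between groups (independent t)
print(stats.ttest_rel(ex_post, ex_pre))    # within group (paired t)

# Non-parametric counterparts used for the physical fitness scores:
print(stats.mannwhitneyu(ex_post, con_post))   # between groups
print(stats.wilcoxon(ex_post - ex_pre))        # within group
```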
Results
From August 2011 to July 2013, 100 postmenopausal women who met our inclusion criteria were enrolled in the present study. Fifteen participants, 8 in the CON group and 7 in the EX group, were excluded because of loss to follow-up. Another 5 participants, 2 in the CON group and 3 in the EX group, were excluded because they discontinued the intervention. Eighty participants, 40 in the CON group and 40 in the EX group, were included in the final analysis. There were no significant differences in mean age, height, weight, or BMI between the CON and the EX groups (Table 1). The median attendance was 58 of 72 sessions, corresponding to an average program adherence rate of about 81%. None of the participants in the EX group reported discomfort or injury that needed further treatment during the 24-week training. After the aerobic exercise intervention, weight and BMI in the EX group were decreased compared with baseline (Table 1). There were no significant differences in BMD between the CON and the EX groups at baseline. Although there were no detectable between-group differences after the 24-week aerobic dance program, the two groups evolved differently over time. In within-group comparisons, femoral neck BMD in the EX group was 0.626 ± 0.097 g/cm² at baseline and 0.643 ± 0.09 g/cm² at the 24-week assessment (P < .01). In the CON group, femoral neck BMD was 0.646 ± 0.115 g/cm² at baseline and 0.637 ± 0.112 g/cm² at 24 weeks (P < .01). Femoral neck BMD thus increased significantly in the EX group, while it decreased in the CON group (Table 2). The changes in femoral neck BMD at 24 weeks were −1.3 ± 2.7% and 3.1 ± 4.6% for the CON and the EX groups, respectively (P = .01) (Fig. 1). However, there were no significant differences in the changes of spine BMD between baseline and the 24-week assessment in either the CON or the EX group (AP, CON: P = .712, EX: P = .912; lateral, CON: P = .316, EX: P = .628).
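As a quick check, the percent changes can be recomputed from the reported group means, as in the short sketch below. Note that the paper's figures (−1.3% and 3.1%) were presumably computed per participant and then averaged, which need not equal the change of the group means.

```python
# Worked example: percent change in femoral neck BMD from the reported group
# means. Per-participant averaging (as presumably done in the paper) need not
# match the change of group means computed here.
def pct_change(pre, post):
    return (post - pre) / pre * 100.0

print(round(pct_change(0.626, 0.643), 1))   # EX: ~2.7 (paper: 3.1 +/- 4.6)
print(round(pct_change(0.646, 0.637), 1))   # CON: ~-1.4 (paper: -1.3 +/- 2.7)
```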
In the physical fitness assessment, there were no baseline differences between the CON and the EX groups in muscular strength, balance, cardiorespiratory endurance, flexibility, muscle endurance, power, or agility. After the 24-week aerobic dance program, grip strength was 32.2 (17.0-25.5) and 43.7 (21.0-27.0) kg for the CON and the EX groups, respectively (P = .021). Meanwhile, the side step was 30.9 (12.0-21.0) repetitions and 42.6 (17.8-24.0) repetitions for the CON and the EX groups, respectively (P = .017). Grip strength and side step thus increased in the EX group at 24 weeks (Table 3). In within-group comparisons, grip strength, sidestep, and reaction time were significantly improved at 24 weeks relative to baseline in the EX group (grip strength, P = .016; sidestep, P < .001; reaction time, P = .001), but not in the CON group.
For the subjective outcome assessed with the SF-36, the EX group showed a significant increase in the physical function score compared with the CON group at 24 weeks (Table 4). No such differences were found in the other domains between the CON and the EX groups.
Discussion
The major findings of this study were that 24 weeks of aerobic dance improved femoral neck BMD, as well as grip strength, sidestep, and reaction time, in postmenopausal women with osteopenia. The significant change in femoral neck BMD, but not in spine BMD, may reflect a greater influence of high-impact exercise on cortical bone than on cancellous (trabecular) bone. Aerobic exercise typically involves a high volume of low-intensity muscular contractions, through which the muscles increase in size and their work capacity increases significantly; accordingly, grip strength and side step performance improved. Meanwhile, the regular exercise intervention facilitated neuromuscular control of the body, so reaction time improved as well. However, no changes in the step test, sit-ups, or closed-eye (EC) balance were observed. A longer duration of aerobic dance may be required to produce significant differences in these performances. Improvement in physical function was also demonstrated in the SF-36 questionnaire assessment. During the intervention, no participants in the EX group reported discomfort or injury that needed further treatment, suggesting that the aerobic dance protocol in this study was safe and feasible for postmenopausal women. Osteoporosis is prevalent in postmenopausal women and is usually associated with osteoporotic fractures, which increase mortality, morbidity, chronic pain, and the cost of social care, and decrease HRQoL. [1,2,4,5] Pharmaceutical and nonpharmaceutical approaches have been developed to increase BMD, since higher BMD is protective against fractures of the femoral neck through a higher tolerance of the impact from falls. [17,18] In the literature, it has been suggested that 12 months of impact exercise intervention is effective for improving BMD. [15,16] Paralleling the literature, the present study further demonstrated that 24 weeks of aerobic dance intervention was effective in improving femoral neck BMD.
On the other hand, osteoporotic fractures are often the result of falls. [6,7] Therefore, preventing falls is vital for reducing the incidence of osteoporotic fractures in postmenopausal women. [3,6,7] The present study showed that grip strength increased in the EX group but not in the CON group. Grip strength is an indicator for the prediction of functional limitations [19] and disabilities in older adults. Low grip strength leads to poor mobility [20][21][22] and is correlated with an increased incidence of falls. [23] The present study also demonstrated that agility, i.e., side step and reaction time, improved in the EX group through the 24-week aerobic dance program. Indeed, agility-based training has been suggested to be effective in reducing falls. [24][25][26][27][28] Therefore, it is possible that a 24-week aerobic dance intervention could reduce the incidence of falls in postmenopausal women.
In the SF-36, the 24-week aerobic dance program was effective in improving the physical function domain. Indeed, older adults who experienced falls have been shown to score lower on the SF-36 physical function scale than those who did not. [29] Taken together, we found that a 24-week aerobic dance intervention resulted in favorable outcomes in osteoporotic-fracture-associated factors, including femoral neck BMD, muscle strength, agility, and physical function.
Several limitations of the present study must be acknowledged. First, the small number of patients might limit the generalizability of the conclusions. However, this study involved a precisely prescribed quantity of exercise intervention over a 24-week period, the program adherence was 81%, and differences were statistically detected in femoral neck BMD, muscle strength, and agility. Second, this study was limited by its short follow-up. Long-term follow-up, including the occurrence of falls and osteoporotic fractures, would provide information regarding the ultimate influence of aerobic dance in postmenopausal women.
Conclusions
In conclusion, aerobic dance is safe, effective, and efficient in improving health in postmenopausal women: BMD of the femoral neck, grip strength, sidestep, and reaction time, as well as the physical function domain of the SF-36, were significantly improved after the aerobic dance intervention. | 2019-03-19T13:02:31.285Z | 2019-03-01T00:00:00.000 | {
"year": 2019,
"sha1": "19b1b59419f6e2f31b531fd7b897f6a0969f2f82",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1097/md.0000000000014898",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "19b1b59419f6e2f31b531fd7b897f6a0969f2f82",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
204886072 | pes2o/s2orc | v3-fos-license | Histone Methylations Define Neural Stem/Progenitor Cell Subtypes in the Mouse Subventricular Zone
Neural stem/progenitor cells (NSPCs) persist in the mammalian brain throughout life and can be activated in response to physiological and pathophysiological stimuli. Epigenetic reprogramming of NSPCs represents a novel strategy for enhancing the intrinsic potential of the brain to regenerate after brain injury. Therefore, defining the epigenetic features of NSPCs is important for developing epigenetic therapies for targeted reprogramming of NSPCs to rescue neurologic function after injury. In this study, we aimed to define different subtypes of NSPCs by individual histone methylations. We found that three histone marks, histone H3 lysine 4 trimethylation (H3K4me3), histone H3 lysine 27 trimethylation (H3K27me3), and histone H3 lysine 36 trimethylation (H3K36me3), dynamically portray individual cell types during neurodevelopment. First, we found that all three marks co-stained with the NSPC marker SOX2 in the mouse subventricular zone. Then, CD133, Id1, Mash1, and DCX immunostaining were used to define NSPC subtypes. Type E/B, B/C, and C/A cells showed high levels of H3K27me3, H3K36me3, and H3K4me3, respectively. Our results reveal defined histone methylations of NSPC subtypes, supporting that epigenetic regulation is critical for neurogenesis and for maintaining NSPCs. Electronic supplementary material The online version of this article (10.1007/s12035-019-01777-5) contains supplementary material, which is available to authorized users.
Introduction
In the postnatal mammalian brain, most of the neural stem/progenitor cells (NSPCs) are spatially restricted to two specific brain regions: the subgranular zone (SGZ) in the dentate gyrus of the hippocampus and the subventricular zone (SVZ) of the lateral ventricles [1]. In the SVZ niche, the major site for NSPCs in the postnatal central nervous system (CNS), four major NSPC cell types have been identified: ependyma-like stem NSPCs (type E cells), quiescent or dormant NSPCs (qNSCs; type B cells), transient amplifying progenitors (TAPs; type C cells), and migrating neuronal precursors (neuroblasts; type A cells) [2,3] (Fig. 1b). NSPCs in the SVZ can be activated in response to physiological and pathophysiological stimuli, whereby they initiate CNS repair and functional recovery [4]. Therefore, understanding the dynamic regulation of NSPC subtypes may provide new insight for developing novel treatment modalities for CNS diseases.
Histone modifications are post-translational modifications to histone proteins and include methylation, phosphorylation, acetylation, ubiquitylation, and sumoylation. These modifications have biological roles, can be inherited, and are referred to as epigenetic marks. Specific histone methylation marks at promoter regions affect transcription activities [5]. Generally, histone H3 lysine 4 trimethylation (H3K4me3) and histone H3 lysine 36 trimethylation (H3K36me3) are associated with active promoters and the gene bodies of actively transcribed genes, resulting in increased transcription activity, whereas histone H3 lysine 27 trimethylation (H3K27me3) is linked to transcriptional repression [6].
H3K4me3, H3K36me3, and H3K27me3 have pivotal and distinct roles in different stages of neurodevelopment, and aberrant regulation of histone methylation contributes to the pathogenesis of various CNS disorders [7]. Many embryonic stem cell (ESC) promoters combine activating H3K4me3 marks and repressive H3K27me3 marks, and these bivalent domains are important, dynamically regulated targets in the expression of developmental genes [8]. H3K36me3 is markedly enriched at pericentromeric heterochromatin in ESCs and fibroblasts [9]. Even though both H3K4me3 and H3K36me3 are transcriptional activators, H3K36me3 predominates in the transcribed bodies of genes, whereas nucleosomes near the transcription start sites of active genes contain H3K4me3 [10]. However, we have limited understanding of the function of the dynamic changes in these histone methylation marks during neurodevelopment.
In this study, we observed distinct features of histone methylation in the different subtypes of NSPCs during neurodevelopment. Type E/B cells are marked by high levels of H3K27me3, type B/C cells showed high levels of H3K36me3, and H3K4me3 is specific for type C/A cells. These results may reveal new insight into the onset of neurodevelopment and provide an innovative epigenetic signature for discovery and characterization of key regulatory genes/regions for neurogenesis.

Fig. 1 H3K27me3, H3K36me3, and H3K4me3 co-located with SOX2 during neurodevelopment in SVZ. Schematics of the cell layers and cell types in the embryonic (a) and adult (b) brain. Immunofluorescent staining showed that high levels of H3K27me3, H3K36me3, and H3K4me3 co-stained with SOX2 at E18 (c), P10 (d), and 2M (e). Nuclei were counterstained with DAPI. E18, embryo at day 18; P10, postnatal at day 10; 2M, adults at 2 months. Scale bar = 50 μm
Material and Methods
Animals
The C57BL/6N mouse strain was used for this research. All mouse experiments were approved by the Animal Research Committee and the Norwegian Food Safety Authority (NFDA) and conducted in accordance with the rules and regulations of the Federation of European Laboratory Animal Science Associations (FELASA). The staff at Komparativ Medisin (KPM), Oslo University Hospital, was responsible for housing and daily maintenance. Housing and environmental enrichment were according to standards. All efforts were made to minimize animal suffering and to keep the number of animals used to a minimum.
Method Details
P10 and adult mice were anesthetized and transcardially perfused with normal saline followed by 4% paraformaldehyde (PFA, sc-281692, Santa Cruz Biotechnology, Dallas, TX, USA). Ten milliliters of normal saline and 25 ml of 4% PFA were used for P10 mice, while adult mice were perfused with 25 ml of normal saline and 50 ml of 4% PFA. For E18 mice, pregnant dams were sacrificed at E18, and the fetal brains were dissected in cold PBS and then immersed in 4% PFA for fixation. All brains were dissected and post-fixed in 4% PFA overnight at 4°C, followed by paraffin embedding. Four-micrometer serial brain slices were coronally sectioned with a microtome (HM355s, Thermo Scientific, Waltham, MA, USA) and mounted onto glass slides. These sections were used for immunostaining. The slides were deparaffinized and cleared in Clear-Rite™ 3 (6901TS, Thermo Scientific), followed by rehydration in an EtOH gradient. The slides were then heated to 95°C in antigen retrieval buffer (3 g sodium citrate (25114, Sigma-Aldrich), 0.4 g citric acid (251275, Sigma-Aldrich), 1000 mL H2O, pH 6.0) for 30 min, followed by washing with 0.01 M PBS (all washes were performed three times, 5 min each). The slides were permeabilized with 0.3% Triton X-100 (T8787, Sigma-Aldrich, St. Louis, MO, USA) for 20 min, rinsed, and then blocked for 2 h with blocking buffer (5% normal goat serum (G9023, Sigma-Aldrich) and 5% bovine serum albumin (A7096, Sigma-Aldrich)). The samples were incubated with the primary antibodies (Table 1)
Quantification and Statistical Analysis
The levels of histone methylation and the numbers of double-positive cells were measured and defined using Image-Pro Plus 5.1. Differences between groups were analyzed using one-way ANOVA, followed by Tukey's post hoc test. All statistical analyses were performed using GraphPad Prism 5. Data are shown as mean ± standard deviation, and P < 0.05 was considered a statistically significant difference.
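The same analysis can be sketched in Python, as below; the co-staining percentages are illustrative placeholders, not the study's measurements.

```python
# Hedged sketch of the quantification step: one-way ANOVA followed by Tukey's
# post hoc test, using SciPy/statsmodels in place of GraphPad Prism. The
# percentages are illustrative placeholders, not the study's data.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

h3k27 = np.array([72.0, 75.0, 74.0])   # hypothetical % co-staining, 3 mice
h3k36 = np.array([40.0, 44.0, 42.0])
h3k4 = np.array([18.0, 21.0, 20.0])

print(f_oneway(h3k27, h3k36, h3k4))    # overall difference among the marks

values = np.concatenate([h3k27, h3k36, h3k4])
groups = ["H3K27me3"] * 3 + ["H3K36me3"] * 3 + ["H3K4me3"] * 3
print(pairwise_tukeyhsd(values, groups))   # pairwise comparisons
```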
Results
High Levels of H3K27me3, H3K36me3, and H3K4me3 in Neural Stem/Precursor Cells during Neurodevelopment in SVZ

To characterize the dynamics of histone methylations during neurodevelopment, we collected mouse brains at different time points of early life: embryo at day 18 (E18), postnatal day 10 (P10), and adults at 2 months (2M). We then examined the levels of three different histone methylation marks (H3K27me3, H3K36me3, and H3K4me3) by immunofluorescence staining. All three histone marks showed the strongest staining in neurogenic niches (e.g., SVZ and SGZ) during neurodevelopment, although the intensity varied among the three time points. In the SVZ, the three tested histone marks showed co-localization with the established NSPC marker SOX2 at all three time points of development studied (Fig. 1c-e). In the SGZ, SOX2 co-localized with H3K4me3 and H3K36me3 at all three time points (Supplementary Fig. 1 A-D).
Notably, H3K4me3, H3K36me3, and H3K27me3 stained distinct parts of the SVZ, particularly at P10. H3K27me3 showed the strongest staining of the ependymal cell layer, and the H3K36me3 level was high in the surrounding striatal parenchyma as well as the ependymal cell layer at the lateral ventricle. In contrast, H3K4me3 staining was strongest between the ependymal cell layer and the striatal parenchyma (Fig. 1d). Previously, it has been demonstrated that, in the postnatal mouse brain, type B cells locate between type A cells and the underlying striatal parenchyma as well as between type A cells and the ependymal cells, and that type C cells locate around type A cells (Fig. 1b) [11,12]. These results suggest that histone methylation may define different subtypes of NSPCs.
High Level of H3K27me3 in CD133-Positive Cells at Early Postnatal Neurodevelopment
In the postnatal mouse brain, CD133 (also known as prominin-1) is a marker for type E/B cells; Id1 marks type B/C cells (type C cells are Id1 positive, although at significantly lower levels relative to type B cells) [13]; type C cells express the highest levels of Mash1 (also known as Ascl1); and DCX marks type A cells [14]. Immunocytochemical double labeling revealed that 74% of CD133-positive cells showed high levels of H3K27me3. In contrast, fewer CD133-positive cells co-stained with H3K36me3 (42%) and H3K4me3 (20%) (Fig. 2a, b). In the adult SVZ, the number of CD133-positive cells decreased markedly, and there was no significant difference in immunocytochemical double labeling for the three histone methylation marks at this stage (Fig. 5a, b). The anatomical structure of the embryonic and postnatal mouse brain is noticeably different (Fig. 1a, b). In the embryonic mouse brain, H3K27me3 and H3K36me3 showed high levels in the ventricular zone (VZ), while cells with high levels of H3K4me3 were located in the SVZ (Fig. 1c). As expected, most cells in the VZ and SVZ were CD133 positive. Furthermore, 74% of the CD133-positive cells co-stained with H3K4me3, while H3K27me3 and H3K36me3 showed 34% and 51% co-staining with CD133, respectively (Fig. 3a, b). Thus, it appears that high levels of H3K27me3 are displayed in ependymal and quiescent neural stem cells in the SVZ (type E/B) at early postnatal neurodevelopment.
High Level of H3K36me3 in Id1-Positive Cells at Early Postnatal Neurodevelopment
Similarly, we used immunocytochemical double labeling to identify co-localization of Id1 and the three histone methylation marks. Seventy-three percent of Id1-positive cells co-stained with H3K36me3, significantly more than with H3K27me3 (42%) and H3K4me3 (29%) (Fig. 2c, d). Analogous to the CD133 staining, the number of Id1-positive cells was reduced dramatically in adulthood, and double labeling revealed minor differences for the three histone methylation marks (Fig. 5c, d). In the embryonic mouse brain, most of the Id1-positive cells were located in the VZ, and the majority of Id1-positive cells co-stained with H3K27me3 (79%) and H3K36me3 (68%) (Fig. 3c, d).
However, only 44% of H3K4me3-positive cells co-stained with Id1. These findings may indicate that H3K36me3 is a good marker for quiescent and active neural stem cells (type B/C) at early postnatal neurodevelopment.
High Level of H3K4me3 in Mash1 and DCX-Positive Cells at Postnatal Neurodevelopment
Mash1 (also known as Ascl1) is characterized as a proneural transcription factor and is typically used as a type C cell marker. DCX is expressed in the last stage before NSPCs migrate through the rostral migratory stream (RMS) [14]. Therefore, Mash1 and DCX were used for labeling type C and type A cells, respectively. Immunocytochemical double labeling identified 66% of Mash1-positive cells co-staining with H3K4me3 at P10, while very low co-staining was observed for H3K27me3 (6%) and H3K36me3 (25%) (Fig. 2e, f). Embryonic brain staining showed that Mash1-positive cells appeared in the SVZ; similar to P10, 82% co-stained with H3K4me3, with very low H3K27me3 (10%) and H3K36me3 (13%) co-staining (Fig. 4a, b). In adulthood, 58% of Mash1-positive cells co-stained with H3K4me3, 2% with H3K27me3, and 54% with H3K36me3 (Fig. 5e, f). Double immunostaining was also used for detecting DCX together with the different histone methylations. During neurodevelopment, the number of H3K4me3/DCX double-positive cells was significantly higher than the number of H3K27me3 or H3K36me3 double-positive cells (Fig. 2g, h; Fig. 4c, d; and Fig. 5g, h). Thus, both type C and type A cells are represented by H3K4me3.
H3K4me3- and H3K36me3-Positive Cells Co-Stain with Proliferation Markers at Early Neurodevelopment
To further evaluate histone methylation in the proliferating cells of the early developing SVZ, co-staining with the proliferation markers Ki-67 and PCNA was analyzed. We identified a noticeable difference: most Ki-67-positive cells co-stained with H3K36me3 (87%) or H3K4me3 (86%), while only 3% of H3K27me3-positive cells co-stained with Ki-67 at P10 (Fig. 6a, b). Similarly, just 4% of PCNA-positive cells co-stained with H3K27me3, compared with 70% for H3K36me3 and 75% for H3K4me3 (Fig. 6c, d). These results strongly indicate that high levels of H3K36me3 and H3K4me3 correlate very well with proliferating cells in SVZ at early postnatal neurodevelopment.

(Figure caption fragment) Scale bar = 50 μm. The square frames are enlarged to show typical details of high (red) and low (yellow) levels of the different histone methylation features. b, d The number of immunolabeled cells was counted in three sections per mouse; each value represents the mean ± SD of three mice (n = 3). **P < 0.01, ***P < 0.001 versus the H3K27me3 group. P10, postnatal day 10
Discussion
Traditional therapies for CNS diseases are limited. For example, treatment of clinical stroke by the administration of tissue plasminogen activator and the recent introduction of mechanical thrombectomy can only be used in a limited proportion of patients due to time constraints [15]. Accordingly, continuing efforts are needed to develop novel, safe, and more optimal and effective therapeutic strategies for CNS diseases. The dynamic regulation of histone methylations and chromatin remodeling plays essential roles in development, cellular differentiation, and cell fate maintenance [16]. More importantly, emerging evidence supports the involvement of histone methylation in the pathogenesis of CNS damage and several neurodegenerative diseases [17,18]. In this study, we reveal how different histone methylation marks are dynamically regulated during NSPC differentiation in the mouse SVZ area, represented as marked differences in histone methylations between quiescent and active NSPCs (Fig. 7). As NSPCs can be activated by CNS damage and participate in CNS repair and functional recovery, our study may bring a novel perspective to therapeutic strategies for CNS diseases and provide potential histone methylation features for screening and identifying key therapeutic genes. SOX2 maintains the stemness of NSPCs in a slowly proliferating stem cell state by repressing the cell cycle regulator cyclin D1 during cortex development [19]. When NSPCs enter the stage of differentiation, the levels of SOX2 decrease, which releases this repression and thus promotes cell cycle re-entry and NSPC proliferation [20]. In this study, the SOX2 staining results showed that the number of SOX2-positive cells in the SVZ gradually decreased during neurodevelopment. Notably, most cells with high levels of H3K27me3 showed high levels of SOX2, whereas cells with high levels of H3K36me3 showed low co-staining with SOX2. Furthermore, H3K4me3 and SOX2 co-staining was rare in the SVZ. There is a positive correlation between the expression of SOX2 and the stemness of NSPCs [21]. Thus, our results define histone methylations specific for SOX2-positive NSPCs. Moreover, we reveal that high levels of H3K27me3 exist in the early stage of NSPC development; H3K36me3 is characteristic of the intermediate stage, while H3K4me3 is enriched in the mid and later stages of NSPC development.
In the mammalian embryo brain, the proliferative region comprises two distinct zones: the VZ, a neuroepithelial layer directly adjacent to the ventricular lumen, and the SVZ, positioned superficial to the ventricular zone [22] (Fig. 1a). Radial glial cells (RGCs, one type of embryonic neural stem cell) reside in the VZ and generate both intermediate progenitor cells (IPCs, one type of embryonic neural precursor cell) and cortical neurons. IPCs migrate away from the ventricular surface and establish the SVZ [23]. Therefore, the cellular composition differs between the VZ and SVZ: RGCs are mostly concentrated in the VZ, and most IPCs are located in the SVZ. In this study, embryo brain staining showed high levels of H3K27me3 and H3K36me3 in the VZ, and of H3K4me3 in the SVZ. This suggests high levels of H3K27me3 and H3K36me3 at the early stage of embryonic neural stem cell development, and of H3K4me3 at the middle/late stage.

Fig. 7 Schematic model of the developmental process of NSPCs projected from this study. Histone methylations are dynamically changed during NSPC differentiation in the mouse SVZ area. Different subtypes of NSPCs present different patterns of histone methylations. Specifically, type E/B cells are marked by high levels of H3K27me3, type B/C cells show high levels of H3K36me3, and H3K4me3 is specific for type C/A cells.
Further, our results identify significant differences in immunocytochemical double labeling in the P10 SVZ. However, these distinct features were not observed at 2 months or at E18. The number of NSPCs in the SVZ decreased significantly during neuronal development, and the dynamics of histone methylations described here might be one of the mechanisms underlying this regulation and might encode the difference between embryonic NSPCs and adult NSPCs. One major difference between adult and embryonic neural stem cells is their number and their ability to differentiate into various cell types. Embryonic NSPCs can divide asymmetrically to generate neurons, directly or indirectly through intermediate progenitor cells, and oligodendrocytes. More importantly, at the end of embryonic development, embryonic NSPCs begin to detach from the apical side and convert into astrocytes. Although adult NSPCs can continue to generate neurons and oligodendrocytes, they cannot differentiate into astrocytes [24]. Histone methylation introduces epigenetic modifications with close ties to transcription and has been directly linked to lifespan regulation in many organisms [25]. For example, upon differentiation towards the neuronal lineage, some bivalent genes became expressed and lost the H3K27me3 mark, whereas those that were silenced lost H3K4me3 and retained H3K27me3 [26]. Therefore, it is not unlikely that the embryonic and adult NSPC states are maintained by differential histone methylation profiles.
Chromatin, the template for epigenetic regulation, is a highly dynamic entity that is constantly reshaped during neurodevelopment [27]. Epigenetic regulation by histone methylation provides the necessary plasticity for cells to respond to environmental and positional cues, enabling the maintenance of acquired information without changing the DNA sequence. In this study, we showed that different subtypes of NSPCs present different histone methylation features. These results may reveal novel insight into the onset of neurodevelopment and provide an innovative epigenetic signature for the discovery and characterization of key regulatory genes for neurogenesis. However, further studies, especially whole-epigenome analysis and histone profiling, are necessary for an in-depth understanding of the role of individual histone methylation domains in neurodevelopment.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. | 2019-10-26T16:59:41.523Z | 2019-10-25T00:00:00.000 | {
"year": 2019,
"sha1": "54468493a6605f373202b02fb8b568991e2cd07b",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12035-019-01777-5.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "54468493a6605f373202b02fb8b568991e2cd07b",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
263649202 | pes2o/s2orc | v3-fos-license | Liberalisation of education in Cameroon: The liberating-paralysing impact on nursing education
Contemporary nursing has significantly progressed since the Nightingale era. Entry to practice in many countries is now set at the Bachelor’s level. The scope of practice is widening: nurses now have prescribing rights; lead in chronic disease management; and provide increasing access to quality care, at relatively affordable costs.[1] Nurses even run primary care services employing general practitioners.[2] They play leading roles in developing care models that deliver better patient outcomes.[3] Their role in healthcare policy development is also increasing. Hall-Long[4] has argued that their proactive involvement in health policy development drives excellence in nursing practice, scholarship and education. Despite the progress made, some nurses in clinical settings avoid becoming involved in policy debates.[5] In some cases, nurses’ role in health policy development remains unclear.[6] Ellenbecker et al.[7] propose educating nurses in health policy to solve this problem. However, education alone will not solve the problem. Rafferty[2] has observed that nurses’ voice in policy development has always been weak. Their presence and status in policy decision-making is minor,[8] and even in cases where they have the competence, their voices are not heard.[9] Health policies, like other national policies, are usually determined by governments. If nurses want national policies to reflect nursing values, they will have to influence those policies.[10] This means that they need to skillfully align their goals with government interests. Three conditions are necessary for this to happen: first, the context has to be ready for change; second, the interests of the profession and the government need to align; and third, some contingency or factor is needed to create an intervention urgency.[11] Cameroon is a central African country with a centralised system of government. Nursing education until the late 1990s and early 2000s was controlled by the Ministry of Health (MOH), and was diploma-based.[12] The Ministry of Higher Education (MHE) and the Ministry of Employment and Vocational Training (MEVT) began running nursing programmes at the time of the liberalisation laws of the early 2000s. While all three ministries ran diploma programmes, only the MHE could run degree programmes. Considering organised nursing’s relative lack of influence over government policy structures, nursing has struggled to respond to these changes. The present study, conducted as part of an investigation of nursing education in Cameroon, analyses the effect of the liberalisation of higher education (HE) on nursing education.
Contemporary nursing has significantly progressed since the Nightingale era. Entry to practice in many countries is now set at the Bachelor's level. The scope of practice is widening: nurses now have prescribing rights; lead in chronic disease management; and provide increasing access to quality care, at relatively affordable costs. [1] Nurses even run primary care services employing general practitioners. [2] They play leading roles in developing care models that deliver better patient outcomes. [3] Their role in healthcare policy development is also increasing. Hall-Long [4] has argued that their proactive involvement in health policy development drives excellence in nursing practice, scholarship and education.
Despite the progress made, some nurses in clinical settings avoid becoming involved in policy debates. [5] In some cases, nurses' role in health policy development remains unclear. [6] Ellenbecker et al. [7] propose educating nurses in health policy to solve this problem. However, education alone will not solve the problem. Rafferty [2] has observed that nurses' voice in policy development has always been weak. Their presence and status in policy decision-making is minor, [8] and even in cases where they have the competence, their voices are not heard. [9] Health policies, like other national policies, are usually determined by governments. If nurses want national policies to reflect nursing values, they will have to influence those policies. [10] This means that they need to skillfully align their goals with government interests. Three conditions are necessary for this to happen: first, the context has to be ready for change; second, the interests of the profession and the government need to align; and third, some contingency or factor is needed to create an intervention urgency. [11] Cameroon is a central African country with a centralised system of government. Nursing education until the late 1990s and early 2000s was controlled by the Ministry of Health (MOH), and was diploma-based. [12] The Ministry of Higher Education (MHE) and the Ministry of Employment and Vocational Training (MEVT) began running nursing programmes at the time of the liberalisation laws of the early 2000s. While all three ministries ran diploma programmes, only the MHE could run degree programmes. Considering organised nursing's relative lack of influence over government policy structures, nursing has struggled to respond to these changes. The present study, conducted as part of an investigation of nursing education in Cameroon, analyses the effect of the liberalisation of higher education (HE) on nursing education.
Methodology
Design
The study design followed Charmaz's [13] contemporary interpretation of grounded theory as described by Glaser and Strauss. [14] She proposed an early sorting and synthesis of data through qualitative coding and building levels of abstraction from the studied data.
Theoretical sampling -'the process of data collection for generating theory whereby the analyst jointly collects, codes and analyses data and decides what data to collect next and where to find them, in order to develop the theory as it emerges' [14] -guided follow-up interviews and further data searches. A sample size of 10 nurses was set at the beginning of the study. Documents were collected from the MHE, MEVT and MOH.
Analysis
Document analysis began once the documents were collected. Applying Charmaz's framework, [13] meanings, beneficiaries, context and patterns within the documents were isolated. Document analysis enables exploration of historical foundations of contemporary ideas, practices and identities that subtly affect the present. [15] Texts were examined for context, target, and direct and implied meanings, as Charmaz [13] recommends. This analysis also generated new questions that were pursued in the interviews.
Interviews lasted between 40 and 60 minutes, and were audio-recorded and guided by an interview schedule. The research questions constituted the primary questions, while responses and emerging data generated follow-up questions. Analysis of the 10 interviews generated new issues that required 3 secondary interviews, including 1 new participant who met the study criteria. This participant had mastery of the new issues. Provision for actions such as this was included in the ethical clearance. Interview transcripts and scanned copies of documents were then imported into NVivo 10 software (QSR International, Australia) for qualitative analysis. Data were coded beginning with line-by-line coding. Focused codes were then created by merging codes capturing similar data. Constant comparison of data, codes and focused codes led to the identification of subcategories illustrating the links between focused codes. With the growing complexity of emerging data, explanatory links between subcategories were identified, leading to categories. Memos were also written to question and expand on emerging data.
Ethical approval
Ethical approval was obtained from the University of Essex, UK, and the University of Buea, Cameroon (ref. no. 2015/346/UB/FHS/IRB), where the study was carried out. Participants gave written consent, and interviews were conducted at their convenience and with their rights respected.
Results
Two categories were constructed: 'advancement' and 'resistance' (Table 1). Advancement captured three subcategories showing liberalisation positively affecting nursing education, as perceived by study participants. Resistance captured the complex links between five subcategories showing resistance to liberalisation-associated changes.
Advancement
This category describes the nature and positive effects of liberalisation, and is composed of three subcategories. Liberalisation changed perceptions about education among participants: 'As scientific profession is coming on … as different fields of specialties are coming up for the wellbeing of the patient, people should be given the opportunity to excel in whatever domain they want and not to have limiting factors. ' (interview 12: quote2)
Nature of liberalisation
Contemporary educational systems should thus be responsive to individual needs and scientific progress, and give professionals the opportunity to excel. The policy also increased educational opportunities: 'Formerly the nurse could not go beyond the so called CESSI [Centre for Higher Nursing Studies] advanced nursing diploma … when things were liberalised, it seemed as if many people understood that no profession should be held ransom. ' (Int12:1) Education opportunities beyond the diploma felt like professional liberation to some nurses.
Increased access to education
Liberalisation introduced nurse education to the university. Nurses with MOH diplomas with 5 years' practice experience were also admitted to study for the 4-year Bachelor's degree: 'When they announced the entry into the BSc section for UB [University of Buea] in 1997, they considered the new entry and the old or experienced nurses … professionals who were ready and had more than 5 years' experience were opportuned to get in … I got in and so succeeded to do my BSc nursing. ' (Int10:1) More universities now offer the 4-year Bachelor's degree programme: 'There are other universities that have also come up both public and private that are also training at the Bachelor's level. We can take the Christian university … the Catholic University in Yaoundé … the University of Bamenda…just to name a few … that are actually delivering a Bachelor of Science programme in nursing. Straight 4-year programmes. ' (Int7:1) The MHE, in addition to degree programmes, also launched the higher national diploma (HND) programme: 'So around 2003 or so … launched its HND programme to train nurses again at the diploma level, but this time using an HE model not a hospital-based type of model … giving those nurses the opportunity to advance in the HE system becoming Bachelor of nursing, masters … etc.' (Int7:2) The HND model was not hospital-based, but designed to allow advancement to undergraduate and postgraduate degree studies.
The expansion brought about increased recognition of the Bachelor's degree within the MOH: 'I think things have ameliorated themselves, once you want to go to school now, you ask for authorisation, you are given the authorisation. When you come back and give your report and hand in your papers you will be placed. ' (Int10:2) After initial resistance to nursing degrees, the MOH created a process to recognise nurses' HE qualifications. Expansion equally created a wide variety of nursing programmes: 'There is a lot of multiplicity in our nursing as we move to HE, which is a good thing anyways -the nurse was not meant to stagnate. ' (Int3:3) Some participants saw the multiplicity of nursing programmes as a growth opportunity. Some nurses wanted the MOH to stop running nursing programmes. These programmes are still diploma-based, while MHE programmes are degree-based: 'MOH who is the employer feels that they should follow its ideology; unfortunately, times have changed. We cannot be following your ideology when you are ending at the diploma level and some of us are ending at the Bachelor's level. ' (Int7:3) Only the MHE can issue degrees. Since MOH programmes lack a clear diploma-Bachelor bridging pathway, some nurses perceived them as outdated.
Positive reception
Constant comparison revealed data showing that the expansion of nursing education was welcomed. There was the perception of rediscovery: 'We now realise there was something we were missing. Now they are going for it, to expand the scope of these disciplines. ' (Int1:1) Nurses saw an opportunity to grow their capacity and expand their scope of practice. This was facilitated by private higher education institutions (PHEIs) providing diploma-Bachelor bridging courses: 'Without any written policy some private schools now, I must say PHEIs, are giving those nurses … the opportunity to convert their SRN diplomas to a degree. ' (Int7:1b) The bridging courses were designed only for HND holders, but PHEIs innovatively designed special diploma-Bachelor bridging courses for MOH diploma holders. Though both are 3-year diplomas, these bridging courses take 1 and 2 years, respectively. Some nurses took credit for the ongoing expansion: 'We fought for this, fought for it seriously … so we are very happy with what is happening today. ' (Int6:1) Though the ongoing changes resulted from a general government policy, some nurses believe their lobbying played a role.
Resistance
This category revealed five subcategories showing resistance to liberalisation-associated change.
Control of nurse education
Some nurses think that only the MOH should control nursing education: 'These are health personnel, in some settings there can be no health personnel who will not train within the … MOH context … but now they just diffuse the whole thing … What type of certificates does ministry of professional training give them?' (Int13:1) The training programmes under other ministries were looked on with suspicion. This suspicion was strengthened by the perception that the other ministries had weaker accreditation procedures, and so non-health personnel went there for accreditation: 'When we were in the MOH, there were many applications from people who wanted to open schools, economic operators, but … they were not qualified so they now went them into vocational education … and opened schools, got their authorisation from there. ' (Int1:1) For other nurses, this argument was more about control than quality: 'There is no rationale, there is no rationale! Again, it has to do with what we call protecting your turf. ' (Int7:1)
Policy controversies
The ministries operated parallel education models: 'MOH continues with its trajectory of training nurses in its hospitals-based … curriculum while the MHE is using the LMD or the Bachelors-Masters-PhD model to train nurses along the university curriculum. So the problem is: what will be the fate of the nurses who are continuing to be trained by MOH?' (Int7:3) The two parallel models, the MOH hospital-based model and the HE Bachelors-Masters-PhD model (allowing a smooth transition from Bachelor's through doctoral studies), were mutually exclusive. So, while MHE diploma holders could easily progress to postgraduate studies, MOH diploma holders could not. The diploma-Bachelor bridging pathway remained a complex system within HE: 'Candidates with the HND … after 1-year conversion … get their bachelor's degree. But … the state universities are not doing it … One would think that it would have been automatic now for HND students to just enroll in the university system … but the university is not doing it. ' (Int7:4) PHEIs offer a 1-year HND-to-Bachelor bridging course, through their affiliation with state universities. However, these courses are not directly obtainable from the universities. Another controversy was the curricular diversity: 'There is no control; everybody has his own independent training programme curriculum … meanwhile, everybody should be on the same footing. ' (Int8:2) The perceived curricular diversity among PHEIs, in contrast to the MOH's national curriculum, was interpreted as evidence of disorganisation.
Influence of non-nurses
The data revealed the strong influence of physicians and non-nurses on nurse education. Non-nurses were perceived to be actively involved in shaping education policy: 'The training of nurses in this country is in the hands of people who are not nurses, and they don't understand how nurse training should be like.' (Int9:1)
Many proprietors of PHEIs were non-nurses, and this gave them influence over nursing programmes within their institutions. These proprietors, some of whom were physicians, were seen to prioritise profits over professional standards: 'It's the quest for economic power by the doctor. They know that to get rich quick, open a nursing school of course … therefore the financial aspect of it … overrides nursing care practice. ' (Int3:3)
Personal prejudices
Educational expansion created job insecurities and encouraged resistance. Some nurses were afraid of losing their positions to more qualified graduates: 'They somehow feel threatened that if they allow training to move into the universities … young people will come out with higher qualification and that may jeopardise their jobs and their position. ' (Int9:1) Data also showed professional subjectivity: 'I think that people are protecting their diplomas, they are not protecting the profession. They are protecting the kind of training they got: because I am a state registered nurse, I have to make sure that state registered nursing stays on the market; because I did HND, let me protect HND. No!' (Int9:3) Some nurses were perceived to align with their preferred educational model, instead of seeking the best for the profession.
Status conflicts
Conflicts arose over professional membership. Some professional associations accepted only MOH diplomas: 'The prerequisite to register in the association is a diploma in your profession of 3 years' consecutive training, academic training. ' (Int8:1) 'You're A-levels and you go and start doing a degree course when you have not yet been a professional. There is a jump … it shows in the field. And that is why we are not registering them. ' (Int8:2) BSc graduates are registered only if they completed an MOH 3-year diploma programme prior to their BSc studies.
When it came to the recruitment of nurses, the MOH was perceived to recruit HE graduates only reluctantly: 'They are not willing to let go at the basic training level … But you are hiring their products with mixed feelings, and there are many out there who have not been hired because of the same reason. ' (Int7:4) The MOH thus preferred its own graduates, and only recruited graduates from other ministries reluctantly.
Another source of conflict was the 'nurse' title. Some nurses thought it was being abused: 'You see that you will train as an auxiliary for 6 months or 9 months - I am a …' A new definition will lead to restructuring of nursing curricula to achieve the envisaged status/competence.
Discussion
As Fig. 1 indicates, the government's liberalisation policy was unprecedented and unanticipated. The fallout from the policy pulled nursing in different directions.
Resistance and advancement
Liberalisation radically changed the educational context, giving rise to PHEIs, and non-nurses became proprietors of nursing schools. These players were perceived to be more profit-oriented than nursing values-oriented. The accompanying curricular diversity upended the MOH national curriculum model, creating the perception of PHEIs running independent programmes. With the MOH's loss of monopoly and the lack of co-ordination between the ministries, nurse education policy was not harmonised. This manifested in diploma upgrade, employment, and professional membership conflicts. The diploma-Bachelor's upgrade conflicts have increased job security anxieties, as some MOH diploma nurses fear competition from incoming degree holders. This has caused some nurses to resist liberalisation-generated changes.
Other nurses have embraced the ongoing changes and are excited about the opportunity to obtain degrees. The diversity in programmes/schools has increased access compared with the time when the MOH trained only for its own needs. PHEIs have created diploma-to-BSc upgrade models for MOH diploma holders. These pathways do not exist in state universities.
The interaction of these forces bears similarities to Lewin's [16] theory of planned change. [12] The change theory is characterised by unfreezing, change and refreezing. [17] According to Maboh, [12] the current context and the changes taking place mirror the 'unfreezing' and 'change' phases. However, the key difference is that the ongoing change is unplanned and unco-ordinated. Resistance within nursing makes it difficult for 'refreezing' to be achieved. Comparing liberalisation to Traynor and Rafferty's [18] 'context, convergence and contingency' argument, the context is right for change, while convergence and contingency have been achieved for only one-half of the nursing profession. Thus, change cannot be maximised.
Liberating paralysis and practice implications
Liberating paralysis describes the current context, in which unco-ordinated change is simultaneously advancing nurse education and generating resistance that pulls it backward. This context has resulted from an unprecedented change in overall government policy, with unanticipated ripple effects on the profession. These effects ushered in much-needed changes for this time and context. However, the change is so disruptive that it has generated significant resistance from some nurses, creating a whirlwind scenario that fails to fully advance nursing education. The absence of a strong national grouping makes it impossible for nursing to take control of the current context. Therefore, the profession must organise itself and develop strategies to influence government policy, so that it can maximise situations where government policy provides opportunities for growth. Without this, enabling opportunities will always result in liberating paralysis.
Conclusion
Liberalisation opened HE to the private sector in Cameroon. Divided, the nursing profession simultaneously embraced the expansion of its educational system into HE and resisted the changes. The interaction of these opposing forces, without co-ordination from organised nursing, has resulted in a state of liberating paralysis. Further research should explore strategies that prepare professions to anticipate and maximise government policy changes.
Declaration. This study was conducted as part of a PhD degree in Nursing Studies at the University of Essex, UK. | 2020-10-28T19:21:01.144Z | 2020-10-16T00:00:00.000 | {
"year": 2020,
"sha1": "d9237e5233301d393cf5fe9cb22f3bd9b5a4d3cc",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.7196/ajhpe.2020.v12i3.1363",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "bdce829e7a152b265622743d1e25c8b1d427cb21",
"s2fieldsofstudy": [
"Education",
"Political Science"
],
"extfieldsofstudy": [
"Political Science"
]
} |
263649202 | pes2o/s2orc | v3-fos-license | Post Disaster Damage Assessment Using Ultra-High-Resolution Aerial Imagery with Semi-Supervised Transformers
Preliminary damage assessments (PDA) conducted in the aftermath of a disaster are a key first step in ensuring a resilient recovery. Conventional door-to-door inspection practices are time-consuming and may delay governmental resource allocation. A number of research efforts have proposed frameworks to automate PDA, typically relying on data sources from satellites, unmanned aerial vehicles, or ground vehicles, together with data processing using deep convolutional neural networks. However, before such frameworks can be adopted in practice, the accuracy and fidelity of predictions of damage level at the scale of an entire building must be comparable to human assessments. Towards this goal, we propose a PDA framework leveraging novel ultra-high-resolution aerial (UHRA) images combined with state-of-the-art transformer models to make multi-class damage predictions of entire buildings. We demonstrate that semi-supervised transformer models trained with vast amounts of unlabeled data are able to surpass the accuracy and generalization capabilities of state-of-the-art PDA frameworks. In our series of experiments, we aim to assess the impact of incorporating unlabeled data, as well as the use of different data sources and model architectures. By integrating UHRA images and semi-supervised transformer models, our results suggest that the framework can overcome the significant limitations of satellite imagery and traditional CNN models, leading to more accurate and efficient damage assessments.
Introduction
Preliminary damage assessments (PDA) evaluate the extent of damage caused by disasters to buildings and are the first step in the post-disaster recovery process [1,2]. These damage assessments are necessary after disasters to ensure the safety of buildings and to allocate government resources to homeowners. The PDA process begins with an initial damage assessment (IDA) [3], in which damage information is collected and verified by state or tribal authorities via door-to-door surveys over the affected regions. Individual assessments (IA) [4] are conducted as part of the IDA for each disaster-affected home. IAs are conducted door-to-door as disaster victims apply for aid, but they are inefficient and pose safety risks. For instance, after Hurricane Ian, victims had to wait nearly five months after the storm [5,6] to have their IAs completed. During inspections, compromised components and hazardous debris hamper the ability of inspectors to reach all areas safely and enter damaged properties [7][8][9][10][11]. A large disaster could potentially result in hundreds of thousands of IA applications, overwhelming the available workforce and rendering the number of inspectors and support staff inadequate to meet the demands of comprehensive evaluations [6,8]. There is thus a need for alternative methods that can help accelerate the PDA process.
A time-consuming step of the PDA process is the identification of the damage state of individual buildings. Researchers have proposed solutions to enable faster and safer post-disaster damage state assessments [8][9][10][11]. These solutions typically rely on one or more sources of data that can be obtained in an automated or efficient manner, combined with data processing methods that exploit computer vision and deep learning to extract actionable information like the damage state of structures [12][13][14][15].
Different data sources have been studied for their suitability for post-disaster damage assessments, including images captured via satellites (optical and SAR) [16][17][18][19][20][21][22][23][24], unmanned aerial vehicles (UAVs) [25][26][27][28][29], and ground-level cameras [30][31][32]. Satellite images are the most commonly utilized data source due to their wide availability for larger regions. For example, the xBD dataset offers an extensive compilation of pre- and post-event satellite imagery and building polygons annotated with four damage levels [21,22]. Among the numerous recent studies conducted on the xBD dataset, Bai et al. trained a model on satellite data and tested its generalizability on the 2011 Tohoku earthquake. Two major concerns with satellite imagery are the reduced visibility during overcast conditions and the limited available resolution, both of which limit the accuracy of damage identification [33]. Other researchers have proposed methods that utilize both pre- and post-disaster satellite imagery for building damage assessment. However, there are instances where pre-disaster images may not always be available [34][35][36][37]. Additionally, synthetic aperture radar (SAR) images offer an alternative to optical satellite images, overcoming overcast limitations and enhancing satellite-based image analysis for various applications [17][18][19][20][21][38][39]. SAR images still pose a challenge for reliable individual building assessment due to their very low resolution. UAV data, on the other hand, provide higher-resolution images than satellite data, yielding higher-quality assessments. Gerke et al. [40] and others [26,27,29] have utilized the EMS-98 classification system, which categorizes residential buildings into five damage classes. Through their study [40], the authors investigated the varied and uncertain nature of observed damage patterns in different damage classes [11]. Additionally, several studies have demonstrated the use of UAVs in automating post-earthquake assessments [31,32]. UAV data offer high-resolution images but have limitations such as flight time, restricted coverage area, and weather dependency, impacting their utility for post-disaster assessments. Regarding the third data type, researchers have made use of ground-level camera images for post-disaster assessments [30,41,42]. These images offer a complementary close-up perspective to satellite imagery for detailed assessment but are difficult to scale over larger regions and raise accessibility and safety concerns. All these studies suggest that each data type comes with its own set of limitations, further emphasizing the need for careful consideration when utilizing different sources to enable more efficient PDA.
In addition to visual data, researchers have utilized other dynamic data sources such as wind speed, ground motion data such as PGA (peak ground acceleration), and response spectra [43]. A paper by Lombardo et al. presents a Monte Carlo simulation approach to quantify the misclassification of tornado characteristics by establishing a relationship between the degree of damage and wind speed [44]. Yuan et al. introduced a 1D CNN-based approach for damage assessment [45,46]. Moreover, ground motion data provide an advantage in assessing underground structural damage, as discussed in studies [47][48][49].
In addition to the data source, the choice of post-processing methods to extract actionable information plays a crucial role in determining the accuracy of assessments. Researchers have explored various heuristic and deep learning methods for tasks such as damage classification and change detection. Most of the analysis with satellite images focuses on bitemporal images, which consist of pre- and post-disaster images. By utilizing bitemporal satellite images, it becomes possible to visually observe differences, since disasters often lead to significant changes in the imagery. Several researchers have focused on detecting these changes by employing pixel-to-pixel comparison methods [24,50] as well as deep learning techniques [26,27,33,35,51]. In a case study of Hurricane Michael, Berezina et al. [10] utilized a U-Net model for segmentation and a ResNet CNN architecture for classification on the segmented images. The results demonstrated the clear superiority of deep neural network architectures like CNNs over the support vector machine classifier for change detection with satellite images. Similarly, Hong et al. [9] presented a novel network called EBDC-Net to solve the finer classification problem of damaged buildings after earthquakes. Many papers focusing on change detection algorithms are restricted to a limited number of damage classes, typically only two (a binary classification problem), limiting the usable insight about the damage state of a building [9,30,34,52,53]. In a recent study by Khajwal et al. [30], a multi-class classification study using a dataset of around 500 post-disaster building images revealed an initial accuracy of approximately 55% when utilizing a single aerial image. Additionally, by incorporating multi-view images into their analysis, the authors achieved an additional 10% increase in accuracy.
While these advancements represent significant progress in the development of a dependable damage assessment tool, they still fall short of human-level performance of 70% [54] for satellite images and thus leave room for improvement. To advance the development of an automated PDA framework, it is crucial to thoroughly investigate novel data sources and methodologies in an integrated manner. There is an inherent tradeoff between using satellite imagery and images from UAVs. Satellite images lack the necessary level of detail required for accurate model predictions. UAV images, on the other hand, are difficult to acquire over large areas due to limited speeds, flight time, privacy concerns, and range. With regard to the computer vision methodologies utilized, recent advances that leverage unlabeled data, typically available in quantities orders of magnitude larger than labeled images, have received limited attention [55][56][57]. Additionally, existing research has predominantly employed convolutional neural network (CNN) models, while recent findings for other applications suggest that transformers may offer superior performance [58][59][60] and thus warrant investigation of their applicability for PDA.
We propose a new framework (Figure 1) for PDA, leveraging novel ultra-high-resolution aerial (UHRA) imagery together with semi-supervised learning techniques to utilize vast amounts of unlabeled data and enhance the consistency and accuracy of multi-class damage classification to surpass human levels. The novel contribution of our research comes from adopting (i) UHRA images, (ii) unlabeled data in the training pipeline, and (iii) vision-transformer models. We study the effect of the data type and compare our proposed processing method to state-of-the-art approaches to demonstrate its superior performance. Section 2 outlines our data collection and preparation process, including UHRA and satellite image data, along with introducing the supervised vision transformer (ViT) and semi-supervised Semi-ViT models as part of our deep learning architectures. Section 3 comprises three key experiments: semi-supervised learning with unlabeled data, comparison of CNN and transformer model architectures, and comparison of satellite and UHRA image data types. In Section 4, we delve into the results of each experiment, analyzing their implications and significance. Finally, we conclude the paper in Section 5, summarizing our findings and limitations.
Proposed Methods
Our framework for PDA is illustrated in Figure 1. The process consists of four steps. Firstly, raw UHRA image data is collected using an aircraft equipped with an ultra-high-resolution image sensor (e.g., UltraCam from Vexcel Imaging), typically within 2-3 days after a hurricane strikes. For instance, after Hurricane Michael, the data for an 85,000 km² area across four states was published online in just over three days [61]. Then, the collected data is processed to extract individual building crops in an automated fashion. A pre-trained transformer model is then fine-tuned on the unlabeled building crops in an unsupervised manner to learn the distribution of the newly acquired data. Finally, the fine-tuned network is used to predict the damage class.
Our research methodology for developing the proposed framework examined the different data sources and deep learning architectures described in this section.
Data Sources, Collection, and Preparation
We compare the efficacy of images from two data sources: satellite images from Google [62,63] and UHRA images from Vexcel Imaging [64].
In this study, we use a 5-class scale for building damage, numbered 0 to 4, representing the severity of the damage. The ground truth is obtained from field observations by Kijewski-Correa et al. [65], made available through NHERI DesignSafe [66]. A 5-class scale was chosen because it aligns with the visually identifiable classes for FEMA individual assessments (IA) [4] and the HAZUS resistance model [67]. The criteria used to define the damage classes have been discussed in [4,67]. The correspondence between the classes adopted in this study is provided in Table 1. Example UHRA and satellite images for each damage class in the proposed framework are provided together in Figure 2. In the upcoming two sections, we will provide a detailed explanation of the extraction process for both types of images collected from the different data sources.
UHRA Image Data
The UHRA images used in this study were acquired from Vexcel Imaging [64]. The images are captured via a fleet of fixed-wing aircraft equipped with the UltraCam, a high-resolution camera system, capturing up to 1.7 cm ground sample distance (GSD) and overcoming the limitation of SAR and satellite images (usually 30-50 cm GSD) [69]. Unlike aerial images captured using drones, UHRA images can be quickly acquired by aircraft over a large area in a short span of time [61]. Furthermore, UHRA images mitigate the constraints associated with ground images, as they do not pose accessibility issues or safety concerns.
Our dataset was built using multiple online resources, including DesignSafe [66], the Google Maps Geocoding API [70], and Vexcel Imaging [64]. For the labeled dataset, NHERI's DesignSafe website was utilized to obtain building coordinates and the manually inspected damage class by Kijewski-Correa et al. [65]. The Google Maps Geocoding API was then employed to get the building footprint as a polygon. Finally, the Vexcel Imaging API was used to extract the corresponding image and associate it with its respective damage class. These images are extracted using the time of the event and a building polygon as input. Following this procedure, 1072 labeled images and 16,800 unlabeled images were extracted. Figure 2 presents a sample of each class from the extracted dataset.
Satellite Image Data
The satellite image dataset used in this study was adopted from Khajwal et al. [30] and made publicly available on DesignSafe [62]. The dataset consists of 500 labeled images (examples in Figure 2) extracted from Google satellite imagery [63]. Several other satellite datasets are available as open source, as discussed in the introduction, such as the xBD dataset [71]. However, we decided not to utilize these data because they classify damage states into four different classes (no damage, minor damage, major damage, and destroyed), which deviates from the proposed 5-class scale.
Deep Learning Architecture
We evaluate the performance of transformers against convolutional neural networks (CNNs), which are commonly employed for classification tasks [9,10,30,53,72]. Transformers, known for their attention mechanisms, are being increasingly adopted due to their superior performance in various deep learning tasks [60,73]. We trained two transformer models: a supervised model and a semi-supervised model. All models were trained on an Nvidia RTX 3090 with 24 GB of memory. The network architectures for these models are now described.
Supervised: Vision Transformer (ViT)
The vision transformer, also known as ViT, utilizes a transformer-based architecture to classify images [74]. It operates by dividing an image into fixed-size non-overlapping patches, followed by a linear projection of each patch. Position embeddings are then added to each patch, and the resultant sequence of vectors is passed through a standard transformer encoder [74]. The transformer encoder includes a multi-head self-attention layer and a multi-layer perceptron (MLP) layer with a Gaussian error linear unit activation. Layer normalization is applied to each of these layers. Figure 3 visually illustrates the ViT model and its components. The hyperparameters for training are summarized in Table 2; they were adopted directly from [75]. In our study, we used a pre-trained model trained on ImageNet [76] to speed up training, improve performance, and leverage learned representations.
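To make the patchify-project-encode pipeline concrete, below is a minimal PyTorch sketch of a ViT-style classifier for the 5-class damage problem. The class name and all layer sizes are illustrative assumptions, not the exact configuration used in this study or in [74,75].

```python
import torch
import torch.nn as nn

class MiniViT(nn.Module):
    """Minimal ViT: patchify -> linear projection -> + position embeddings
    -> transformer encoder -> classify from the CLS token."""
    def __init__(self, img_size=224, patch=16, dim=768, depth=12,
                 heads=12, n_classes=5):
        super().__init__()
        n_patches = (img_size // patch) ** 2
        # A strided conv is equivalent to splitting the image into
        # non-overlapping patches and linearly projecting each one.
        self.proj = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, activation="gelu",
            batch_first=True, norm_first=True)   # GELU MLP + pre-LayerNorm
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, n_classes)    # 5 damage classes (0-4)

    def forward(self, x):                              # x: (B, 3, H, W)
        z = self.proj(x).flatten(2).transpose(1, 2)    # (B, n_patches, dim)
        cls = self.cls_token.expand(z.size(0), -1, -1)
        z = torch.cat([cls, z], dim=1) + self.pos_embed
        z = self.encoder(z)
        return self.head(z[:, 0])                      # logits from CLS token
```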
Semi-Supervised: Semi-ViT
The semi-supervised vision transformer (Semi-ViT) [75] is also a transformer-based model, as the name suggests, but utilizes unlabeled data along with labeled data. The semi-supervised learning pipeline comprises three stages: pre-training (transfer learning [35,77]), followed by supervised fine-tuning, and eventually semi-supervised fine-tuning.
In our study, we used the same pretrained model and supervised training procedure as described in the previous section. During the semi-supervised fine-tuning phase, the exponential moving average (EMA)-Teacher framework is adopted. This choice was driven by recent results from Cai et al. [75] suggesting that the EMA-Teacher framework (Figure 4) provides better stability and achieves higher accuracy for semi-supervised vision transformers on classification tasks compared to the more commonly used FixMatch method [78]. The EMA-Teacher framework consists of two parallel networks, the student network and the teacher network, both of which are initialized from the fully supervised ViT model trained on labeled data.
As illustrated in Figure 4, the EMA-Teacher framework uses both labeled and unlabeled samples during training to update the weights of the student and teacher networks. Unlabeled samples undergo two types of augmentations: weak augmentations that pass through the teacher network, and strong augmentations that pass through the student network. The weak augmentations include random resized crop, random horizontal flip, and color jitter, and the strong augmentations are random resized crop, random horizontal flip, random augment [79], and random erasing [80]. When the confidence of the prediction of a weakly augmented image passed through the teacher network is above a threshold, a pseudo-label is assigned to that image. The weights of the student network are then updated using combined batches of labeled data, yielding a cross-entropy loss (L_s), and unlabeled data with pseudo-labels, yielding a cross-entropy loss (L_u). The overall loss is computed as L = L_s + µL_u, where µ is the trade-off weight. The teacher network weights are then updated using the EMA method [75].
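For concreteness, a minimal PyTorch sketch of one EMA-Teacher update implementing L = L_s + µL_u is given below. The threshold tau, trade-off weight mu, EMA decay, and the weak_aug/strong_aug callables are illustrative assumptions rather than the exact settings of [75].

```python
import torch
import torch.nn.functional as F

def ema_teacher_step(student, teacher, opt, x_lab, y_lab, x_unlab,
                     weak_aug, strong_aug, tau=0.7, mu=1.0, decay=0.999):
    """One semi-supervised update: confident pseudo-labels from the teacher
    on weak views supervise the student on strong views."""
    # Supervised cross-entropy on the labeled batch (L_s).
    loss_s = F.cross_entropy(student(x_lab), y_lab)

    # Teacher pseudo-labels on weakly augmented unlabeled images.
    with torch.no_grad():
        probs = torch.softmax(teacher(weak_aug(x_unlab)), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = (conf >= tau).float()      # keep only confident predictions

    # Unsupervised cross-entropy on strong views, masked by confidence (L_u).
    logits_u = student(strong_aug(x_unlab))
    loss_u = (F.cross_entropy(logits_u, pseudo, reduction="none") * mask).mean()

    loss = loss_s + mu * loss_u           # overall loss L = L_s + mu * L_u
    opt.zero_grad()
    loss.backward()
    opt.step()

    # Teacher weights track the student via an exponential moving average.
    with torch.no_grad():
        for p_t, p_s in zip(teacher.parameters(), student.parameters()):
            p_t.mul_(decay).add_(p_s, alpha=1.0 - decay)
    return loss.item()
```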
Experiments
Understanding the role and potential advantage of using unlabeled data, selecting an optimal model architecture, and the effect of different data types are crucial considerations in the development of an automatic PDA framework. In this study, we aim to investigate three key research questions: (i) we explore the impact of incorporating unlabeled data on model prediction accuracy, with the hypothesis that augmenting labeled data with unlabeled data will improve the performance of our models; (ii) we compare the effectiveness of CNN and transformer model architectures, aiming to identify the architecture that yields superior predictive capabilities for predicting damage class; and (iii) we conduct a comparative analysis of satellite and UHRA image data types to contrast their feature extraction and generalization capabilities. In each of our models, we utilize 85% of the data for training and 15% of the images for the testing set. All individual experiments performed are summarized in Table 3. The results of this study will contribute to the development of more accurate and robust models in the field of post-disaster damage assessment. In the following sub-sections, we outline the experiments designed to test our hypotheses.
Semi-Supervised Learning with Unlabeled Data
The lack of availability of labeled data presents challenges in terms of annotation, while building an unlabeled dataset is far more feasible and convenient. We aim to assess the performance of a model when labeled data is limited and investigate the extent to which incorporating unlabeled data can enhance the predictive capabilities of a model. Towards this objective, we designed two experimental cases. In the first case (UHR-Semi-100), we maintained the labeled data at 100% of the training data and utilized 100% of the unlabeled data. In the second case (UHR-Semi-25), we reduced the labeled data to 25% of the training data while keeping the unlabeled data at 100% (Table 3). These cases are then compared with their corresponding supervised baselines (UHR-ViT-100 and UHR-ViT-25, respectively). By implementing these cases, we aimed to simulate real-world scenarios where the limitation in data acquisition typically affects the availability of labeled data rather than unlabeled data.
Comparison of CNN and Transformer Model Architectures
We performed a comparative analysis between CNN and transformer models to determine the more effective architecture for our task. To ensure a fair comparison, we kept the training and testing data consistent for both models. For this experiment, we trained a vision transformer (ViT) model (Sat-ViT-100) and compared the performance to results from a CNN model reported in Khajwal et al. [30] (Sat-CNN-100), as listed in Table 3.
Comparison of Satellite and UHRA Image Data Types
The objective of this experiment is to gain a quantitative and qualitative comparison between models trained on both data sources (satellite and UHRA images) and their adequacy for damage classification. Towards this objective, we trained two supervised ViT models on images from each data source. To ensure an unbiased experiment, we selected the buildings that were present in both datasets. In total, there were 267 buildings common to both the satellite and UHRA datasets. These models were then tested on the test data from the same and the other source, as listed in Table 4. The naming convention for the models is as follows: {ViT}-{Training Data}-{Testing Data}. For example, 'ViT-UHR-Sat' represents a ViT model that was trained on UHRA images and tested on satellite images.
Classification Metrics
To quantitatively assess the experiment results, we employed several standard classification metrics, including accuracy, precision, recall, F1 score, and the average area under the ROC (receiver operating characteristic) curve, referred to as AUC-ROC in this study. Accuracy reflects the percentage of correct predictions made by the model, providing an overall measure of its correctness. Precision measures the model's ability to correctly identify positive instances, offering insights into how well it avoids false positives. Recall evaluates the model's ability to detect all positive cases, indicating its sensitivity to identifying actual positive instances. The F1 score, which combines precision and recall, serves as a balanced metric for accuracy, particularly in datasets with imbalanced class distribution, where certain classes may be underrepresented. Lastly, the average AUC-ROC assesses the model's discriminative capabilities between classes. The ROC curve plots the true positive rate against the false positive rate for different classification thresholds. A higher AUC-ROC value indicates better performance in distinguishing between positive and negative classes, enhancing the model's predictive capabilities. Together, these metrics provide a comprehensive and nuanced evaluation of the model's performance in accurately assessing building damage classes, guiding our analysis and discussion in the subsequent sections. Refer to Table 5 for the summarized evaluation metrics.
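As an illustration, these metrics can be computed with scikit-learn roughly as follows; the helper name and the macro averaging choice are our own assumptions for the 5-class setting.

```python
from sklearn.metrics import (accuracy_score,
                             precision_recall_fscore_support,
                             roc_auc_score)

def damage_metrics(y_true, y_pred, y_score):
    """y_true/y_pred: class labels 0-4; y_score: (N, 5) class probabilities."""
    acc = accuracy_score(y_true, y_pred)
    prec, rec, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro", zero_division=0)
    # One-vs-rest AUC-ROC averaged over the five damage classes.
    auc = roc_auc_score(y_true, y_score, multi_class="ovr", average="macro")
    return {"accuracy": acc, "precision": prec, "recall": rec,
            "f1": f1, "auc_roc": auc}
```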
Results and Discussion
This section presents the results of the three experiments described in the previous section. These findings, summarized in Table 6, offer insights into improving model performance and data selection for an automatic PDA framework.
Semi-Supervised Learning with Unlabeled Data
In this section, we explore the utility of unlabeled data by conducting four experiments, denoted as UHR-ViT-100, UHR-Semi-100, UHR-ViT-25, and UHR-Semi-25 (see Table 3). The results of these experiments are depicted as two curves in Figure 5. The first part of each curve represents supervised training, and the second part represents semi-supervised training. For the supervised models, we conducted the training for 250 epochs, and for the semi-supervised models, we extended the training by an additional 50 epochs until the curves converged. The maximum accuracies achieved for UHR-ViT-100 and UHR-ViT-25 were 81% and 71%, respectively, as also indicated in Table 6. Subsequently, we employed the semi-supervised approach to incorporate the unlabeled data into the training process. This led to a notable increase in accuracy of 7% and 10% over UHR-ViT-100 and UHR-ViT-25, respectively. Another notable observation was that with just 25% labeled images, the semi-supervised model was able to reach the accuracy of the supervised model with 100% labeled images. These results clearly demonstrate the effectiveness of the semi-supervised training method in enhancing the model's performance by leveraging the additional unlabeled data. The evaluation maps (Figure 6) depict the true values, predicted class, and absolute difference between the real and predicted damage state for each building. Based on the evaluation maps, we observed that most buildings are accurately classified, with around 9% of instances showing misclassifications of ±1 class and even fewer falling into the ±2 class range (3%). Notably, there are no predictions with a difference of 3 classes, indicating that the model rarely exhibits significant errors in damage state assessment. From a practical standpoint, plotting the maps of predicted classes offers valuable insights and aids in identifying priority regions that are most affected after a disaster.
Comparison of CNN and Transformer Model Architectures
In this section, we present a comparison between a CNN and a transformer model. The primary aim is to determine which model architecture is more effective for building damage classification. The transformer-based model displayed a remarkable 18% higher accuracy compared to the CNN-based model (see Sat-CNN-100 and Sat-ViT-100 in Table 6). This improvement was consistent across other performance metrics as well, including precision, recall, and F1 score. In experiment Sat-CNN-100, the model achieved an accuracy of 55% and an average F1 score of 54% [30]. In contrast, Sat-ViT-100 yielded significantly improved results with an accuracy of 73% and an average F1 score of 72%. An essential observation here is that the model surpasses human-level accuracy on satellite images, achieving a 3% improvement over the reported 70% human accuracy [54,81]. This result establishes the model's reliability and suitability for practical applications.
To gain further insights into the predictive capabilities of the models, we compared the ROC curves shown in Figure 7. The ROC curve analysis showed a higher AUC-ROC for the transformer model, indicating its superior ability to discriminate between classes effectively for all the classes. Another observation in both results is the lower AUC-ROC value for class 3, indicating the maximum uncertainty in prediction. This uncertainty is expected for satellite images, and it also aligns with observations from a study on human assessments [81]. Lastly, comparing damage class 0 in both cases, the CNN model exhibits poor predictive capabilities, performing close to a random classifier, as evidenced by its ROC falling below 0.5. Conversely, the transformer model demonstrates higher discriminative capability, with an AUC-ROC of 0.93.
The overall results indicate that the transformer-based architecture has a better ability to learn high-level features and capture complex patterns. This might be due to the transformer's attention mechanisms, which appear to be advantageous for handling spatial features in satellite images. Spatial features refer to the specific characteristics and patterns within an image. Vision transformers perform better than CNNs in terms of extracting spatial features due to their ability to preserve the spatial information of the embedded patches and capture long-range dependencies between image regions [82,83]. In the context of damaged buildings, the key distinguishing areas are the damaged and undamaged sections.
Comparison of Satellite and UHRA Image Data Types
The following section presents the comparison between satellite and UHRA data sources. The results of all the experiments are summarized in Table 7. According to the experimental results, the model trained on UHRA images and tested on satellite images yielded an accuracy of 58%. Through our series of experiments, we can draw two conclusions that suggest UHRA images are more suitable for training the ViT model. We notice that the models trained on UHRA images demonstrate better generalization capabilities when tested on satellite images. The ViT-UHR-Sat model achieved an accuracy and F1 score of 58% and 62%, respectively. The ViT-Sat-UHR model achieved a lower accuracy of 41% and an F1 score of 39%. This indicates that the model effectively learned features from the UHRA images and was able to generalize well to the satellite data, compared to the model trained on satellite images attempting to generalize to UHRA images.
Secondly, the AUC-ROC curve (Figure 8) reveals that ViT-UHR-Sat exhibits superior discriminative capabilities in distinguishing between different classes. The average AUC-ROC scores achieved by ViT-UHR-Sat and ViT-Sat-UHR are 83% and 76%, respectively, reinforcing the higher discriminative capabilities of ViT-UHR. Moreover, ViT-UHR-Sat successfully overcomes the challenges associated with classifying damage state 3 when trained on satellite images, as discussed in the previous section (see Figures 7a and 8b). The confusion matrix in Figure 9 highlights this observation as well; the ViT-Sat-UHR model struggles to accurately predict the intermediate damage classes (DS-1, DS-2, and DS-3). Another observed issue is the misclassification of damage state 3 as damage states 2 and 1. In contrast, the ViT-UHR-Sat model demonstrates better performance comparatively.
We also study the resolution and accuracy of the class activation mappings, or CAMs, produced by networks trained on these datasets. A CAM [84] can identify specific regions in an image that a model is focusing on while making a classification decision. In this study, we use Eigen-CAM, proposed by Muhammad et al. [85]. We perform the CAM on the layers before the final activation block to avoid the zero-gradient problem in transformer models [86].
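Since Eigen-CAM requires no gradients, it reduces to projecting a layer's activations onto their first principal component. Below is a simplified NumPy re-implementation of that idea (our own sketch, not the exact code of [85]; the sign-flip heuristic is an assumption, as SVD leaves the sign ambiguous).

```python
import numpy as np

def eigen_cam(activations):
    """Eigen-CAM for one feature map of shape (C, H, W): the dominant
    spatial pattern is the first right singular vector of the centered
    channel-by-position activation matrix."""
    C, H, W = activations.shape
    A = activations.reshape(C, H * W)
    A = A - A.mean(axis=1, keepdims=True)     # center per channel
    _, _, vt = np.linalg.svd(A, full_matrices=False)
    cam = vt[0].reshape(H, W)                 # first principal spatial mode
    if -cam.min() > cam.max():                # heuristic sign correction
        cam = -cam
    cam = np.maximum(cam, 0)                  # keep positive contributions
    return cam / (cam.max() + 1e-8)           # normalize to [0, 1]
```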
Figure 10 presents the CAMs for an individual building across various experimental settings where the model is trained and tested on different combinations of data sources. The CAMs highlighted in green boxes are considered accurate, while those in red boxes are deemed less reliable. Upon examining the CAMs, it becomes apparent that the models are striving to differentiate between regions of damaged and undamaged rooftops. From the CAM analysis, two noteworthy observations can be made: (i) the CAMs for the model trained and tested on UHRA images are quite accurate and precise in detecting damaged regions (2-b, 5-b, 2-d, and 5-d), and a similar performance is observed when the model is trained and tested on satellite images (1-a, 4-a, 1-c, and 4-c); (ii) the models trained on UHRA and tested on satellite images (ViT-UHR-Sat) produce good CAMs (2-a, 5-a, 2-c, and 5-c) and effectively identify damaged regions. However, the model trained on satellite images (ViT-Sat-UHR) does not perform well (1-b, 4-b, 1-d, and 4-d) when tested on UHRA images. Conversely, the ViT-UHR-Sat model successfully distinguishes between buildings and the background, yielding accurate CAMs.
The results from the CAMs, ROC curves, and confusion matrices affirm that the model trained on UHRA images demonstrates better generalizability and discriminative capabilities among all classes. This reinforces the practical value of UHRA images in enhancing the framework's performance for accurate building damage assessment.
Limitations
This study presents novel insights into building damage assessment using satellite and UHRA data. While the proposed framework has been extensively validated for post-hurricane damage assessments, and could potentially be extended to other related scenarios as well, the following limitations are acknowledged:
1. Above-Ground Structures Only: The methodology is tailored for above-ground structures and would not be suitable for subsurface assessment.
2. Cloud Cover Impact: The flight altitude for capturing UHRA images is approximately 2 km, making clouds below this altitude a potentially significant limitation in the damage detection process.
3. Roof Damage Sensitivity: While the sensitivity to roof damage serves as a valuable indicator for the PDA, it may not be equally informative for evaluating damage caused by other disasters where roof damage is not a good indicator of overall structural health.
Conclusions
This paper addressed key challenges in building an efficient, accurate, and automatic preliminary disaster assessment (PDA) framework. The novel contributions of our research stemmed from the adoption of (i) UHRA images, (ii) unlabeled data, and (iii) vision-transformer models. We investigated the impact of leveraging unlabeled data to improve classification accuracy, compared CNN and transformer model architectures, and quantitatively assessed the usefulness of satellite and ultra-high-resolution aerial (UHRA) images. The results demonstrated that the semi-supervised model with UHRA images is able to attain a state-of-the-art 5-class accuracy of 88%, yielding a 33% improvement over the previous state-of-the-art CNN trained on satellite data. Our experiments also demonstrated the efficacy of unlabeled data in improving the accuracy of the supervised model (UHRA-ViT-100) by 7%. A comparison of baseline supervised architectures on satellite data only demonstrated the transformer's ability to learn high-level features and achieve an overall accuracy of 73% vs. 55% for the CNN model. Furthermore, incorporating UHRA images for training not only enhances the model's ability to generalize to different datasets but also improves its performance in distinguishing between classes. The results were verified by analyzing class activation maps (CAMs) to better interpret the models. The results from this study will significantly accelerate and improve post-disaster assessment and the overall recovery process. The proposed framework offers increased speed and accuracy compared to current automated approaches.
Figure 2. Samples for UHRA and satellite images with corresponding damage classes.
Figure 10. Class activation maps (CAMs) identifying good CAMs in green boxes and bad CAMs in red boxes.
Table 1. Scale mapping of damage scale.
Table 3. Summary of experiments.
Table 4. Summary of inter-data experiments.
Table 5. Summary of evaluation metrics.
Table 6. Performance report for different experiments.
Table 7. Performance report on inter-dataset testing. | 2023-10-05T15:11:18.170Z | 2023-10-01T00:00:00.000 | {
"year": 2023,
"sha1": "97fd0f902319d3daed845356e8d7dc4053f87068",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/23/19/8235/pdf?version=1696328426",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "80411cc69646a0b1425cb012274003d8238850fc",
"s2fieldsofstudy": [
"Environmental Science",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
255395261 | pes2o/s2orc | v3-fos-license | Separation, purification, and crystallization of 1,5-pentanediamine hydrochloride from fermentation broth by cation resin
1,5-Pentanediamine hydrochloride (PDAH) is an important raw material for the preparation of bio-based pentamethylene diisocyanate (PDI). PDI has shown excellent properties in the application of adhesives and thermosetting polyurethane. In this study, PDAH was recovered from 1,5-pentanediamine (PDA) fermentation broth using a cation exchange resin and purified by crystallization. D152 was selected as the most suitable resin for purifying PDAH. The effects of solution pH, initial temperature, concentration of PDA, and adsorption time were studied by the static adsorption method. The equilibrium adsorption data were well fitted to the Langmuir, Freundlich, and Temkin-Pyzhev adsorption isotherms. The adsorption free energy, enthalpy, and entropy were calculated. The experimental data were well described by the pseudo-first-order kinetics model. The dynamic experiment in the fixed bed column showed that under optimal conditions, the adsorption capacity reached 96.45 mg g−1, and the recovery proportion of the effective section reached 80.16%. In addition, crystallization of the PDAH solution obtained by elution showed that the resin eluate gave the highest crystal product quality. Thus, our research will contribute to the industrial scale-up of the separation of PDAH.
Introduction
Bio-based polyurethane (PU) coatings have been widely used in recent decades and are gradually replacing petrochemical coatings because of their advantages of low environmental impact, easy access, low cost, and good biodegradability (Noreen et al., 2016; Paraskar et al., 2021; Ma et al., 2022). Bio-based pentamethylene diisocyanate (PDI) has shown excellent properties in the application of adhesives and thermosetting polyurethane, in which approximately 71% of the carbon content is bio-based. In theory, it is possible to replace petrochemical-based hexamethylene diisocyanate (HDI) (Zeng et al., 2022). At present, the industrial preparation of PDI is mainly liquid-phase phosgenation. The raw material for PDI is 1,5-pentanediamine (PDA), which needs to be obtained from biomass such as feed corn starch through biological fermentation engineering. In the two-step reaction, PDA initially reacts with a cold phosgene solution to produce 1,5-pentanediamine hydrochloride (PDAH) and carbamate. Phosgene is then further introduced and gradually heated to produce PDI. There are inevitably some problems in this process, such as the production of tar compounds and chloride by-products, and it is difficult to control the particle size and rate in the salt-forming process, all of which result in many impurities and reduced yield (Li et al., 2020). Therefore, high-quality PDAH can reduce the occurrence of side reactions in the phosgenation reaction, and this can improve the yield and purity of the final PDI (Takahashi et al., 2019).
A variety of technologies have been used to separate and purify products from fermentation broth, including precipitation, solvent extraction, adsorption, distillation, and membrane separation (Chen et al., 2007;Jeon et al., 2014;Szczygiełda and Prochaska., 2020;Hu et al., 2022;Lee and Lee., 2022). However, there are still a large number of bacteria, proteins, residual sugars and inorganic salts in the fermentation broth. Therefore, it is very important to choose an efficient, low pollution and low-cost technology to improve the yield. The advantages of using macroporous resin are strong adsorption capacity, high selectivity, low material cost, high regeneration possibility, and less pollutants . Commonly used resins are adsorption and ion exchange resins, which utilize a non-specific physical adsorption mechanism and an ion-exchange mechanism, respectively (Xiong et al., 2019). Macroporous adsorption resins are often used for the separation and purification of natural products, such as flavonoids (Dong et al., 2015), phenolic compounds (Park and Lee., 2021), alkaloids (Zhang et al., 2012), anthocyanin , and antioxidants (Zou et al., 2017). Macroporous ion exchange resins are commonly used to separate amino acids (Dong et al., 2015;Chen et al., 2016;Zhang et al., 2018), lactic acid (Ahmad et al., 2021), nutrients (Ke et al., 2021), succinic acid (Alexandri et al., 2019), and decolorization (Shi et al., 2017) from fermentation broth. However, there are few reports on the separation and purification of bio-based PDAH from pretreated fermentation broth by macroporous ion exchange resin.
Crystallization is an ancient separation process that is usually the last step in the purification process, and its control is crucial. Crystallization is a common and necessary unit operation in the chemical industry, and it is widely used in various industries, from the production of basic materials to complex pharmaceuticals (Ms et al., 2020; Weng et al., 2020). Compared with other purification processes, crystallization has the advantages of a high recovery rate, good quality of recovered solid-liquid products, high yield, low energy consumption, good operability, and good stability (Sparenberg et al., 2021). At present, many products are separated and purified from fermentation broth using macroporous resin and then crystallized, such as bio-based carboxylic acids (Karp et al., 2018), antibiotics (Zheng et al., 2013), and hormones (Xu et al., 2018). In this study, PDAH of higher purity and more uniform particle size was obtained through crystallization, which provides a basis for the industrial production of PDI.
To characterize the separation performance of the resin and understand the basic principles of PDAH separation, this study investigated the effect of the ion exchange resin on the static adsorption of PDA, the associated adsorption thermodynamics and kinetics, the optimization of the separation process, and the penetration and desorption curves of a fixed-bed chromatographic column. The cooling-crystallization products of the resin desorption solution were compared with those of two other raw materials. The results indicate that high-quality PDAH can be obtained through this separation and purification route and used in the production of PDI.
2 Experimental section
2.1 Chemicals and reagents
The D113, D150, D152, and D155 resins used in the experiments are weakly acidic cation exchange resins, purchased from Yuan Ye Biotechnology Co., Ltd. (Shanghai, China). The chemical reagents used in the experiments (ethanol, hydrochloric acid, acetonitrile, and trifluoroacetic acid) were purchased from Aladdin (Shanghai, China). The PDA fermentation broth and deionized water were provided by our laboratory.
HPLC conditions
A YMC Carotenoid column (250 mm × 4.6 mm, S-5 μm, Grace, Columbia, MD, United States) was used for these experiments with the following conditions: the mobile phase consisted of a mixture of 5% acetonitrile and 0.5% trifluoroacetic acid at a flow rate of 0.8 ml min−1. The injection volume was 10 μl. The column temperature was 35°C, and the differential detector (1290, Agilent Technologies, Santa Clara, CA, United States) temperature was 35°C.
Static adsorption and desorption experiments
Before use, four cation exchange resins were soaked in three times the volume of ethanol overnight, then washed with three times the volume of 1.0 M hydrochloric acid, 1.0 M sodium hydroxide, and 1.0 M hydrochloric acid. The resin was then washed with deionized water until the washing solution was neutral. The resin was used after filtration.
Static equilibrium adsorption experiments were carried out in a 20°C constant temperature oscillator. Wet resin (10 ± 0.1 g) was added to 30 ml of 100 g L−1 PDA fermentation broth in a 100 ml conical flask. The speed of the thermostatic oscillator was set to 200 r min−1 and the reaction time was set to 2.0 h; under these conditions, the PDA fermentation liquid reached adsorption equilibrium. After adsorption, the adsorption solution was filtered and the concentration of filtrate was determined by HPLC. The resin was then washed with deionized water three times, and 20 ml of 1.0 M hydrochloric acid solution was added for desorption. The flask was held in a 20°C constant temperature oscillator and shaken for 2.0 h. The desorption solution was analyzed by HPLC (Figueira et al., 2022). The resin was screened according to the equilibrium adsorption capacity, desorption capacity, and desorption rate, and these parameters were calculated according to the following formulas:

q_e = (C_0 − C_e) · V_i / W
q_d = C_d · V_d / W
D = q_d / q_e × 100%

where q_e was the equilibrium adsorption capacity, mg g−1; C_0 was the initial concentration of adsorption solution, mg L−1; C_e was the equilibrium concentration of adsorption solution, mg L−1; V_i was the total volume of adsorbed solution, L; W was the mass of the wet resin, g; q_d was the equilibrium desorption amount, mg g−1; C_d was the desorption solution concentration, mg L−1; V_d was the volume of desorption solution, L; D was the desorption rate, %.
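These three formulas translate directly into code. The helper below uses concentrations in mg L−1, volumes in L, and wet resin mass in g; the example numbers are hypothetical and chosen only to illustrate the unit conventions.

```python
def static_adsorption(C0, Ce, Vi, Cd, Vd, W):
    """Static adsorption/desorption quantities for one batch experiment."""
    qe = (C0 - Ce) * Vi / W    # equilibrium adsorption capacity, mg/g
    qd = Cd * Vd / W           # equilibrium desorption amount, mg/g
    D = qd / qe * 100.0        # desorption rate, %
    return qe, qd, D

# Hypothetical batch: 100 g/L feed, 30 mL solution, 10 g wet resin,
# 20 mL of 1.0 M HCl desorbate.
qe, qd, D = static_adsorption(C0=100_000, Ce=68_000, Vi=0.030,
                              Cd=45_000, Vd=0.020, W=10.0)
print(f"qe = {qe:.1f} mg/g, qd = {qd:.1f} mg/g, D = {D:.1f} %")
```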
Thermodynamic experiments
The static equilibrium adsorption experiment was carried out at 20°C in a constant temperature oscillator. Wet resin (10 ± 0.1 g) was added to 30 ml PDA fermentation liquid of different concentrations (50-200 g L −1 ) in a 100 ml conical flask. The speed of the thermostatic oscillator was set to 200 r min −1 and the reaction time was set to 2.0 h; under these conditions, the PDA fermentation liquid reached adsorption equilibrium. The adsorption solution was then filtered, and the concentration of filtrate after adsorption was determined by HPLC at different initial concentrations.
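For illustration, the Langmuir model can be fitted to the resulting equilibrium data with SciPy as sketched below (the abstract notes that the data were also fitted to the Freundlich and Temkin-Pyzhev isotherms, which follow the same pattern with different model functions). The data points and initial guesses here are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qm, KL):
    """Langmuir isotherm: qe = qm * KL * Ce / (1 + KL * Ce)."""
    return qm * KL * Ce / (1.0 + KL * Ce)

# Hypothetical equilibrium data: Ce in g/L, qe in mg/g.
Ce = np.array([12.0, 31.0, 55.0, 90.0, 130.0])
qe = np.array([52.0, 74.0, 88.0, 95.0, 99.0])

(qm, KL), _ = curve_fit(langmuir, Ce, qe, p0=[100.0, 0.05])
residuals = qe - langmuir(Ce, qm, KL)
r2 = 1.0 - np.sum(residuals**2) / np.sum((qe - qe.mean())**2)
print(f"qm = {qm:.1f} mg/g, KL = {KL:.3f} L/g, R^2 = {r2:.3f}")
```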
Kinetic experiments
Kinetic adsorption experiments were carried out at different temperatures (20-40°C) with a thermostatic oscillator. Wet resin (10 ± 0.1 g) was added to 30 ml of 100 g L−1 PDA fermentation broth in a 100 ml conical flask. The speed of the thermostatic oscillator was set at 200 r min−1, and 200 μl samples were taken from the flask periodically for HPLC analysis of the instantaneous concentration until adsorption equilibrium was reached. The instantaneous adsorption capacity was calculated according to:

q_t = (C_0 − C_t) V_i / W

where q_t was the instantaneous adsorption capacity, mg g−1; C_0 was the initial concentration of the adsorption solution, mg L−1; C_t was the concentration of the adsorption solution at time t, mg L−1; V_i was the total volume of the adsorbed solution, L; W was the mass of the wet resin, g.
Dynamic adsorption and desorption experiment
A certain amount of wet resin was packed into a glass adsorption column by the wet method. The inner diameter of the adsorption column was 2.0 cm and the bed heights were 10.0, 14.0, 20.0, and 30.0 cm. A peristaltic pump was used to control the flow rate of the liquid at the outlet of the column, and a fraction collector was used to collect the outflow quantitatively. When the concentration of PDA at the column outlet equaled the initial concentration of the PDA solution, the adsorption experiment was complete. The effluent concentration in the different volume sections was determined by HPLC. The penetration curve of PDA on the resin column was drawn with the effluent volume V (ml) as the abscissa and the effluent concentration of PDA as the ordinate. The effects of PDA concentration, flow rate, and the H/D (column height-to-diameter ratio) on the dynamic penetration curve of PDA were investigated. In dynamic adsorption experiments, the adsorption capacity per unit of resin was determined by integrating the area above the penetration curve (Figueira et al., 2022), calculated according to:

m_a = ∫ (C_1 − C_ta) dV_t

Q_a = m_a / m_s

where C_1 was the initial concentration of dynamic adsorption, g L−1; C_ta was the concentration of the PDA effluent at a certain time, g L−1; V_t was the outflow volume corresponding to C_ta, ml; m_a was the mass of total PDA adsorbed by the resin column, mg; m_s was the amount of resin used, g; Q_a was the adsorption capacity of PDA per unit of resin, mg g−1.
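As an illustration of this integration, here is a minimal Python sketch using the trapezoid rule; the breakthrough data are synthetic placeholders shaped like a typical S-curve, not the measured effluent values:

```python
import numpy as np

def dynamic_capacity(v_ml, c_out, c_in, resin_mass_g):
    """Adsorbed mass m_a (mg) and per-gram capacity Q_a (mg/g) from a
    breakthrough curve: integrate (c_in - c_out) dV with the trapezoid rule.
    v_ml: effluent volume points (ml); c_out, c_in: PDA conc. (g/L = mg/ml)."""
    m_a = np.trapz(c_in - c_out, v_ml)   # mg, area above the curve
    return m_a, m_a / resin_mass_g

# Synthetic S-shaped breakthrough: outlet conc. rises toward the feed conc.
v = np.linspace(0, 120, 241)                      # ml of effluent
c_feed = 100.0                                    # g/L feed
c_out = c_feed / (1.0 + np.exp(-(v - 60) / 8))    # logistic breakthrough
m_a, q_a = dynamic_capacity(v, c_out, c_feed, resin_mass_g=60.0)
print(f"m_a = {m_a:.0f} mg, Q_a = {q_a:.1f} mg/g")
```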
Hydrochloric acid was used as the desorbing agent. A collector was used to collect the desorption effluent at the outlet of the chromatographic column, the pigment OD value of the effluent was measured, and the pigment curve was drawn with the effluent volume V (ml) as the abscissa and the effluent pigment OD value as the ordinate; the concentration of PDA in the desorption effluent was detected by HPLC. The desorption curve of PDA on the resin column was drawn with the effluent volume V_m (ml) as the abscissa and the concentration of PDA effluent as the ordinate. The effects of hydrochloric acid concentration, flow rate, and H/D on the pigment curve and the dynamic desorption curve of PDA were investigated. In order to obtain an eluent with high concentration, we selected the fraction of the eluate whose concentration reached more than 80% of the peak value, and calculated these quantities according to the following equations:

m_b = ∫ C_td dV (over 0 to V_m)

m_c = ∫ C_td dV (over V_1 to V_2)

F = m_c / m_b × 100%

where V_m was the volume of eluent required for complete desorption of PDA, ml; C_td was the concentration of the PDA desorption solution at time t, g L−1; m_b was the mass of total PDA eluted by the resin column, g; C_max was the peak concentration of the desorption solution, g L−1; V_1 was the volume of eluent when the concentration reached 0.8 C_max for the first time, ml; V_2 was the volume of eluent when the concentration reached 0.8 C_max for the second time, ml; m_c was the mass of PDA eluted between the two 0.8 C_max crossings, g; F was the proportion of the PDA mass in the section above 80% of the peak concentration relative to the total PDA mass eluted by the resin column, %.
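The 80%-of-peak fraction F can be computed the same way. The sketch below (Python/NumPy) assumes a single-peaked elution curve, so the region above 0.8 C_max coincides with the V_1 to V_2 section; the Gaussian elution profile is invented for illustration:

```python
import numpy as np

def peak_fraction(v_ml, c_elu, frac=0.8):
    """F (%) = mass eluted between the two frac*C_max crossings / total mass."""
    c_max = c_elu.max()
    above = c_elu >= frac * c_max                       # section above 0.8*C_max
    m_b = np.trapz(c_elu, v_ml)                         # total eluted mass
    m_c = np.trapz(np.where(above, c_elu, 0.0), v_ml)   # mass in the section
    return 100.0 * m_c / m_b, c_max

# Synthetic single-peak elution curve (Gaussian in effluent volume):
v = np.linspace(0, 100, 501)
c = 120.0 * np.exp(-((v - 40.0) / 12.0) ** 2)
f, c_max = peak_fraction(v, c)
print(f"C_max = {c_max:.1f} g/L, F = {f:.1f} %")
```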
Crystallization of PDAH
The PDAH solution after fermentation (PHF), the PDAH solution decolorized with activated carbon after fermentation (PHDF), and the PDAH solution after elution from the resin (PHER) were each placed in a jacketed beaker, preheated to 60°C, and then cooled to final temperatures ranging from 20°C to −10°C; the crystals were observed 1.0 h after the appearance of crystal precipitates. After crystallization was complete, the product was collected by suction filtration, washed, and dried. The experiment was repeated three times under the same operating conditions to determine the average purity and yield of the product (Sparenberg et al., 2021). A small amount of sample was placed on a slide, covered with a cover slip, observed with an optical microscope, and photographed. The purity and yield were calculated according to the following equations:

Purity = m_1 / M_1 × 100%

Yield = m_1 / M_2 × 100%

where m_1 was the mass of PDAH actually measured in the final product, g; M_1 was the total mass of the final product, g; M_2 was the mass of PDAH in the initial solution, g. The product structure was analyzed with a Nicolet Summit FTIR spectrometer (Thermo Fisher Scientific, United States); the sample was placed in the sample window for infrared scanning.
A D8 Advance powder X-ray diffractometer (Brock Technology Co., Ltd., Germany) with CuKα radiation (1.54056 Å) was used under the following conditions: emission slit 1°; measurement at room temperature; working voltage 40 kV; scanning angle 5-60°; scanning step size 0.05°; scanning speed 1 step per second.
Resin selection
The adsorption and desorption capacities of the cation exchange resins D113, D150, D152 and D155 are shown in Figure 1. Among the four resins, the unit adsorption capacity of D113 was the highest, reaching 156.51 mg g−1. However, the highest desorption capacity and desorption rate were observed with D152 (83.48 mg g−1 and 62.53%, respectively). As can be seen from Table 1, the particle size of D113 was smaller than that of the other three resins, which increased the contact area between D113 and the PDA fermentation broth and made the adsorption capacity of D113 higher than that of the other resins. However, the exchange capacity of D152 was the best among the four resins, which explains why its desorption capacity and desorption rate were higher than those of the other three resins (Du et al., 2012). Therefore, D152 was selected for further study of the adsorption of PDA from fermentation broth.
Effects of pH and temperature on the adsorption process
The equilibrium adsorption capacity of static adsorption under different pH values (7.0-13.0) is shown in Figure 2A. With increasing pH, the adsorption capacity of D152 for PDA reached 133.80 mg g−1 at pH 9.0. There were three main forms of PDA in aqueous solution: C5H14N2, C5H15N2^+ and C5H16N2^2+, which can be interconverted by adjusting the pH. When the pH was 9, PDA existed mostly in ionic form, with more C5H15N2^+ and less C5H16N2^2+. Under these conditions, ion adsorption was favored (Lee et al., 2019). When the pH reached 11, the unit adsorption capacity for PDA decreased significantly. At high pH, PDA mostly existed in solution in molecular form; at this point, adsorption occurred mostly through non-ionic interactions, which are difficult to exchange with an ionic resin. Above pH 9, therefore, the adsorption capacity of D152 for PDA decreased with increasing pH.
The equilibrium adsorption capacity of static adsorption under different temperatures (20-60°C) is shown in Figure 2B. It can be seen that the equilibrium adsorption capacity decreased as the temperature increased. The increase in temperature shifted the adsorption equilibrium in the reverse direction, since the adsorption process was exothermic (Zhuang et al., 2020).
The equilibrium adsorption capacity of static adsorption under different initial PDA concentrations (50-400 g L−1) is shown in Figure 2C. For a fixed resin dosage the total number of adsorption sites is limited, so the adsorption efficiency decreased with increasing concentration. Before the saturation adsorption capacity was reached, however, the adsorption capacity increased with increasing initial concentration (Ren et al., 2020). The saturated adsorption capacity of the unit resin was 181.06 mg g−1.
The equilibrium adsorption capacity of static adsorption for different adsorption times (0-2.0 h) is shown in Figure 2D. As expected, increasing the contact time between D152 and PDA increased the adsorption capacity until equilibrium was reached within 2.0 h.
Adsorption isotherm model
The adsorption performance of D152 for PDA was further examined with adsorption isotherm models. Finding the best correlation for the equilibrium curve is essential for optimizing the adsorption system. The Langmuir model is based on the following assumptions: ① the adsorbate is adsorbed on the surface of the adsorbent as a monolayer; ② adsorption is dynamic, and adsorbed molecules can return to the solution under the influence of thermal motion; ③ there is no interaction between the adsorbate molecules adsorbed on the surface of the adsorbent (Vinco et al., 2022). The following formula was used to calculate the equilibrium adsorption capacity:

q_e = q_m K_L C_e / (1 + K_L C_e)

where q_e was the equilibrium adsorption capacity, mg g−1; q_m was the maximum adsorption capacity, mg g−1; K_L was the adsorption equilibrium constant, L mg−1; C_e was the equilibrium adsorption concentration, mg L−1. The Freundlich model is an empirical model for adsorption on non-uniform surfaces (Ahmad et al., 2021), and can be expressed by the following equation:

q_e = K_F C_e^(1/n)

where q_e was the equilibrium adsorption capacity, mg g−1; C_e was the equilibrium adsorption concentration, mg L−1; K_F was the adsorption capacity, (mg g−1)·(mg L−1)^(1/n); n was the characteristic parameter of the equation.
The Temkin-Pyzhev model assumes a linear, rather than logarithmic, decrease in the heat of adsorption for molecules on the adsorbent surface (Foo and Hameed, 2010), and can be described as:

q_e = (RT / b_T) ln(a_T C_e)

where q_e was the equilibrium adsorption capacity, mg g−1; C_e was the equilibrium adsorption concentration, mg L−1; a_T and b_T were the Temkin constants, L mg−1 and J mol−1, respectively. The fitting of the different adsorption isotherm models is shown in Figure 3A. With increasing PDA equilibrium concentration, the distribution coefficient between the solid and liquid phases gradually decreased, the saturation of the resin phase gradually increased, and the slope of the equilibrium curve gradually decreased, indicating that the affinity between the exchanged ion and the resin decreased with increasing equilibrium concentration of the solution (Hou et al., 2022). The second derivative of the adsorption isotherm (f″(C_e) < 0) showed that the ion exchange equilibrium of PDA on D152 cation exchange resin was favorable.
The basic characteristics of the Langmuir isotherm can be expressed by a dimensionless constant R_L (Zhou et al., 2019):

R_L = 1 / (1 + K_L C_0)

where R_L > 1 indicates that the adsorption was unfavorable, R_L = 1 that the adsorption was linear, and R_L < 1 that the adsorption was favorable. The R_L values at the different PDA concentrations were all <1, indicating that D152 cation exchange resin had a good adsorption affinity for PDA. In the Freundlich model, when 1/n is in the range of 0.1-0.5, the adsorbate is easily adsorbed by the resin, and when 1/n is greater than 2.0, adsorption is inhibited. The results are shown in Table 2. The n value was 3.335 and the 1/n value was 0.30, which lies in the range 0.1-0.5, indicating that the adsorption of PDA on D152 cation exchange resin proceeds readily.
The Temkin-Pyzhev adsorption isotherm model was used to fit the adsorption isotherm data of PDA on D152 resin. As shown in Figure 3A, it can be seen that the fitting effect was good, and the uniform distribution of molecular binding energy of the adsorption layer was deduced (Foo and Hameed, 2010).
It can be seen from Table 2 that the correlation coefficients obtained by fitting the three isotherm models were all good (R² > 0.98), but the correlation coefficient of the Langmuir model (R² = 0.993) was better than those of the Freundlich (R² = 0.989) and Temkin-Pyzhev (R² = 0.985) models, and the theoretical maximum adsorption capacity of the Langmuir model, 181.49 mg g−1, was closer to the experimental maximum adsorption capacity of 181.06 mg g−1. Therefore, the adsorption of PDA on D152 was best described by the Langmuir isotherm model. It can be concluded that PDA is adsorbed on the surface of D152 resin as a monolayer, that the adsorption process is dynamic, and that there is no interaction between PDA molecules adsorbed on the resin surface.
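For readers who want to reproduce this kind of isotherm comparison, a minimal nonlinear least-squares sketch follows (Python with NumPy/SciPy). The (C_e, q_e) points are synthetic stand-ins generated from a Langmuir curve with noise, not the values behind Figure 3A, and the initial guesses are arbitrary:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qm, kl):
    return qm * kl * ce / (1.0 + kl * ce)

def freundlich(ce, kf, n):
    return kf * ce ** (1.0 / n)

def temkin(ce, a_t, b_t, rt=8.314 * 293.15):
    return (rt / b_t) * np.log(a_t * ce)

# Synthetic equilibrium data (mg/L, mg/g) shaped like a favorable isotherm.
ce = np.array([2e3, 5e3, 1e4, 2e4, 5e4, 1e5, 2e5])
qe = langmuir(ce, qm=181.5, kl=1.2e-4) + np.random.default_rng(0).normal(0, 2, 7)

for model, p0 in [(langmuir, (180, 1e-4)), (freundlich, (5, 3)), (temkin, (1e-3, 60))]:
    popt, _ = curve_fit(model, ce, qe, p0=p0, maxfev=10_000)
    ss_res = np.sum((qe - model(ce, *popt)) ** 2)
    r2 = 1 - ss_res / np.sum((qe - qe.mean()) ** 2)
    print(model.__name__, np.round(popt, 6), f"R^2 = {r2:.3f}")
```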
Adsorption thermodynamic parameters
In order to understand the thermodynamic characteristics of PDA adsorption on D152, the thermodynamic parameters of the adsorption process at 20°C were studied. Given the good correlation of the Freundlich model, the Gibbs free energy change (ΔG), enthalpy change (ΔH), and entropy change (ΔS) were calculated as (Chen and Zhang, 2014; Wang et al., 2019):

ΔG = −nRT

ln C_e = −ΔH/(RT) + C_K

ΔS = (ΔH − ΔG)/T

where C_K was a constant; R was the universal gas constant, 8.314 J·(mol·K)−1; T was the thermodynamic temperature, K; n was the coefficient of the Freundlich equation. According to the Clausius-Clapeyron relation above, the linear fit is shown in Figure 3B, with a fitting correlation coefficient of R² = 0.965. The enthalpy change was ΔH = −1.412 kJ mol−1, indicating an exothermic reaction; ΔG = −10.474 kJ mol−1, indicating that the adsorption was spontaneous; and ΔS = 29.89 J·(mol·K)−1, indicating that the degree of disorder at the solid-liquid interface increased (Chen and Zhang, 2014; Guo et al., 2014), i.e., the arrangement of PDA adsorbed on the resin surface was more disordered after adsorption.
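A sketch of the corresponding calculation (Python/NumPy), using the relations reconstructed above; the equilibrium concentrations at the three temperatures are invented placeholders chosen only to yield a small negative ΔH, so the printed numbers will not match the paper's values:

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol K)

# Hypothetical equilibrium concentrations (mg/L) at 20, 30, 40 deg C.
T = np.array([293.15, 303.15, 313.15])          # K
ce = np.array([49_600.0, 48_600.0, 47_700.0])   # mg/L

# ln(Ce) = -dH/(R T) + C_K : fit ln(Ce) against 1/T; slope = -dH/R
slope, c_k = np.polyfit(1.0 / T, np.log(ce), 1)
dH = -slope * R / 1000.0                         # kJ/mol (negative: exothermic)

n = 3.335                                        # Freundlich coefficient
dG = -n * R * T[0] / 1000.0                      # kJ/mol at 20 deg C
dS = (dH - dG) * 1000.0 / T[0]                   # J/(mol K)
print(f"dH = {dH:.2f} kJ/mol, dG = {dG:.2f} kJ/mol, dS = {dS:.1f} J/(mol K)")
```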
Adsorption kinetic model
The pseudo-first-order and pseudo-second-order kinetic models, which are widely used in kinetics research, were used to fit the kinetic data. The pseudo-first-order model can be described as:

ln(q_e − q_t) = ln q_e − K_1 t

where q_t was the resin adsorption capacity at time t, mg g−1; q_e was the equilibrium resin adsorption capacity, mg g−1; t was the adsorption time, min; K_1 was the pseudo-first-order rate constant, min−1. The pseudo-second-order model can be described as:

t/q_t = 1/(K_2 q_e²) + t/q_e

where K_2 was the pseudo-second-order rate constant, g·(mg·min)−1.
According to the fitting parameters in Table 3 and the model fits in Figures 4A, B, the pseudo-first-order kinetic model fitted the adsorption kinetic data of PDA better than the pseudo-second-order model, and the correlation coefficient (R²) of the pseudo-first-order model was higher at all temperatures. The equilibrium adsorption capacity calculated by the pseudo-first-order model was also closer to the experimental data, suggesting that adsorption was dominated by ion exchange and similar single-site processes. According to the rate constants K_1 and K_2 obtained by model fitting, the adsorption rate decreased with increasing temperature, which may be because the adsorption process is exothermic (Hou et al., 2022). In this case, increasing temperature shifted the adsorption equilibrium in the reverse direction and reduced the adsorption rate.
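The two kinetic models can be fitted in their equivalent nonlinear forms, as sketched below (Python/SciPy); the q_t series is synthetic, and fitting the nonlinear rather than linearized forms is an implementation choice here, not necessarily what the authors did:

```python
import numpy as np
from scipy.optimize import curve_fit

def pfo(t, qe, k1):
    # pseudo-first-order: q_t = q_e (1 - exp(-k1 t))
    return qe * (1.0 - np.exp(-k1 * t))

def pso(t, qe, k2):
    # pseudo-second-order: q_t = k2 qe^2 t / (1 + k2 qe t)
    return k2 * qe**2 * t / (1.0 + k2 * qe * t)

t = np.array([0, 5, 10, 20, 30, 45, 60, 90], dtype=float)     # min
qt = np.array([0, 52, 86, 122, 138, 148, 152, 154], float)    # mg/g, synthetic

for model, p0 in [(pfo, (155, 0.05)), (pso, (170, 5e-4))]:
    popt, _ = curve_fit(model, t, qt, p0=p0)
    r2 = 1 - np.sum((qt - model(t, *popt))**2) / np.sum((qt - qt.mean())**2)
    print(model.__name__, np.round(popt, 4), f"R^2 = {r2:.4f}")
```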
In order to further analyze the rate-limiting steps of adsorption on the resin, we also used the intra-particle diffusion model to fit the adsorption kinetic data:

q_t = k_p t^(1/2) + C

where q_t was the resin adsorption capacity at time t, mg g−1; t was the adsorption time, min; k_p was the intra-particle diffusion constant, mg·g−1·min−1/2; C was a parameter related to the boundary layer thickness. The diffusion mechanism of PDA on D152 was characterized by this model. The curves of q_t versus t^(1/2) at different temperatures and initial concentrations are shown in Figure 4C. These curves are clearly multi-linear, indicating that intra-particle diffusion was not the only rate-limiting step (Zhuang et al., 2020). It can be inferred that the initial stage (0-20 min) was the film diffusion stage, while the later stage (20-90 min) was governed by intra-particle diffusion.
The parameter C is directly proportional to the boundary layer thickness: the greater the value of C, the greater the boundary layer effect. A negative value of C indicates that the thickness of the boundary layer delays diffusion within the particles, while a positive value of C indicates that adsorption is fast (Deng et al., 2020). It can be seen from Table 4 that, whether in the film diffusion stage (0-20 min) or in the intra-particle diffusion stage (20-90 min), the C value decreased significantly with increasing temperature, indicating that higher temperature was not conducive to adsorption.
Dynamic adsorption and desorption results
The dynamic adsorption process of PDA on D152 was investigated by varying one factor at a time (PDA concentration, flow rate, and height-to-diameter ratio, H/D), and the penetration curve was used to describe these effects. The dynamic desorption process of PDA on D152 was investigated under different hydrochloric acid concentrations, flow rates and H/D ratios, and the desorption curve and pigment curve were used to describe the changes in these parameters and their impact on desorption.
Under a constant flow rate of 1.0 ml min−1 and an initial PDA fermentation liquid concentration of 100 g L−1, the dynamic adsorption of PDA on D152 at different H/D ratios was investigated, as shown in Figure 5A. As the H/D increased from 5:1 to 15:1, the unit adsorption capacity of the resin increased, reaching 87.39, 92.78, 99.68, and 100.26 mg g−1, respectively, because a larger H/D corresponds to a larger number of theoretical plates and therefore improved separation efficiency (Xiong et al., 2019). However, beyond an H/D of 10:1 the unit adsorption capacity did not increase significantly. Under a constant flow rate of 1.0 ml min−1 and a desorption solution of 1.5 M HCl, the elution behaviour was then considered. Comparing the F values of the eluted feed solution at different H/D ratios, as shown in Figures 5B-E, for H/D from 5:1 to 15:1 the F value reached 66.85, 75.48, 81.45, and 81.47%, respectively. The F value thus increased with the H/D, but did not change significantly beyond 10:1. At the same time, for the selected section of the desorption curve, the OD value was stable at 3.50-3.52 when the H/D was 5:1, 7:1, and 10:1, while at 15:1 the OD value exceeded 3.52, i.e., the pigment removal was weaker than at the other ratios. Considering adsorption and desorption together, and noting that a very large H/D would lead to high operating pressure and higher equipment cost on industrial scale-up, an H/D of 10:1 was selected as the best ratio.
With the aspect ratio fixed at 10:1 and an initial PDA fermentation liquid concentration of 100 g L−1, the dynamic adsorption of PDA on D152 resin at different flow rates was investigated. As shown in Figure 6A, as the flow velocity increased, breakthrough occurred sooner, and the volume required to reach complete adsorption increased significantly. The adsorption capacity at the low flow rate of 1.0 ml min−1 was 96.48 mg g−1, while at 1.5 ml min−1 and 2.0 ml min−1 it diminished to 88.71 and 77.64 mg g−1, respectively. It can be inferred that as the flow rate increased, the contact residence time between the fermentation liquid and the resin decreased, resulting in inadequate adsorption and decreased adsorption capacity (Zhuang et al., 2020). For desorption, with the H/D fixed at 10:1 and 1.5 M hydrochloric acid as the desorption agent, as shown in Figures 6B-D, a higher desorption flow rate increased the amount of desorption agent required for complete desorption, and the elution peak showed some tailing. When the flow rate was 1.0 and 1.5 ml min−1, the F values did not differ much, reaching 81.49 and 82.01%, respectively. However, when the flow rate rose to 2.0 ml min−1, the F value decreased significantly to 66.43%. This may be due to the excessive flow rate causing too rapid discharge during desorption: after C_max was reached, the concentration of the feed liquid decreased rapidly.
As the flow rate increased from 1.0 to 2.0 ml min−1, the overall OD value of the corresponding section of the pigment curve also increased, indicating that 1.0 ml min−1 removed pigment best. Therefore, 1.0 ml min−1 was selected as the best flow rate, consistent with the adsorption results. When the initial PDA fermentation liquid concentration was optimized with a fixed H/D of 10:1 and a constant flow rate of 1.0 ml min−1, as shown in Figure 7A, it was found that as the initial concentration increased from 50 g L−1 to 100 g L−1, the penetration volume decreased. This may be because the mass transfer process is slow at low concentration, delaying the breakthrough time. This observation was reinforced by the accompanying leftward shift of the penetration curve and the increase in its slope as the PDA concentration increased, leading to a reduction of the mass transfer interface (Show et al., 2022). The final adsorption capacity was 76.54, 94.85, and 96.45 mg g−1 at initial PDA fermentation liquid concentrations of 50, 75, and 100 g L−1, respectively. There was little difference between the equilibrium adsorption capacities at 75 g L−1 and 100 g L−1, but at 100 g L−1 less feed volume was required and complete penetration took less time. Therefore, 100 g L−1 was selected as the best fermentation liquid concentration. The hydrochloric acid concentration was then optimized, as shown in Figures 7B-D. At hydrochloric acid concentrations of 0.5, 1.0, and 1.5 M, the F values of the eluted liquid were 80.78, 80.16, and 81.49%, respectively, which are relatively close. However, at 1.0 M hydrochloric acid the peak concentration of the eluent reached 127.82 g L−1 and the overall peak shape was good. As the hydrochloric acid concentration increased from 0.5 M to 1.0 M, the overall OD value of the corresponding section of the pigment curve was stable at 3.48-3.50, while at 1.5 M it rose significantly to 3.50-3.52. Therefore, 1.0 M hydrochloric acid was selected as the best elution concentration.
PDAH crystallization
The cooling crystallization experiments were carried out under the same conditions for the three different raw materials, and the results are shown in Figures 8A, C, E. When the temperature was reduced to 0°C, the purity of the PHF, PHDF and PHER products was highest, at 85.55, 92.75, and 97.23%, respectively. This may be because below 0°C, although the yield improved, more impurities precipitated and were more difficult to remove during suction filtration. It can be seen from Figure 8B that there were many impurities in the PHF product. Figure 8D shows that the crystal form of PHDF was more complex, while Figure 8F shows that the PHER crystals were long rod-shaped. Therefore, PHER was used for crystallization. The water content of the PDAH obtained, determined by Karl Fischer titration, was 0.8%, and the molar ratio of PDA to HCl in the obtained PDAH crystals, determined by elemental analysis, was 1:1.81.
The cooling crystallization products of the three different raw materials were characterized by infrared spectroscopy. As shown in Figure 9A, when the amine formed a salt, the stretching vibration absorption peak of the N-H group shifted significantly to lower frequency, overlapped with the stretching vibration absorption peak of the C-H bond, and formed a wide, strong band in the range of 3,200-2,200 cm−1. Due to the deformation vibration of the N-H group, there was a strong absorption peak at 1,600-1,510 cm−1. The C-H group had an absorption peak near 1,475 cm−1 due to deformation vibration, and the C-N group had a stretching vibration peak at 1,230-1,050 cm−1. The infrared spectra confirm that the main functional groups of the products are essentially those of PDAH. In addition, among the three raw materials, PHER showed the fewest impurity peaks, indicating the highest purity.
The products obtained by cooling crystallization of the different raw materials were characterized by PXRD, as shown in Figure 9B. The main X-ray powder diffraction peaks at 2θ = 9.06°, 17.96°, and 25.52° appeared at the same positions for all three products, indicating the same crystal form; the main peaks of PHER were sharper, its impurity peaks were diminished, and its relative crystallinity was higher.
Conclusion
In this study, D152 showed the best adsorption and desorption performance for PDA in fermentation broth among the resins tested. The Langmuir, Freundlich, and Temkin-Pyzhev equations all fitted the adsorption equilibrium data of PDA on D152 at 20°C well. The adsorption free energy, enthalpy, and entropy were calculated, showing that the adsorption of PDA on D152 was a spontaneous exothermic process. The pseudo-first-order model best described the adsorption kinetics of PDA on D152. The dynamic experiments in a fixed-bed column showed that the adsorption capacity reached 96.45 mg g−1 and the F value reached 80.16%. The cooling crystallization of the three kinds of raw materials showed that the product crystallized from the resin eluate had the highest quality, with a purity of 97.23% and a yield of 42.32%. This study provides a low-cost and efficient method for the separation and purification of PDAH from PDA fermentation broth, and contributes to the industrial scale-up of PDAH separation.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author. | 2023-01-04T14:17:42.998Z | 2023-01-04T00:00:00.000 | {
"year": 2022,
"sha1": "925bb98191dc6080a9fcac40e986ae7fe19b6f50",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "925bb98191dc6080a9fcac40e986ae7fe19b6f50",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": []
} |
119117154 | pes2o/s2orc | v3-fos-license | Dark bubbles around high-redshift radio-loud AGN
At redshifts larger than 3 there is a disagreement between the number of blazars (whose jets point at us) and the number of expected parents (whose jets point elsewhere). Now we strengthen this claim because (i) the number of blazars identified within the SDSS+FIRST survey footprint increased, demanding a more numerous parent population, and (ii) the detected blazars have a radio flux large enough to be above the FIRST flux limit even if the jet is slightly misaligned. The foreseen number of these slightly misaligned jets, in principle detectable, is much larger than the number of radio-detected sources in the FIRST+SDSS survey (at redshift larger than 4). This argument is independent of the presence of an isotropic radio component, such as the hot spot or the radio lobe, and does not depend on the bulk Lorentz factor Gamma. We propose a scenario that ascribes the lack of slightly misaligned sources to an over-obscuration of the nucleus by a "bubble" of dust, possibly typical of the first high-redshift quasars.
INTRODUCTION
Blazars (flat spectrum radio quasars, FSRQs, and BL Lac objects) produce most of their non-thermal radiation in jets whose plasma is moving relativistically at small angles θ from the line of sight. How small the viewing angle must be for a source to be a blazar is not defined exactly, but we have proposed to use θ < 1/Γ, where Γ is the bulk Lorentz factor of the emitting plasma. Under this definition, for a jet pointing in our direction within an angle 1/Γ there must exist 2Γ² other sources with their jets pointing elsewhere: these sources form the parent population of blazars, and are usually associated with the FR I (low luminosity) and FR II (high luminosity) radio-galaxies (Fanaroff & Riley 1974). Volonteri et al. (2011; hereafter V11) pointed out the difficulties in reconciling the number of blazars observed at high redshifts with the number of the expected parent population. The flux of these sources is less beamed and amplified with respect to the aligned sources, and for large enough θ it is even de-beamed, but the extended structures at the end of the jets (hot spots and lobes), which emit isotropically, could be bright enough to be detectable, especially if the jet is powerful (FR II type), as are all the jets detected at high redshifts.
V11 also pointed out that the disagreement between the number of blazars and their parents occurs only for redshifts z ≳ 3. This was based on two cross-correlated catalogs: the Fifth Quasar Catalog (Schneider et al. 2010) of the Sloan Digital Sky Survey (SDSS; York et al. 2000) and the Faint Images of the Radio Sky at Twenty-Centimeters (FIRST; White et al. 1997). The SDSS Quasar Catalog is a spectroscopic, magnitude limited quasar survey (m_i < 19 or 21 for low- and high-redshift quasars) and FIRST is a VLA radio survey complete above 1 mJy at 1.4 GHz. The common sky area is 8,770 square degrees. The quasars belonging to the two samples (that we will collectively call SDSS+FIRST) have been studied by Shen et al. (2011). According to this study, and as listed in Tab. 2, in the redshift bin 4 < z < 5 there are 1192 quasars, 49 of which are radio-detected. Of these, at least 6 are blazars. Above redshift 5 there are 56 quasars, of which 4 are radio-detected, and 2 of these are blazars.
If we take the 6 blazars with redshift between 4 and 5, we expect (1200 ± 480)(Γ/10)² misaligned jets (assuming an uncertainty of √6 in the number of observed blazars in this redshift bin), but we see a total of only 49 radio sources above 1 mJy. Above z > 5, the 2 observed blazars should correspond to (400 ± 280)(Γ/10)² parents, but we see a total of only 4 sources above 1 mJy. These numbers strengthen the problem pointed out in V11 because, since then, more radio sources in the SDSS+FIRST turned out to be blazars (see e.g. Sbarrato et al. 2013; Sbarrato et al. 2015).
V11 proposed three possible solutions to this disagreement: i) the bulk Lorentz factor is much lower than what it is at z ≲ 3. To reconcile the numbers, we would need Γ = 2, which is inconsistent with the observed properties of high-z blazars (see, e.g., Lister et al. 2013); ii) there is a (yet unknown) bias in the SDSS+FIRST survey against the detection of high-z radio-loud sources. For instance, the isotropic radio structure could be young and compact, self-absorbed at frequencies larger than 10 GHz (in the rest frame), and therefore be below the 1 mJy flux detection limit of the FIRST; iii) the SDSS+FIRST survey misses the detection of a large population of parents because their optical flux is absorbed by dust. A fourth solution would be to postulate the absence of the hot spot and lobe in these sources, or that these structures are very faint in the radio band for redshifts larger than 3.
In fact, the ∝(1 + z)⁴ scaling of the cosmic microwave background (CMB) energy density can greatly affect the radio emission of extended structures, as explored by Mocz, Fabian & Blundell (2011) and by Ghisellini et al. (2013). Consider two sources at different redshifts that have the same size and magnetic field and that are energized by the same injected power. The higher-z source will have a fainter radio emission and a stronger X-ray luminosity than the lower-z source. This is because the emitting electrons will preferentially cool through inverse Compton scattering off CMB seed photons rather than producing synchrotron emission. The quenching of the radio emission can help to reconcile the disagreement between the number of high-z blazars and the corresponding number of expected parents, but in this paper we point out that this effect is not enough. In fact, we will point out that also the sources whose jets are slightly misaligned can produce a flux above the threshold limit of the radio survey. This is independent of the CMB energy density. If a jet emits a radio flux of, say, 200 mJy and is observed at a given (small) θ, there should be other similar jets observed at larger viewing angles whose radio flux is still larger than the flux limit of the survey (in our case, 1 mJy). Even these sources are missing. The problem is even more severe because we will show that the expected number of these sources is independent of the bulk Lorentz factor. We believe that this calls for a revision of our basic understanding of these high-z sources.
SLIGHTLY MISALIGNED JETS
We define as blazar a source whose jet is observed at a viewing angle θ ≤ 1/Γ. At θ = 1/Γ, the Doppler factor is

δ = 1/[Γ(1 − β cos θ)] = Γ.

Smaller angles have larger Doppler factors (at 0°, δ ∼ 2Γ), but the probability P to observe a jet pointing exactly at us is vanishingly small (P ∝ θ²). Assume that a source, in the comoving (primed) frame, emits a monochromatic flux F′(ν′) = F′(ν/δ). Then the observer at Earth will see a flux F(ν):

F(ν) = δ^p F′(ν/δ).

The exponent p can have different values. If the emission is a power law of spectral index α [i.e. F(ν) ∝ ν^−α] we have, among the several possibilities:

• p = 2 + α: in this case the jet emits between two locations that are stationary in the observer frame. The radiation is emitted isotropically in the comoving frame. Sometimes this is called the finite lifetime jet case.
• p = 3 + α: this is the case of a moving blob, emitting isotropically in the comoving frame.
• p = 4 + 2α: the jet is a moving blob emitting inverse Compton radiation using seed photons that are produced externally to the jet (the so-called external Compton mechanism) and that are distributed isotropically in the observer frame. The inverse Compton flux is not isotropic in the comoving frame, but it is enhanced in the forward direction (see Dermer 1995). This changes the pattern of the radiation as seen in the observer frame with respect to the previous (p = 3 + α) case.

Table 1 (caption; the opening is lost in extraction): ...(Shen et al. 2011). The bottom part lists (in italic) 3 other blazars present in the photometric catalog of the SDSS+FIRST, but not in the spectroscopic one. They are shown for completeness, but we ignore them in the following. The radio-loudness R is defined as F_5GHz/F_2500Å, where F_5GHz and F_2500Å are the monochromatic rest frame fluxes at 5 GHz and at 2500 Å, respectively. F_R is the radio flux density at 1.4 GHz in mJy.
Assuming that the maximum observable boost is for sin θ ∼ 1/Γ → cos θ = β:

F_max(ν) = Γ^p F′(ν/Γ).

The maximum viewing angle θ_c at which this source can be seen is set by

δ^p(θ_c) F′(ν/δ) = F_lim(ν),

where F_lim is the flux limit of the survey. Therefore the ratio of the maximum to the minimum fluxes gives:

F_max/F_lim = [Γ/δ(θ_c)]^p = [Γ²(1 − β cos θ_c)]^p.

This gives the maximum viewing angle as:

cos θ_c = (1/β)[1 − (1/Γ²)(F_max/F_lim)^(1/p)].

Now we can calculate the ratio R of the number of sources oriented within θ_c to the sources oriented within θ = 1/Γ:

R = (1 − cos θ_c)/[1 − cos(1/Γ)] ≈ 2(F_max/F_lim)^(1/p) − 1,

where the last approximate equality is valid for β → 1. In this limit the ratio R does not depend on Γ.
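A minimal numerical sketch of this ratio (Python); the function simply evaluates the β → 1 approximation derived above, with the fluxes and p as free inputs:

```python
def misaligned_ratio(f_max_mjy, f_lim_mjy, p):
    """R = N(theta < theta_c) / N(theta < 1/Gamma) ~ 2 (F_max/F_lim)^(1/p) - 1,
    valid for beta -> 1; independent of the bulk Lorentz factor Gamma."""
    return 2.0 * (f_max_mjy / f_lim_mjy) ** (1.0 / p) - 1.0

for p in (2, 3):  # finite-lifetime jet vs. moving blob
    print(f"p = {p}: R = {misaligned_ratio(100.0, 1.0, p):.1f}")
# p = 2: R = 19.0 ; p = 3: R = 8.3  (100 mJy blazar, 1 mJy survey limit)
```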
As an example, assume that the brightest radio source in a sample is a blazar with F_max(ν) = 100 mJy and the limiting flux of the survey is 1 mJy: the last equation then gives R ≈ 19 for p = 2 and R ≈ 8.3 for p = 3.

Table 2. Numbers of radio-detected quasars in the SDSS+FIRST spectroscopic sample and number of predicted misaligned objects. We have applied Eq. 8 to each blazar, considering its actual radio flux and the limiting flux of 1 mJy of the FIRST survey. For each blazar, there are 2Γ² ∼ 338(Γ/13)² jets pointing in other directions. The number of blazars refers to the objects spectroscopically observed for the construction of the catalog. There are other high-z blazars in the SDSS+FIRST sky area that were photometrically detected but not followed up spectroscopically (see Table 1 and Ghisellini et al. 2015).
PREDICTED VS OBSERVED RADIO-LOUD SOURCES
We can apply the calculations detailed in the previous section to all the blazars in a sample. Adopting for each of them its radio flux F_i(ν) and the same flux limit (i.e. 1 mJy), we can calculate for each blazar how many slightly misaligned jets we expect to be observable, and then sum up to obtain the total expected ratio between the total detectable sources and the total observed blazars:

R_tot = (1/N_blazar) Σ_i [2(F_i/F_lim)^(1/p) − 1].

In a few previous works we classified a number of blazar candidates included in the SDSS+FIRST (Sbarrato et al. 2012). To identify the most reliable candidates, we selected the z > 4 quasars with a radio-loudness R = F_5GHz/F_2500Å > 100 from SDSS+FIRST. In fact, among radio-loud sources, a more extreme radio-to-optical dominance is a first indication of a jet oriented roughly towards us. X-ray follow-up observations allowed us to confirm the blazar classification of the most radio-loud candidates (and more observations will follow). From these studies, we concluded that there are (at least) 6 blazars in the SDSS+FIRST spectroscopic catalog at 4 ≤ z < 5, and 2 at z > 5. These are listed in the top part of Tab. 1. Note that the three sources included in the SDSS+FIRST, but not in its spectroscopic catalog (classified in Ghisellini et al. 2015), are here excluded because they do not have the necessary optical flux to enter the SDSS+FIRST spectroscopic, flux-limited sample. They are listed in the bottom part of Tab. 1.
We are confident that these blazars are indeed observed at an angle θ < 1/Γ. In fact the X-ray flux and spectrum (due to external Compton) are dependent on the viewing angle in a stronger way than the radio (synchrotron) flux, as mentioned in §2 (the p-value is different). At high redshift (and in the absence of a detection in the γ-ray band), this is the best diagnostic to derive the jet orientation (see Fig. 3 of Sbarrato et al. 2015 showing how the SED changes by small changes in the viewing angle). Furthermore, in the case of B2 1023+25, the small viewing angle and large Lorentz factor were confirmed by the european VLBI (EVN) observations by Frey et al. (2015).
The existence of these blazars, compared with the whole SDSS+FIRST radio-detected sample, highlights a large discrepancy regarding the number of slightly misaligned jets. As listed in Table 2, the radio fluxes of the 6 blazars allow us to predict a total of 616±246 [(6 ± √6) × R_tot(p = 2)] or 270±108 [(6 ± √6) × R_tot(p = 3)] jetted sources detectable in the SDSS+FIRST survey in the 4 ≤ z < 5 redshift bin. At z > 5, this number is 72±50 (p = 2) or 30±21 (p = 3). We believe that this is a severe disagreement, because: (i) the number of expected slightly misaligned objects derived from the known blazars is robust, because it is independent of Γ.
(ii) Since the flux comes from the jet, these objects are observed as point-like sources. This bypasses the problem of associating one (or two, in the case of a double radio source) radio objects not coincident with a SDSS source. Furthermore, point-like sources are easier to detect with respect to extended ones.
(iii) All high-z blazars have their optical flux completely dominated by the accretion disc radiation. The synchrotron emission (that can be depressed more than the radio in slightly misaligned sources, since α ∼1) does not contribute significantly to the optical flux. This implies that slightly misaligned sources should in principle be included in the SDSS quasar catalog.
(iv) The presence of a dusty torus should not affect the optical flux, as long as its opening angle is similar to that of lower-redshift sources. This implies that the optical emission, in a standard scenario, should not be obscured.
OBSCURING BUBBLES: A WAY OUT
The discrepancy between the predicted and observed number of sources with slightly misaligned jets is serious, and calls for an explanation. In addition, we are not aware of any instrumental selection effect strongly biasing our sample. The possibilities proposed previously by V11 aimed to account for the lack of extended, isotropically emitting radio sources, namely the foreseen parent population of high-z blazars. To explain these (still missing) sources we can envisage two possible reasons: i) the observational difficulty of detecting a weak extended radio source at some angular distance from a point-like optical object, and ii) the "radio quenching" effect due to the enhanced CMB radiation energy density, which cools the emitting electrons more efficiently through the inverse Compton mechanism and weakens their radio emission.
However, the regions of the jet producing the 1.4 GHz radio flux (≳7 GHz rest frame) are not affected by the "quenching" of the radio emission due to the CMB radiation. This is because they have a magnetic energy density much larger than that of the CMB, even taking into account the Γ² enhancement due to the relativistic motion of the emitting plasma.
At z = 4, the CMB energy density is U_CMB ∼ 2.6 × 10⁻¹⁰ erg cm⁻³. In the comoving frame, this is enhanced by a factor ∼Γ², thus reaching U′_CMB = 2.6 × 10⁻⁸ (Γ/10)² erg cm⁻³. Most of the observed radiation from the jet is produced in a compact region, where the magnetic field is around 3 G and the observed self-absorption frequency is ν_t ∼ 3 × 10¹² Hz (rest frame). In the case of a flat radio spectrum, the self-absorption frequency scales as R⁻¹, where R is the distance from the black hole. This is the same dependence as that of the dominant component of the magnetic field B. Therefore, in the region self-absorbing at 7 GHz (rest frame), B should be ∼6 mG, and its energy density U_B = B²/(8π) ∼ 1.4 × 10⁻⁶ erg cm⁻³. Since U_B > U′_CMB, there is no "quenching" of the synchrotron emission of the jet.
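These energy densities are easy to verify numerically. The short Python sketch below uses the standard radiation constant and present-day CMB temperature, with Γ = 10 and B = 6 mG taken from the text:

```python
import numpy as np

A_RAD = 7.566e-15   # radiation constant, erg cm^-3 K^-4
T_CMB0 = 2.725      # CMB temperature today, K

def u_cmb(z):
    """CMB energy density at redshift z, erg/cm^3 (scales as (1+z)^4)."""
    return A_RAD * T_CMB0**4 * (1 + z) ** 4

gamma = 10.0
u_cmb_comoving = gamma**2 * u_cmb(4.0)       # boosted into the jet frame
u_b = (6e-3) ** 2 / (8 * np.pi)              # magnetic energy density, B = 6 mG

print(f"U_CMB(z=4)   = {u_cmb(4.0):.2e} erg/cm^3")     # ~2.6e-10
print(f"U'_CMB (jet) = {u_cmb_comoving:.2e} erg/cm^3") # ~2.6e-8
print(f"U_B (6 mG)   = {u_b:.2e} erg/cm^3")            # ~1.4e-6 > U'_CMB
```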
The proposed scenario
To solve the tension between the predicted and observed numbers of sources, we propose a scenario that follows the ideas put forward by Fabian (1999). At redshifts larger than ∼4, jetted sources hosting a black hole with mass M ≳ 10⁹ M⊙ (only quasars with very massive black holes can be detected in the SDSS) are completely (i.e. over 4π) surrounded by obscuring material. Only the jet can pierce through this material and break out. Observers looking down the jet can see the nuclear emission from the accretion disc and the broad emission lines. For observers looking at viewing angles even only slightly larger than θ_j ≈ 1/Γ, the optical emission (including broad lines) is absorbed, the flux is fainter, and the source cannot enter the SDSS catalog. The absorbed radiation is re-radiated in the infrared.
Even if the hot spots or the lobes were indeed emitting a radio flux above the 1 mJy level at the observed 1.4 GHz frequency, there would be no source in the SDSS to match with if the jet is only slightly misaligned. The quasi-spherical dusty structure (hereafter "obscuring" or "dark bubble") can cover the nuclear region until the accretion disc radiation pressure blows it away. This can occur at a threshold luminosity L_th = η_d Ṁ c².
DISCUSSION
Let us assume that the obscuring bubbles exist not only in jetted sources, but are common to all high-redshift quasars, including radio-quiet ones. The evolution in time of the obscuring bubbles and of the central black hole mass could, however, be different in jetted and non-jetted sources.
In fact, the presence of a jet could affect the accretion efficiency η_d, defined as L_d = η_d Ṁ c²: part of the dissipation of the gravitational energy could amplify the magnetic field instrumental to launching the jet. In other words, while in the case of non-jetted AGN the gravitational energy is dissipated only through radiation from the disc (i.e. η_d = η), radio-loud sources could use a fraction f of the released gravitational energy to heat the disc, and the remaining fraction (1 − f) to launch the jet (Jolley & Kuncic 2008; Jolley et al. 2009):

η_d = f η.

This condition could lead to different evolution patterns of the obscuring bubbles. If we assume Eddington-limited accretion until the obscuring bubble is blown away once L_th is reached, the mass growth rate of the black hole is:

dM/dt = (1 − η) L_Edd/(η_d c²) = [(1 − η)/η_d] [4πG m_p/(σ_T c)] M,

where m_p is the proton mass, σ_T is the Thomson cross section and G is the gravitational constant. Therefore the black hole mass evolves as:

M(t) = M₀ exp[(1 − η)/η_d · t/t_Edd], with t_Edd ≡ σ_T c/(4πG m_p) ≃ 0.45 Gyr.

The threshold luminosity can therefore be expressed as a function of time:

L_d(t) = L_Edd(t) = M(t) c²/t_Edd,

from which we can derive how much time it takes for a massive black hole to reach the threshold luminosity itself:

t_th = t_Edd [η_d/(1 − η)] ln[L_th t_Edd/(M₀ c²)].

Considering the difference in the use of gravitational energy in jetted and non-jetted AGN, there is a clear difference in the time needed for a source to blow away the dark bubble: if radio-loud AGN dissipate in radiation only f = 1/2 of the released gravitational energy, radio-loud AGN can get rid of their dark bubbles in half the time, compared to non-jetted sources.
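A sketch of this timing argument in Python (cgs units); the seed mass, efficiencies and L_th are the illustrative values quoted in the text, and the formulas are the exponential-growth relations reconstructed above, so the printed times are indicative only:

```python
import numpy as np

# cgs constants
SIGMA_T = 6.652e-25   # Thomson cross section, cm^2
M_P = 1.673e-24       # proton mass, g
G = 6.674e-8          # gravitational constant
C = 2.998e10          # speed of light, cm/s
MSUN = 1.989e33       # solar mass, g
GYR = 3.156e16        # seconds in a Gyr

T_EDD = SIGMA_T * C / (4 * np.pi * G * M_P)   # Eddington time, ~0.45 Gyr

def t_threshold(m0_msun, l_th_erg_s, eta, f=1.0):
    """Time (Gyr) for Eddington-limited growth from seed m0 to reach L_th,
    with disc efficiency eta_d = f*eta (f = 1 for non-jetted sources)."""
    eta_d = f * eta
    m_th = l_th_erg_s * T_EDD / C**2          # mass whose L_Edd equals L_th
    return T_EDD * (eta_d / (1 - eta)) * np.log(m_th / (m0_msun * MSUN)) / GYR

for f, label in [(1.0, "non-jetted (eta_d = eta)"), (0.5, "jetted (eta_d = eta/2)")]:
    t = t_threshold(m0_msun=100, l_th_erg_s=1e47, eta=0.3, f=f)
    print(f"{label}: t_th = {t:.2f} Gyr")
```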
On the other hand, the black hole mass at the time t_th is independent of f. For illustration, let us compare jetted and non-jetted sources of equal seed black hole mass M₀, all emitting at their Eddington luminosity. Jetted sources have black holes that grow faster (if η_d = fη). Therefore, at any given time, their Eddington luminosity is larger than that of the radio-quiet ones accreting with the same total η, but with η_d = η. Fig. 2 shows the growth of the black hole for different values of η and η_d, assuming that the accretion starts at z = 20 on a seed black hole mass of 100 M⊙. Assuming a threshold luminosity of L_th = 10⁴⁷ erg s⁻¹, this is reached first by the jetted sources. Fig. 2 also shows the case of a total efficiency η = 0.1. Although we note the same trend (jetted sources with η_d = 0.05 grow faster), in this case the threshold luminosity L_th is reached at much larger redshifts. At z > 4, all jetted sources would have lost their absorbing bubble, and would be visible. One could also have jetted sources with η = 0.3 (and a smaller η_d), but radio-quiet sources with η = η_d ∼ 0.1. In this case the radio-quiet ones could blow out the absorbing bubbles earlier than the jetted sources. This does not affect the general picture we are proposing, but it seems unlikely that at very early times, when we have large accretion rates, the spin of the black hole (which controls the efficiency η) is less than its maximum value (Thorne 1974) for all kinds of objects. Major mergers could reset the black hole spin to values smaller than unity, but the rarity of very large black hole masses and the short available time (of the order of 1 Gyr) make this possibility unlikely.
In the case we are discussing (all sources have η = 0.3, but the η_d of radio-quiet sources is larger than in radio-loud ones), we have an interesting consequence. If we consider very large black hole masses (larger than 10⁹ M⊙), jetted sources become fully visible in the optical at earlier times than radio-quiet objects. Even if the intrinsic ratio N_L/N_Q between the number of jetted and non-jetted sources were constant in time (e.g. N_L/N_Q = 0.1, as at low redshift), we would infer at z ≳ 4 a radio-loud fraction larger than N_L/N_Q from the blazar population. We stress that this would be true only if we consider large black hole masses, which need ∼1 Gyr to blow up the absorbing bubble. If the critical luminosity L_th is smaller, it can be produced by a black hole of smaller mass, which is reached at earlier times (larger redshifts). In this case, at z ∼ 4, these sources are all visible, since they have already blown up their bubbles.
This dark bubble scenario makes a simple prediction: most high-z parents of blazars with large black hole masses should be absorbed in the optical band, but should be very bright in the infrared. In this respect we can look at high-z radio-galaxies. Indeed, there is already one interesting example, 4C 41.17 (z ∼ 3.8), which is extremely bright in the far infrared (with flux densities ranging from 23.4±2.4 µJy at 3.6 µm to 36.5±3.5 µJy at 8 µm and a luminosity exceeding 10⁴⁷ erg s⁻¹), but fainter in the optical by a factor ∼30 (Seymour et al. 2007; Chambers et al. 1990; van Breugel et al. 1998; Wu et al., in preparation). This is not proof of a 4π absorbing bubble, but it suggests that the absorbing material intercepts a larger fraction of the visible light, compared to local radio-galaxies. | 2016-05-11T20:00:00.000Z | 2016-03-17T00:00:00.000 | {
"year": 2016,
"sha1": "33f232018bad5042a682b6efe06a3ba2c0c005d2",
"oa_license": null,
"oa_url": "https://academic.oup.com/mnrasl/article-pdf/461/1/L21/8009729/slw089.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "33f232018bad5042a682b6efe06a3ba2c0c005d2",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
9275114 | pes2o/s2orc | v3-fos-license | Analysis of Risk Factors for Adjacent Segment Degeneration Occurring More than 5 Years after Fusion with Pedicle Screw Fixation for Degenerative Lumbar Spine
Study Design A retrospective study. Purpose We investigated the risk factors for adjacent segment degeneration (ASD) after more than 5 years of follow-up of lumbar spinal fusion. Overview of Literature There are many concerns regarding ASD following lumbar spinal fusion. However, there is a great deal of dispute about the risk factors. Methods A total of 55 patients who were followed up for more than 5 years after lumbar fusion were observed. Gender, age, residence, fusion method, number of fusion segments and radiological measurements were analyzed. In the radiological measurements, disc height, lumbar lordotic angle (LLA), fusion segment lordotic angle and fusion segment lordotic angle per level (FSLA per level) were estimated. On preoperative MRI, Pfirrmann's classification was used. The clinical result was evaluated by the criteria of Kim and Kim. Statistical univariate analysis was performed with the chi-square test using SPSS ver. 12.0. Multivariate logistic regression analysis was conducted with SAS ver. 9. Results There were 21 patients with adjacent segment degeneration. There was little relationship between ASD and gender, age, residence, fusion method, number of fusion segments, degree of preoperative adjacent disc degeneration on MRI, or preoperative and postoperative LLA. However, the frequency of ASD was significantly lower in cases where the FSLA per level was >15° (p=0.009). There was no significant relationship between ASD and the clinical result. Conclusions In patients followed up for more than 5 years after lumbar spinal fusion, the most important factor in the prevention of ASD was restoration of the FSLA per level to >15°.
Introduction
When a structural deformity or nerve compression in degenerative lumbar disease is so severe that simple decompression alone does not produce satisfactory outcomes, extensive decompression combined with lumbar fusion to maintain stability is a widely performed surgical treatment.
Although various fusion methods and fixation systems have been developed to achieve successful fusion in spinal surgery, long-term follow-up after solid fusion has revealed degenerative changes at the adjacent segments due to the loss of mobility at the fused site and the resulting mechanical load [1,2]. Degenerative changes at the adjacent segments include segmental instability, spinal stenosis, intervertebral disc lesion, retro-spondylolisthesis and fracture [3-5]. These poor outcomes are known to follow the accelerated degenerative changes at the adjacent segments after fusion [2,6,7]. As these complications are observed during long-term follow-up, cautious application of fusion itself as well as new alternatives have been suggested. Therefore, measures to reduce and treat degenerative changes after fusion are being discussed, along with increased interest in causative factors related to prevention and treatment reported by many studies. Nevertheless, these are still controversial.
On the basis of previous studies, in order to determine its causative factors, we statistically analyzed the correlation between possible factors and radiographic degenerative change at the adjacent segments among patients during middle- or long-term follow-up of over 5 years after fusion with pedicle screws. In addition, we investigated the correlation between radiological degenerative change at the adjacent segments and the actual clinical symptoms, in order to show whether the radiological change is an index of actual abnormality. We define radiographic degenerative change at an adjacent segment as 'adjacent segment degeneration', and adjacent segment degeneration accompanied by clinical symptoms as 'adjacent segment disease'.
Materials
The subjects of this study were 55 patients who had undergone pedicle screw fixation and spine fusion of three or fewer segments due to degenerative lumbar disease. The patients had been followed up for over 5 years. Their mean age at operation was 50.2 years (range, 34-67 years) and they consisted of 18 males and 37 females. Their mean follow-up period was 8 years and 6 months (range, 60-190 months). All of the surgery was performed by one orthopedic surgeon. The fusion methods were posterolateral fusion and posterior lumbar interbody fusion in 24 and 31 cases, respectively (Table 1).
Methods
The 55 subjects were retrospectively investigated with their medical records and radiological findings.
Criteria for degenerative change at adjacent segments: radiological degenerative change at the adjacent segments was considered present when, at the last follow-up, any of the following was found at the closest upper or lower segment: anterior or posterior displacement of >3 mm on the sagittal radiograph; a decline of 20% or more in the height of the intervertebral disc relative to that of the upper vertebral body; or segmental motion instability of more than 15° on flexion-extension sagittal radiographs.
1) Patient-related factors

Gender, age and lifestyle by residential area were analyzed as patient-related factors which could have some influence. Age was examined by dividing the patients into two groups: ≥50 years and <50 years of age (mean, 50 years). The effect of differences in lifestyle was examined by classifying residential areas into urban and rural categories.
2) Preoperative lumbar factors

Magnetic resonance imaging (MRI) was used to investigate whether there had been preoperative adjacent disc degeneration. Patients graded ≥III in the five-grade classification of Pfirrmann et al. [8] on MRI were considered to have a degenerative change. In addition, the preoperative lumbar lordotic angle was measured as the Cobb's angle between the upper endplate of the first lumbar vertebra and the upper endplate of the sacrum. It was classified, with its mean value of 32° as a standard, into ≥32° and <32°, and its correlation with degenerative change at the adjacent segments was assessed.
3) Surgery-related factors

As factors related to surgical treatment, the fusion method (posterolateral fusion or posterior lumbar interbody fusion) and the number of fusion segments (one, two or three segments) were analyzed.
4) Postoperative radiological change-related factors
Measures from the radiological images taken immediately after surgery were evaluated as surgical outcomes. First, the postoperative lumbar lordotic angle was classified, with its mean value of 40° as a standard, into ≥40° and <40°, and the significance of each group was analyzed. Next, the fusion segment lordotic angle per level was calculated by dividing the lordotic angle of the fusion site, i.e., the Cobb's angle between the upper endplate of the uppermost fused segment and the lower endplate of the lowest fused segment, by the number of fused segments. It was also divided, with its mean of 15° as a standard, into ≥15° and <15°, and its correlation with degenerative change of the adjacent segments was assessed.
5) Correlation between radiological degenerative change in adjacent segments and clinical symptoms
The relationship between degenerative change in the adjacent segments and clinical outcome was assessed. The clinical outcomes were divided into satisfactory (excellent, good) and unsatisfactory (fair, poor) on the basis of the criteria of Kim and Kim [9].
6) Statistical analysis
To verify the significance of each factor, a univariate analysis was performed with the chi-square test by using SPSS ver. 12.0 (SPSS Inc., Chicago, IL, USA). Multivariate logistic regression analysis including all factors was also performed with SAS ver. 9 (SAS Institute, Cary, NC, USA), and the odds ratio of the significant factors was calculated. The significance level was p<0.05.
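As an illustration of the univariate step, the chi-square test on a 2x2 table can be reproduced with open scientific Python libraries in place of SPSS; the counts used here are the gender data reported in the Results:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: male, female; columns: degeneration present, absent (counts from the Results)
table = np.array([[7, 11],
                  [14, 23]])
# correction=False (no Yates continuity correction) reproduces the reported p=0.940
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2={chi2:.4f}, p={p:.3f}")  # p = 0.940, not significant
```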
Results
Radiological degenerative change at adjacent segments
Degenerative change at the adjacent segments was observed in a total of 21 cases. The change occurred at the upper segments in 18 cases (14 with retrolisthesis, seven with decreased intervertebral disc height, seven with segmental motion instability, and one with spondylolisthesis), at the lower segments in four cases (one with retrolisthesis, two with decreased intervertebral disc height, and one with spondylolisthesis), and one case showed the change at both the upper and lower segments.
1) Patient-related factors
According to the analysis of patient-related factors, the subjects included 18 males and 37 females, and postoperative degenerative change was found in seven males and 14 females; hence, there was no significant difference by gender (p=0.940). The subjects were aged ≥50 years in 34 cases and <50 years in 21 cases. Degenerative change did not differ significantly by age, as it was found in 14 of the 34 cases aged ≥50 years and in seven of the 21 aged <50 years (p=0.561). The relationship between degenerative change and residential area, as a proxy for lifestyle, was also examined. The change was observed in 12 of 35 urban residents and nine of 20 rural residents; there was no statistically significant correlation (p=0.431) (Table 2).
2) Preoperative lumbar factors
When the influence of the degree of preoperative adjacent disc degeneration was investigated by MRI, degenerative change at the adjacent segments was found at the last follow-up in 11 of 29 cases with, and ten of 26 cases without, preoperative degenerative change; the difference was not statistically significant (p=0.968). In addition, degenerative change was observed in eight of 27 cases with a preoperative lumbar lordotic angle of ≥32° and in 13 of 28 cases with an angle of <32°; hence, there was no significant difference (p=0.200) (Table 2).
3) Surgery-related factors
When the difference by fusion method was investigated, degenerative change was found in 11 of 24 cases treated with posterolateral fusion and 10 of 31 treated with posterior lumbar interbody fusion; the difference was not statistically significant (p=0.304). As for the number of fusion segments, one-segment fusion was performed in 26 cases and two- or three-segment fusion in 29 cases, with degenerative change observed in eight and 13 cases, respectively. There was also no significant difference (p=0.284) (Table 2).
4) Postoperative radiological change-related factors
When the influence of the postoperative lumbar lordotic angle on degenerative change at the adjacent segments was investigated, the change was found in ten of 29 cases with an angle of ≥40° and 11 of 26 with an angle of <40°; the difference was not significant (p=0.551). In addition, the fusion segment lordotic angle per level was ≥15° in 28 cases and <15° in 27 cases, with degenerative change shown in six and 15 cases, respectively. This difference was statistically significant (p=0.009) (Table 2).
5) Correlation between radiological degenerative change of adjacent segments and clinical symptoms
Regarding the correlation between degenerative change of the adjacent segments and clinical outcomes, the change was shown in 18 of 44 cases with satisfactory clinical outcomes and three of 11 with unsatisfactory outcomes. As there was no significant difference, the radiological change did not imply unsatisfactory clinical outcomes (p=0.405) (Table 2).
Among the 21 cases with change at the adjacent segments, three needed revision surgery (14.3% of these cases; 5.5% of the total subjects); two were surgically treated for spinal stenosis at the upper adjacent segments, and the other for segmental instability.
Multivariate logistic regression analysis of the risk factors was conducted with gender, age, residential area, fusion method, degree of preoperative adjacent disc degeneration on MRI, number of fusion segments, postoperative lumbar lordotic angle, and fusion segment lordotic angle per level as independent variables. A fusion segment lordotic angle per level of <15° increased the risk of degenerative change at the adjacent segments 4.666-fold (95% confidence interval, 1.015-21.439) (Table 3).
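The reported odds ratio and interval follow from the fitted coefficient as OR = exp(beta), with Wald limits exp(beta ± 1.96·SE). The following check back-calculates the standard error from the reported upper limit, which assumes Wald-type intervals were used:

```python
import math

odds_ratio = 4.666
beta = math.log(odds_ratio)                    # fitted regression coefficient
# Back-calculate the standard error from the reported upper limit (assumed Wald interval)
se = (math.log(21.439) - beta) / 1.96
lower = math.exp(beta - 1.96 * se)
print(f"beta={beta:.3f}, SE={se:.3f}, lower={lower:.3f}")  # lower limit close to 1.015
```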
Discussion
Decompression and lumbar fusion have been widely performed as surgical treatment for lumbar degenerative disease. Spinal fusion, however, trades benefits secured just after surgery against future problems. Complications of lumbar fusion, such as intervertebral disc degeneration at adjacent segments, instability, fatigue, and fracture, have been observed during middle- and long-term follow-up [3-5]. Although many researchers have pointed to degenerative lesions at the adjacent segments occurring after fusion as major causes of these late complications [2,6,7], their causes, frequencies, and risk factors are still controversial.
The predominant view is that degenerative lesions at the adjacent segments can be part of normal aging, and that the reduced mobility and mechanical load following lumbar fusion accelerate the degeneration [1-3,10,11]. This biomechanical change at the adjacent segments is affected by the range of fused segments and the sagittal angle, and stronger fixation of the fused segments is known to have a larger effect, as more stress is placed on the adjacent segments [12,13]. Cunningham et al. [14] reported in a cadaveric study that the pressure in the adjacent intervertebral disc increased by 45%, and Lee and Langrana [15] found in a biomechanical study of lumbosacral fusion that the load at the adjacent segments was raised. Regarding the influence of gender, Kumar et al. [16] and Ha et al. [17] showed no significant difference in the rate of degenerative change by gender, while Etebar and Cahill [18] maintained that the rate of change at the adjacent segments was higher in females after menopause. This study did not reveal any significant difference by gender.
Many researchers believe that older age leads to more change at the adjacent segments [10,18-21]. As possible reasons, Aota et al. [20] pointed out that it is more difficult for the spine to adapt to postoperative biomechanical change in the elderly, whereas Etebar and Cahill [18] reported that osteoporosis aggravates the existing degenerative process. However, we found no significant correlation between age and degenerative change at the adjacent segments. As for the effect of lifestyle, Gillet [22] and Cho et al. [23] reported that differences in lifestyle could influence the adjacent segments, and Ahn et al. [24] reported that manual workers and residents of rural areas had around 47 times higher risk compared with those in urban areas. However, no significant correlation with residential area was observed in this study.
When the cases with preoperative instability or intervertebral disc degeneration at the adjacent segments were reviewed, Aota et al. [20] reported that instability deteriorated after the surgery in all cases with preoperative anterior displacement of ≥3 mm. Ha et al. [21] showed that preoperatively, more severe degenerative change in the adjacent joints was associated with more radiological change in the joints, and no degenerative change was observed for more than 5 years of follow-up in cases without preoperative degenerative change in the joints. However, this study did not show any direct correlation between preoperative adjacent disc degeneration on MRI and postoperative degenerative change at the adjacent segments.
Schlegel et al. [13] revealed that change at the adjacent segments occurred earlier when a device was used for fixation, and Rahm and Hall [2] maintained that posterior interbody fusion increases the load on adjacent segments because the stronger initial fixation eliminates the residual mobility after bone fusion. However, Kim et al. [25] and Ha et al. [21] found no significant difference in the frequency of degenerative change by fixation system or fusion method, and this study likewise showed no significant correlation between fusion method and degenerative change. Moreover, there was no significant difference in degenerative change by the number of fusion segments. Although Aota et al. [20] and Etebar and Cahill [18] maintained that changes at the adjacent segments become larger with multi-level fusion owing to greater stress on the segments, Kettler et al. [26] and Ha et al. [17] reported no correlation between the number of fusion segments and the change; this study also indicated no significant correlation.
Regarding change at the adjacent segments in relation to the sagittal angle, a reduced lordotic angle has been reported to promote early degenerative change by concentrating the load of segmental motion at the adjacent segments [27,28]. Herkowitz and Kurz [29], Cho et al. [23] and Ahn et al. [24] stated that it is critical to maintain the lumbar lordotic angle during follow-up after fusion, and that a decreased lordotic angle eventually stimulates degenerative change at the adjacent segments. For the segmental sagittal angle, Ahn et al. [19] found that a 10° decrease in the fusion segment lordotic angle was associated with a 3.2-fold increase in degenerative change. In this study, degenerative change was observed more frequently in cases where the fusion segment lordotic angle per level was <15° after surgery.
The incidence of degenerative change at the adjacent segments after lumbar fusion has been reported variously as between 19.4% and 40% [3,20,21,25]. When it was investigated whether the degenerative change provoked clinical symptoms, Booth et al. [6] reported that radiological degenerative change was found in many cases more than 5 years after lumbar fusion, yet almost no cases were symptomatic; other previous studies also revealed no correlation between radiological change and clinical symptoms [2,18,23,30]. In the present study, unsatisfactory outcomes were observed in only three of the 21 cases with degenerative change at the adjacent segments, and the correlation between radiological change and surgical outcomes was not statistically significant. However, it has been reported that when clinical symptoms recur along with degenerative change at the adjacent segments, surgical treatment is necessary in 8% to 16.8% of cases [14,20,25]. This study also revealed that three (14.3%) of the 21 cases needed surgical treatment.
Conclusions
During long-term follow-up after pedicle screw fixation and fusion, gender, age, residential area, fusion method, the number of fusion segments, and the degree of preoperative adjacent disc degeneration on MRI showed no significant relationship with postoperative degenerative change at the adjacent segments; however, the correlation between the fusion segment lordotic angle per level and postoperative degenerative change was significant. Therefore, efforts to restore the fusion segment lordotic angle per level to ≥15° appear most important and may reduce degenerative change at the adjacent segments.
Conflict of Interest
No potential conflict of interest relevant to this article was reported.
Cornual pregnancy-an unusual site of pregnancy: a case report and literature review
Ectopic pregnancy is a condition in which the gestational sac is located outside the uterine cavity. Cornual pregnancy, also known as interstitial pregnancy, is a rare type of ectopic pregnancy that develops in the interstitial portion of the fallopian tube and invades through the uterine wall. It poses a great diagnostic challenge because of its unusual presentation and late diagnosis. If not diagnosed early, cornual pregnancy may present with massive, uncontrollable bleeding, even leading to maternal death. We hereby report an unusual presentation of cornual pregnancy that was diagnosed and subsequently managed successfully.
Introduction
Implantation of the fertilized ovum at sites other than the normal uterine cavity is called ectopic pregnancy, the commonest site being the fallopian tube. The fallopian tube has different parts, which vary in diameter and muscular strength. The commonest site of ectopic pregnancy is the ampullary region of the fallopian tube, as this is the most distensible part, where fertilization of ovum and sperm takes place. The interstitial part of the fallopian tube, measuring 1.2 cm in length and 0.7 cm in width, is situated within the musculature of the uterine wall. Pregnancies implanted at this site are called interstitial (cornual) pregnancies. 1 Cornual pregnancy is a rare variety of ectopic pregnancy, seen in about 2-4% of ectopic pregnancies. 2 Because of myometrial distensibility, these pregnancies tend to present relatively late, at 7-12 weeks of gestation. Rupture of a cornual pregnancy may result in severe hemorrhage and shock, with a high mortality rate of 2-2.5%. 2 According to the Confidential Enquiry into Maternal and Child Health (CEMACH) report published by the Royal College of Obstetricians and Gynaecologists (RCOG) Press, London, United Kingdom in 2002, there were 11 deaths from ruptured ectopic pregnancy, of which 7 were located in the extrauterine portion of the tube and 4 in the interstitial portion (cornual pregnancy). 3 Even though cornual pregnancies are rare, they pose a significant diagnostic and therapeutic challenge and carry a greater maternal mortality risk than pregnancy in other parts of the fallopian tube. Here we present a case report of a patient with cornual pregnancy who was admitted to BIRDEM 2 Hospital with 13+ weeks of amenorrhea, in whom the investigations of the early weeks had missed the diagnosis.
Case report
A 27-year-old, para 1+0, non-diabetic, normotensive housewife presented with severe per vaginal bleeding for 6 hours following 13+ weeks of amenorrhea. She had had a previous uneventful pregnancy 4 years earlier, and her child was born by Cesarean section without any complications.
History revealed that at 6 weeks of amenorrhea she had undergone transvaginal sonography (TVS), which reported a missed abortion. For that she underwent dilatation and curettage in a local hospital, but there was no improvement. After one week, ultrasonography was repeated, and she was diagnosed with an ectopic pregnancy and received a single dose of methotrexate (50 mg) intramuscularly (IM) as medical management of ectopic pregnancy. She was advised to undergo laparotomy as her symptoms persisted, but she refused and went abroad for further management. There she received another three doses of IM methotrexate, but her per vaginal bleeding continued. She was then admitted to BIRDEM 2 Hospital for further management.
On admission, she was severely anemic. Vital signs were within normal physiological limits. Abdominal examination revealed no significant findings. On local vaginal examination, there was moderate per vaginal bleeding, with blood coming from the uterine cavity.
On investigation, her haemoglobin was 7.7 g/dl on admission. Other hematological investigations were well within normal limits. Routine examination of urine showed plenty of red cells. Serum β-hCG was 14.8 mIU/ml. Ultrasonography of the lower abdomen and pelvis showed a bulky uterus with a right-sided cornual pregnancy (Figure 1).
Figure 1 Ultrasonography showing cornual pregnancy
After counseling of the patient and her attendant, a decision for hysteroscopy followed by laparoscopy was taken. The patient was also counseled for laparotomy if necessary.
At hysteroscopy, a good view of the endometrial cavity was obtained. Both tubal ostia were seen; the right ostium was narrow but the left ostium was normal.
Laparoscopy was then performed. The pelvic cavity was free from adhesions. The uterus was asymmetrically enlarged with a highly vascular bulge on the right cornu, which appeared to be a cornual pregnancy.
Owing to the high vascularity, the decision for laparotomy was taken. Both tubes and ovaries looked apparently healthy. After preliminary infiltration with diluted vasopressin solution, an incision was made over the bulge, and the degenerated gestational sac, placental tissue and organized blood clots were removed (Figure 2). The uterus was then closed in layers.
Microscopic examination of the removed tissue showed necrotic tissue, blood clots, and both degenerated and well-preserved chorionic villi lined by trophoblastic cells. Postoperatively the patient was stable and was discharged on the fourth postoperative day without any complication.
Discussion
Cornual ectopic pregnancy is a rare type of tubal ectopic with a high risk of rupture and hemorrhage compared with other types of ectopic pregnancy. 4 It occurs within the interstitial portion of the fallopian tube and therefore has the potential to grow to larger sizes than other tubal pregnancies by the time it presents. The risk of cornual pregnancy is increased in patients with a history of pelvic inflammatory disease, tubal surgery, or conception after tubal ligation. Rare associations may be seen with a history of salpingectomy or salpingostomy. It can also occur after assisted reproductive techniques, especially if there is difficulty during the embryo transfer procedure. 5

Clinical presentation depends on whether the pregnancy sac has ruptured. Unruptured cases may present with a history of repeated abdominal pain at intervals of a few days and per vaginal bleeding. Ruptured cases usually present with severe abdominal pain and features of shock. Cornual pregnancies are associated with the highest risk of massive, uncontrollable bleeding, which can happen at any time in early pregnancy, leading to a relatively high mortality rate. 6 Some authors have reported rupture of an interstitial pregnancy followed by formation of a hematoma in the broad ligament. 7

Ultrasonography is the most commonly used diagnostic tool, with 80% sensitivity and 98% specificity. The diagnostic criteria include: a) absence of a gestational sac in the uterine cavity; b) a gestational sac seen independently and less than 1 cm from the lateral edge of the uterine cavity; c) a thin layer of myometrium around the gestational sac; and d) the interstitial line sign (an echogenic line extending to the gestational sac). During the early days of a cornual gestation, the sac is located in the lateral part of the uterus; later, the gestational sac shifts above the uterine fundus. Thus, a cornual pregnancy detected late may appear as an eccentric uterine pregnancy. 8,9 Transvaginal sonography (TVS) is much better for diagnosing cornual pregnancy than transabdominal sonography, and one study has shown that early diagnosis of cornual pregnancy with TVS allows first-trimester conservative management with methotrexate. 10 Magnetic resonance imaging (MRI) may also be an important tool to reveal the eccentric location of the gestational sac relative to the junctional zone, but it is more expensive. 11

Early recognition of cornual pregnancy is essential for medical management. It can be used if there is hemodynamic stability, no medical contraindication to methotrexate, and no sign of rupture. The modes of administration of methotrexate (dose: 1 mg/kg) include systemic, laparoscopic, transvaginal, and ultrasound-guided. According to RCOG guidelines, medical treatment should be given only when β-hCG levels are less than 3000 IU/L and symptoms are minimal. The systemic route of administration is a safe and highly effective treatment and offers advantages over local injection in that it is less invasive and not operator dependent. On the other hand, the advantage of local administration of methotrexate is its favorable side-effect profile and low dosage. Medical management with methotrexate may not always be successful, and if treatment fails, surgical intervention may be required. Other non-surgical methods include selective uterine artery embolization, associated with or after methotrexate failure, which can be used successfully in treating selected cases of early cornual pregnancy. 12
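The sensitivity and specificity quoted above translate into predictive values only once a pretest probability is fixed. The sketch below applies Bayes' rule, taking the 2-4% share of cornual pregnancies among ectopics from the Introduction as an illustrative pretest probability; all numbers are illustrative:

```python
def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Positive and negative predictive values from Bayes' rule."""
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    fn = (1 - sensitivity) * prevalence
    tn = specificity * (1 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)

ppv, npv = predictive_values(0.80, 0.98, 0.03)  # 3% pretest probability (assumed)
print(f"PPV={ppv:.2f}, NPV={npv:.3f}")  # roughly PPV 0.55, NPV 0.994
```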
Traditionally, the treatment of cornual pregnancy has been surgical, including cornual resection, cornuostomy, and hysterectomy. Laparotomy is necessary in patients with a ruptured cornual pregnancy with profuse, life-threatening haemorrhage. 13 In other cases, laparoscopic cornual resection, laparoscopic cornuostomy, or hysteroscopic removal of the interstitial ectopic tissue can be performed. 14 A conservative surgical approach may, however, lead to catastrophic hemorrhage, so ipsilateral uterine artery ligation can be performed before attempting to repair a ruptured uterine cornu; this helps achieve hemostasis and allows time to repair the cornu. 15 The size of the cornual gestation determines the feasibility of the laparoscopic approach: laparoscopic salpingostomy may be appropriate for gestations less than 3.5 cm, 1 and cornual resection may be preferred for gestations of more than 4 cm. 16 A laparoscopic approach should only be attempted if the surgeon is skilled in laparoscopic techniques and able to convert the operation quickly to a laparotomy. 16 Hysteroscopic treatment is used in cases of non-compliance with methotrexate or lack of response to it; with this technique, the cornual endometrium (including the tubal ostium) is removed and the pregnancy sac extracted under laparoscopic guidance. 17 In contrast with other types of ectopic pregnancy, expectant management of cornual pregnancy is considered unsafe because of the high risk of complications, which include uterine rupture and massive internal bleeding.
Following successful management of cornual pregnancy, there are some concerns regarding future pregnancy. Surgical treatment involving resection of the involved cornual region is associated with decreased fertility rates and increased rates of uterine rupture in future pregnancies. 14 It is therefore widely agreed that caesarean section should be the optimum mode of delivery for all pregnancies following a cornual pregnancy. 13 A second concern after conservative management is recurrence of ectopic pregnancy, particularly cornual pregnancy on the same side. Appropriate counseling regarding future pregnancy risks and the optimum mode of delivery should therefore be given before discharge from the hospital.
Conclusion
Cornual pregnancy poses a significant diagnostic and therapeutic challenge and carries a greater maternal mortality risk than other types of tubal pregnancy. The purpose of this paper is therefore to increase awareness and understanding of the seriousness of cornual pregnancies and to advocate early clinical diagnosis aided by ultrasound or laparoscopy.
"year": 2019,
"sha1": "72c61653aafbdf5dbb7b3faf4aafefe101a2c782",
"oa_license": null,
"oa_url": "https://www.banglajol.info/index.php/BIRDEM/article/download/44764/32733",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "395e999e617797a5b4beb789fdc6f1ef534afdf7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Cognitive Impairment in Non-Cirrhotic Portal Hypertension: Highlights on Physiopathology, Diagnosis and Management
Hepatic encephalopathy (HE) is one of the most frequent complications of cirrhosis. Several studies and case reports have shown that cognitive impairment may also be a tangible complication of portal hypertension secondary to chronic portal vein thrombosis and to porto-sinusoidal vascular disease (PSVD). In these conditions, which represent the main causes of non-cirrhotic portal hypertension (NCPH) in the Western world, both overt and minimal/covert HE occur in a non-negligible proportion of patients, albeit a lower one than in cirrhosis, and are mainly sustained by the presence of large porto-systemic shunts. In these patients, liver function is usually preserved or only mildly altered, and porto-systemic shunts, either spontaneous or iatrogenic, develop frequently; HE in this setting is an example of type B HE. To date, in the absence of strong evidence and large cooperative studies, the same approach used for HE occurring in cirrhosis is applied to the diagnosis and management of HE in NCPH. The aim of this paper is to provide an overview of type B hepatic encephalopathy, focusing on its pathophysiology, diagnostic tools and management in patients affected by porto-sinusoidal vascular disease and chronic portal vein thrombosis.
Criteria for the Literature's Selection
Clinical studies that assessed the prevalence and incidence of any type of hepatic encephalopathy (HE) in patients affected by chronic portal vein thrombosis and portosinusoidal vascular disease (PSVD) were included. Studies that evaluated diagnostic tools for the detection of cognitive impairment in this population or that evaluated the efficacy of treatment strategies were included too. No language, publication date, or publication status restrictions were imposed. The studies were identified by searching electronic databases (PubMed and SCOPUS). The last search was run on 28 October 2021. Reference lists of all studies included in the present review were screened for potential additional eligible studies.
One investigator (SG) searched the electronic databases, combining the following keywords: (hepatic encephalopathy AND non-cirrhotic portal hypertension), (hepatic encephalopathy AND porto-sinusoidal vascular disease), (hepatic encephalopathy and portal vein thrombosis), (type B AND hepatic encephalopathy), (hepatic encephalopathy AND idiopathic non-cirrhotic portal hypertension), (hepatic encephalopathy AND nodular regenerative hyperplasia), (cognitive impairment AND non-cirrhotic portal hypertension). Studies were excluded if the title and/or abstract showed that the articles did not meet the selection criteria of our review. For potentially eligible studies, or if the relevance of an article could not be excluded with certitude, we procured the full text. We defined the following exclusion criteria: (1) studies in which HE developed in patients with cirrhosis; (2) studies unrelated to our topic; and (3) studies in which HE developed in patients with a kind of non-cirrhotic portal hypertension other than portal vein thrombosis (PVT) and PSVD. A total of nineteen papers were finally analyzed.
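The screening workflow described above can be mirrored programmatically when records are exported from the two databases. The sketch below is a hypothetical illustration of the deduplication and exclusion steps; the record fields, titles, and keyword filter are invented for the example and are not the authors' actual procedure:

```python
records = [
    {"title": "HE in porto-sinusoidal vascular disease", "db": "PubMed"},
    {"title": "HE in porto-sinusoidal vascular disease", "db": "SCOPUS"},  # duplicate
    {"title": "HE in cirrhosis after TIPS", "db": "PubMed"},               # excluded: cirrhosis
]

# Step 1: deduplicate across databases by normalized title
seen, unique = set(), []
for r in records:
    key = r["title"].lower().strip()
    if key not in seen:
        seen.add(key)
        unique.append(r)

# Step 2: apply exclusion criterion (1): HE developing in patients with cirrhosis
screened = [r for r in unique if "cirrhosis" not in r["title"].lower()]
print(len(records), len(unique), len(screened))  # 3 2 1
```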
Definition
Hepatic encephalopathy is a frequent complication and one of the most debilitating manifestations of liver disease, having a relevant impact on the quality of life of the patients and their caregivers [1]. It represents a brain dysfunction caused by liver insufficiency and/or porto-systemic shunting and is characterized by a wide spectrum of neurological or psychiatric abnormalities ranging from subclinical alterations to coma. According to the underlying disease, HE can be divided into: type A, due to acute liver failure, type B, secondary to porto-systemic bypass or shunting, and type C, resulting from cirrhosis [2].
Historical Point and Pathophysiology
The pathogenesis of hepatic encephalopathy is still much debated and not completely understood. It is a multifactorial and complex syndrome in which there is an imbalance between the production, metabolism and regulation of several neurotoxins and neurotransmitters [3,4] as a result of inter-organ trafficking. According to the most accredited hypothesis, whose origins date back to 1954 and the studies conducted by Sherlock et al. [5], substances of a predominantly, but not exclusively, nitrogenous nature (ammonium, glutamine, methionine, mercaptans, phenol, indole, serotonin, GABA, etc.) reach the central nervous system, causing the spectrum of symptoms typical of HE. Many studies recognize ammonium as the key pathogenetic element responsible for astrocyte swelling [6,7]. In the cirrhotic patient, this process is made possible both by the inability of the liver to catabolize these substances and by the fact that portal hypertension acts as a stimulus for the creation of porto-systemic venous anastomoses. These anastomoses allow blood coming from the gut to bypass the liver and reach the brain through the systemic circulation, where the toxic substances cause an alteration of neurotransmission. The role of porto-systemic bypasses in the development of type C HE is demonstrated by several lines of evidence: their presence in 46% to 71% of patients with recurrent or persistent HE [8,9]; the disappearance, or at least the reduction in number, of HE episodes in these patients after embolization of the shunt [9,10]; and, finally, the development of HE in 25-45% of patients undergoing transjugular intrahepatic portosystemic shunt (TIPS), with evident improvement after revision of the stent [11]. Similar figures are also reached after surgical porto-systemic anastomoses.
In patients with type B HE, by definition, the liver functions normally, so the presence of porto-systemic shunts would seem to be the main pathogenetic factor. Several animal models of type B HE have been used, particularly in rats, cats, and dogs, and less frequently in rabbits, in which HE was based on the presence of porto-systemic shunting. Moreover, while the presence of portosystemic shunts in humans is a rather rare vascular anomaly, in dogs it is much more frequent. A 2003 multicenter study [12] found congenital shunts in 0.18-3.2% of all dogs evaluated, with higher or lower values depending on the breed. Dogs affected by such shunts show clinical symptoms similar to those of human HE. Finally, other studies showed that rats treated with portacaval anastomosis were sensitive to ammonia administration, which leads to severe encephalopathy [13,14]. The study of these animal models has helped to better understand the pathogenesis of type B HE in humans as well [15,16].
Most of the works on type B HE conducted in humans come from the Eastern world and derive from studies on patients with congenital porto-systemic shunts, consequently not related to portal hypertension. Hepatic encephalopathy linked to this type of congenital vascular anomaly was first described by Raskin et al. [17] in 1964, who published the clinical case of a patient with HE associated with a large spontaneous intrahepatic shunt. In the following years, the interest of the scientific community in this clinical condition increased, above all owing to the development of non-invasive imaging techniques, through which increasingly accurate images of the portal venous system could be obtained (Doppler ultrasound, contrast-enhanced CT and MRI). The Japanese Society for the Study of Liver Disease is the organization most interested in shunt-related HE. Based on the results of a national survey, in 2000, Watanabe published a review specifically focused on the subject [18]. This investigation found that many patients with shunt-related HE had wrongly been diagnosed as affected by dementia, psychiatric or neurological disorders, or even cirrhosis or acute liver failure, and had therefore been submitted to prolonged hospitalizations and inappropriate medical interventions. Hence the importance, according to Watanabe, of searching for spontaneous porto-systemic shunts in all patients with typical symptoms and signs of HE, even in the absence of altered liver function. Watanabe identified both congenital vascular anomalies (patent ductus venosus, absence of the portal vein, arteriovenous malformations, rupture of intrahepatic portal varices, Rendu-Osler-Weber disease, etc.) and acquired ones (after abdominal surgery, trauma, liver biopsy, etc.) as causes of the formation of these collateral vessels. However, since in most cases it was not possible to find a specific cause, the author hypothesized that these shunts were due to portal hypertension, which disappeared after the development of the aforementioned anastomoses owing to their decompressive action.
Patients affected by non-cirrhotic portal hypertension (NCPH) theoretically represent an ideal model in which to study type B HE, as they maintain preserved liver function for a long time, but have portal hypertension, which is an important stimulus to shunt formation (spontaneous acquired porto-systemic). Therefore, in this review, we focused on the prevalence, the diagnosis and management of HE occurring in patients affected by idiopathic non-cirrhotic portal hypertension (INCPH), recently named as porto-sinusoidal vascular disease (PSVD), and chronic portal vein thrombosis (PVT), which represent the most frequent vascular liver diseases causing NCPH in the Western world [19][20][21][22][23][24]. In the presence of portal hypertension, the porto-systemic shunts develop both passively, following the reopening of collapsed embryonic vessels and the inversion of flow in pre-existing vessels (in fact, physiologically there are numerous portosystemic anastomoses), and actively thanks to an increase in VEGF levels [25,26].
Moreover, Das et al. showed that patients with PSVD had cerebral alterations typically observed in cirrhotic patients [27]. In greater detail, they confirmed that the majority of cirrhotics have a hyperintense globus pallidus on T1-weighted MRI images, and they showed that none of the patients with PVT, but more than half of the patients with PSVD, had similar radiological findings. A diagnosis of PSVD or cirrhosis was the only independent predictor of these findings. This cerebral alteration has been attributed to the effects of manganese, which is deposited in excess in the brains of cirrhotics [28-32]. Manganese is normally cleared mainly by the liver and excreted in bile [33], and its deposition in cirrhotics is probably due to lower biliary clearance secondary to hepatocellular damage and to porto-systemic bypass. In cirrhosis, both mechanisms are involved, making it hard to define which is responsible for these alterations. The authors speculate that, like cirrhotics but unlike patients with PVT, patients with PSVD have increased fasting arterial ammonia and an abnormal ammonia tolerance test [34], making them prone to developing HE under appropriate stress. Finally, the finding that the studied cerebral changes were not observed in patients with PVT is in contrast with previous results [35].
Prevalence of HE
The literature on hepatic encephalopathy in patients with portal hypertension due to portal vein thrombosis is mostly based on studies conducted in the Eastern world [36][37][38][39], where this clinical condition is more frequent, or in pediatric populations [40][41][42]. The main results of these studies are summarized in Table 1. Sharma et al. [36] showed that the prevalence of minimal hepatic encephalopathy (MHE) assessed by psychometric tests and critical flicker frequency (CFF) was 35.5% in patients with chronic extrahepatic portal vein obstruction (EHPVO). The same group demonstrated [37], in a cohort of 32 patients with EHPVO followed up for 1 year, that 12 patients were affected by MHE at baseline, that 75% of them continued to have MHE at follow-up, and that one of the patients without MHE developed it later. In the short time of follow-up, none of the patients developed overt HE.
The presence of MHE in these patients was strongly associated with higher levels of ammonia, pro-inflammatory cytokines, and brain glutamine [43-45].
Evaluating the prevalence of hepatic encephalopathy (minimal and overt) in 51 patients affected by NCPH in comparison with a control group of cirrhotic patients [44], Nicoletti et al. showed that cognitive impairment, although less frequent than in cirrhotic patients, was detectable in a relevant proportion of patients with non-cirrhotic portal hypertension, with no difference between patients with chronic portal vein thrombosis and those with PSVD. The presence of a large porto-systemic shunt (spontaneous or iatrogenic) was considered the main risk factor for HE in these patients, as it was identified in 71% of the patients with cognitive impairment. Another study showed that in patients affected by NCPH the incidence of OHE was similar to, while the prevalence of MHE was lower than, that of cirrhotic patients. The authors confirmed that, together with upper gastrointestinal bleeding and infection, a portosystemic shunt was an independent factor for HE [46].
Additionally, post-TIPS HE is a not infrequent complication of portal hypertension due to PSVD [47]. In a European cohort of 41 patients affected by PSVD and submitted to TIPS for the treatment of portal hypertension-related complications [45], HE was an in-hospital complication in two patients, while at long-term follow-up overt HE occurred in 31% of the patients, and the one-year rate of overt HE was 24%. In two patients, HE was severe enough to require shunt reduction [45].
Diagnosis
The tests and the methods used to diagnose hepatic encephalopathy in patients with NCPH are the same currently used for the diagnosis of HE in cirrhotic patients.
Diagnosis of Overt Hepatic Encephalopathy
The diagnosis of overt HE is mainly based on clinical examination. Several scales are used to stage the severity of the encephalopathy; the most widely applied is the West Haven scale, which still represents the gold standard. The diagnosis of cognitive dysfunction is not difficult; less easy is its attribution to overt HE, which is why the exclusion of other causes of mental alteration by laboratory and radiological assessment is often required [2,50].
Diagnosis of Covert Hepatic Encephalopathy
For the diagnosis of minimal/covert HE, the same tools used for cirrhotic patients are used in NCPH patients, and they include paper-pencil tests (the psychometric hepatic encephalopathy score-PHES), computerized tests such as continuous reaction time, the inhibitory control test, the SCAN test and the Stroop test, or neurophysiological tests including the CFF and EEG. Clinicians may use tests for the diagnosis of MHE with which they are familiar that have been validated for use in this patient population [2,[50][51][52][53].
In the studies exploring the prevalence of minimal HE in patients affected by EHPVO, especially in children, the most used tools for the assessment of HE were psychometric tests and CFF. In the study by Yadav, the superiority of psychometric tests in comparison to CFF was demonstrated. The same observation resulted from a study by Srivastava [42]. In a recent study by Suresh et al., the diagnostic accuracy of the computerized Stroop test for the assessment of MHE in an Indian pediatric cohort of patients with EHPVO was investigated in comparison to other validated tests. The authors observed that the Stroop test can be useful to detect MHE in children and identify a subgroup of patients to be submitted to psychometric tests in clinical care. Nicoletti et al., as previously reported, used two categories of tests to evaluate the presence of minimal/covert HE [2]: the PHES and the Scan battery. The accuracy of the Scan battery and of PHES in the detection of MHE was similar, but some discordance was observed, suggesting that the two tests have different levels of difficulty [54] (the scan test is more complex than PHES), and that they explore different domains of cognitive function.
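Composite batteries such as PHES are typically scored by standardizing each subtest against population norms and summing the oriented z-scores. The sketch below shows this composite logic generically; the subtest names, norms, and cut-off are illustrative assumptions and do not reproduce the validated, age- and education-adjusted PHES norms:

```python
def z_score(value: float, mean: float, sd: float, higher_is_worse: bool) -> float:
    """Standardize a subtest result; orient so that negative values indicate impairment."""
    z = (value - mean) / sd
    return -z if higher_is_worse else z

# Hypothetical raw scores with (mean, sd) norms for two subtests
nct_a = z_score(45.0, 38.0, 10.0, higher_is_worse=True)   # number connection test A, seconds
dst = z_score(42.0, 50.0, 8.0, higher_is_worse=False)     # digit symbol test, items completed

composite = nct_a + dst   # real batteries sum several subtests into one composite score
print(f"composite={composite:.2f}:", "suggestive of MHE" if composite <= -2 else "within range")
```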
Treatment
To date, the treatment of overt hepatic encephalopathy occurring in patients affected by NCPH is the same as that of HE in cirrhotic patients.
The initial management includes the prompt start of care of hospitalized patients with HE, including the identification and treatment of co-existing causes, the identification and correction of precipitating factors, and the start of empirical treatment targeted at reducing ammonia levels. The cornerstones of medical treatment of overt HE are non-absorbable disaccharides, such as lactulose, and antibiotics, such as rifaximin: lactulose is recommended for the prevention of recurrent episodes of HE after the initial episode, and rifaximin as an add-on to lactulose is recommended for the prevention of recurrent episodes after the second episode. Alternative therapies, such as oral branched-chain amino acids, intravenous L-ornithine L-aspartate, and probiotics, have been studied and used in cirrhotic patients, but no data in patients with NCPH have been provided [2,53]. Whether applying the same therapeutic strategies used in cirrhotic patients to patients with NCPH is correct is unknown. Although HE here is a less frequent complication of a less frequent disease, more cooperative studies are needed to identify the best approach to treating hepatic encephalopathy in patients with NCPH.
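The stepwise prophylaxis recommendation above can be read as a simple rule keyed to the number of prior overt episodes. The following sketch is a schematic encoding of that reading, not a clinical decision tool:

```python
def secondary_prophylaxis(prior_overt_episodes: int) -> list[str]:
    """Schematic: lactulose after the first episode; add rifaximin after the second."""
    if prior_overt_episodes <= 0:
        return []
    if prior_overt_episodes == 1:
        return ["lactulose"]
    return ["lactulose", "rifaximin (add-on)"]

for n in range(3):
    print(n, secondary_prophylaxis(n))
```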
As hepatic encephalopathy occurring in patients affected by portal vein thrombosis or idiopathic non-cirrhotic portal hypertension is a type B HE, mainly sustained by the presence of large porto-systemic shunts, radiological occlusion of the shunt may represent a fundamental approach in patients with HE persisting despite adequate medical treatment [43,55]. Radiological techniques such as plug-assisted retrograde transvenous obliteration (PARTO) or coil-assisted retrograde transvenous obliteration (CARTO) are currently used to treat recurrent or persistent HE [56-58], as well as the gastric varices often present in these patients (Figure 1). Finally, in patients with persistent post-TIPS HE, reduction of the caliber of the stent or its occlusion must be considered.
Finally, as in cirrhosis, the necessity to treat MHE is still debated. Despite its clinical implications (impairment, poor quality of life, etc.), guidelines state that the treatment of minimal/covert HE in cirrhotic patients is to be evaluated on a case-by-case basis [53,59,60].
However, some studies observed an improvement in psychometric tests in the majority of EHPVO pediatric patients with MHE after therapy with lactulose, and found that the treatment was well tolerated [61]. These results confirm a previous study by Sharma in which lactulose seemed effective in the treatment of minimal hepatic encephalopathy in patients with portal vein thrombosis, and in which patients with cognitive impairment and porto-systemic shunts had a better response to lactulose than patients without any collaterals [38]. In the patients who responded to lactulose, blood ammonia levels decreased significantly, while in non-responders they did not.
Conclusions and Future Directions
Type B HE can be considered a complex and multidimensional cognitive deficit that is not infrequently found in patients with NCPH and that shares substantial physiopathological bases with type C HE. Both the presence of shunts per se and the neurotoxic effect of toxins of intestinal origin play a fundamental role in determining and sustaining alterations in mental status, even in the absence of hepatocellular damage. Moreover, a multidisciplinary approach is often needed for the best management of these patients. In fact, patients affected by non-cirrhotic portal hypertension may develop all the sequelae of portal hypertension, such as portal hypertensive bleeding or refractory ascites, often requiring TIPS placement; after this procedure, however, it is not rare for HE episodes to develop that respond poorly to medical treatment and may require stent revision with reduction of its caliber or occlusion.
Finally, a thorough understanding of the impact of HE in NCPH patients must also take into account the need for reliable epidemiological data. Further studies will therefore be needed to establish the exact prevalence and incidence of cognitive impairment in these patients. Only a thorough knowledge of all the facets of the problem will allow us to promptly identify patients at risk, study the cognitive deficit extensively and undertake appropriate therapies.
Data Availability Statement: Data sharing is not applicable to this article as no new data were created or analyzed in this study.
"year": 2021,
"sha1": "ce81f01b194ecab8e7f4db0e86f7d52916469bc8",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/11/1/101/pdf",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "ec744187e0da06e975556c36e530031e6661dec2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Towards the structure of a cubic interaction vertex for massless integer higher spin fields
The structure of a cubic Lagrangian vertex is clarified for irreducible fields of helicities $s_1, s_2, s_3$ in a $d$-dimensional Minkowski space. An explicit form of the operator $\mathcal{Z}_j$ entering the vertex in a non-multiplicative way (examined in arXiv:2105.12030[hep-th] for $j=1$) is obtained. The solution is found within the BRST approach with complete BRST operators, which contain all constraints corresponding to the conditions that extract the irreducible fields, including trace operators.
Introduction
The theory of interacting higher-spin fields has become one of the topical areas of theoretical and mathematical high-energy physics (for a review, we can recommend, e.g., [1], [2], [3], [4], [5]). It is anticipated that interacting higher-spin fields will open up new opportunities in the search for elementary particles beyond the Standard Model, and will also contribute to the emergence of pioneering approaches to the unification of fundamental interactions.
In our recent paper [6], a general Lagrangian cubic vertex was obtained for unconstrained interacting fields with integer helicities in Minkowski spaces (see [7-15] for the study of cubic vertices in different approaches). In contrast to previously known results on cubic vertices, the study of [6] does not impose on the interacting fields any algebraic relations that do not follow from the least action principle. The vertex is derived from a BRST-closed solution of an operator equation arising from the condition that the gauge invariance of the deformed free action be preserved with respect to the deformed gauge transformations, which, in their turn, follow from the application of the unconstrained BRST approach (developed, for example, in [16-19,3]; for the equivalence of the constrained [13] and unconstrained BRST approaches, see [20]) to the Lagrangian description of higher-spin free field models in Minkowski and anti-de Sitter spaces. The vertex found corresponds to the cubic vertex of [11], deduced using the light-cone formalism in terms of physical degrees of freedom, and preserves the irreducibility of the representation for the interacting fields, in particular the number of physical degrees of freedom, under the deformation of the free Lagrangian formulation.
The vertex $|V^{(3)}_{(s)_3}\rangle$ found in [6] (see (24), (25) for the definition) contains operator quantities, including trace operators $U^{(s_i)}_{k_i}$ entering multiplicatively and corresponding to the spin values $s_i$, $i = 1, 2, 3$, as well as the operators $\mathcal{Z}_j$, characterized simultaneously by the three spins $s_1, s_2, s_3$. An expression for the operator $\mathcal{Z}_1$ was found in [6]. The present article aims to find an explicit representation for the operators $\mathcal{Z}_j$ with $j = 2, 3, \ldots$, which enter the vertex non-multiplicatively.
The paper has the following organization. In Section 1, the results of the BRST construction involving a complete BRST operator are presented as applied to deriving a cubic vertex for unconstrained fields of integer helicities, s 1 , s 2 , s 3 . In Section 2, we obtain the operators Z j for j > 1. Conclusion summarizes the results.
BRST approach to a cubic interaction vertex
A Lagrangian formulation for a cubic vertex within the BRST approach to interacting real-valued totally symmetric massless fields $\phi^{(i)}_{\mu_1\ldots\mu_{s_i}}(x)$, $i = 1, 2, 3$, with integer helicities $s_1, s_2, s_3$ in a $d$-dimensional Minkowski space determines a gauge theory of first-stage reducibility in a configuration space $\mathcal{M}^{(s)_3}_{cl}$ [6]. The action functional is invariant, up to first order in the interaction constant $g$, under non-Abelian gauge transformations with zero-level parameters $\Lambda^{(i)}$, which are themselves invariant, with the same accuracy, under gauge transformations with independent first-level parameters $\Lambda^{(i)1}$, as in (1)-(4).

The quantities $\eta$ are ghost operators generating Hilbert spaces $\mathcal{H}^{(i)}_{gh}$, with ghost-independent vectors $|\phi^{(i)}\rangle$. The operators $Q^{(i)}$ and $K^{(i)}$ in (1)-(4) stand for the BRST operator and the operator defining an inner product in the space $\mathcal{H}^{(i)}$; as elements of the respective $Q^{(i)}$-complexes (see, e.g., [17,18]), they determine a distribution of Grassmann parity and ghost number for the field vectors $|\chi^{(i)}\rangle$. The operators $l^{(i)}_0$ and the basic vectors $|\phi^{(i)}\rangle$ are defined in a Fock space $\mathcal{H}^{(i)}$ generated by bosonic oscillators $a^{(i)+}_\mu$ and by auxiliary bosonic oscillators $b^{(i)+}$.

Each of the BRST operators $Q^{(i)}$, with $(\epsilon, gh)Q^{(i)} = (1, 1)$, is constructed from a corresponding system of constraints and contains anticommuting ghost operators $\eta$, the BRST-extended traceless constraints being given in (12). The constraint operators commute with one another for different values of $i$ and form three isometry subalgebras of the Minkowski space, as well as three $so(1,2)$ subalgebras (13) with independent cross-commutators. The ghost operators satisfy non-vanishing anticommutation relations of the form (14), (15).

All of the above operators act in a Hilbert space with an inner product (17) of vectors depending on all of the oscillators and ghosts. The complete BRST operator $Q_{tot} = \sum_{j=1}^{3} Q^{(j)}$ supercommutes with each of the spin operators $\sigma^{(i)}$; it is nilpotent on the subspace of zero eigenvectors of the spin operators $\sigma^{(i)}$ (16), and is Hermitian together with the operator $K = \otimes_{j=1}^{3} K^{(j)}$ with respect to the inner product (17), as in (18).

The vertex $|V^{(3)}_{(s)_3}\rangle$ has a local representation and is a BRST-closed solution of the equations (21) of [6], with the properties $(\epsilon, gh)|V^{(3)}\rangle = (1, 3)$, as a consequence of the completeness of the inner product as well as of the spin equations (16). Arbitrariness in the solutions of the system (21) is parameterized by adding BRST-exact terms of spin $(s)_3$, with $(\epsilon, gh)|X^{(3)}\rangle = (0, 2)$, which do not alter the equations of motion of the interacting model. The gauge transformations form a closed algebra, with a commutator of two transformations proportional to a gauge transformation with a Grassmann-odd gauge parameter $\Lambda_3$ expressed functionally through $\Lambda_1$ and $\Lambda_2$. It should be noted that the validity of the Jacobi identity for the gauge transformation algebra imposes additional restrictions on the vertex $|V^{(3)}_{(s)_3}\rangle$. Equation (21) determines cubic interaction vertices for irreducible massless totally symmetric higher-spin fields.
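The conditions just described can be collected schematically as follows; this is a compact rendering of the closedness equation (21), the BRST-exact ambiguity, and the spin conditions (16), with normalizations and the explicit oscillator content omitted (see [6] for the precise expressions):
$$
Q_{tot}\,|V^{(3)}_{(s)_3}\rangle = 0, \qquad |V^{(3)}_{(s)_3}\rangle \;\sim\; |V^{(3)}_{(s)_3}\rangle + Q_{tot}\,|X^{(3)}_{(s)_3}\rangle, \qquad \sigma^{(i)}\,|V^{(3)}_{(s)_3}\rangle = 0, \quad i = 1, 2, 3.
$$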
We emphasize that the Lagrangian description without the interaction vertex $|V^{(3)}_{(s)_3}\rangle$ is equivalent to 3 copies of the Fronsdal formulation [22] in terms of totally symmetric double-traceless fields $\phi^{(i)}_{\mu_1\ldots\mu_{s_i}}$.
General solution for a cubic vertex: the form of the operator $\mathcal{Z}_j$
A general solution of the equations (21) for a cubic vertex was obtained in [6] in the form of modified products of special operators homogeneous in powers of the oscillators (taking into account the conservation law for the momentum associated with the vertex) [13], using powers of operators linear, $(L^{(i)})^{k_i}$, and cubic, $\mathcal{Z}_j$, in the oscillators, with a subsequent replacement by their BRST $Q_{tot}$-closed forms.
The quantity Z j в (25) is defined in [6] for j = 1 by the relation k i and also from the fact that the trace-dependent part of the BRST operator η In (33), we take into account (12) that it is only the part h (l) b (l) of the operator L (l) 11 that acts non-trivially on the second term. As we notice, once again, that the structure of the final expression in (33) consists of a Q tot -closed part and a triple supercommutator, independent of the oscilltors carrying the indices l, e, with l = e, except for the "processed" oscillator b (l)+ , we introduce additively the indicated product for each e = l, e = 1, 2, 3, multiplied, respectively, by k e b (e)+ h (e) , thereby increasing the first two terms (32). As a result, under the action of Q tot on a twice-modified quantity, we The structure of the non-vanishing expression in (34) consists, once again, of a Q tot -closed part and the fourth supercommutator, independent of all of the oscillators, except the "processed" ones, b (l)+ , b (e)+ . Subtracting the latter term, constructed as multiplying respectively by k o , from the first three terms in (32) proves the BRST-closeness of the quantity (32).
For j = 2, we repeat the suggested algorithm, starting from Z·Z × Π_{p=1}^3 (L^(p))^{k_p}, and finally obtain (35). For j + 1 ≥ 1, in turn, we obtain (36) by induction. The relations (32), (35), (36) determine the quantities Z_j in the cubic vertex (24), which constitutes the main result of this paper.
Conclusion
In the present article, we have obtained an exact representation for the quantities Z_j, for j ≥ 1, that constitute the non-multiplicative part of a general cubic vertex constructed in [6] for massless completely symmetric fields of arbitrary integer helicities s_1, s_2, s_3 in a d-dimensional Minkowski spacetime.
The construction is implemented in the framework of an unconstrained BRST approach to higher-spin field theory, in which every condition that determines an irreducible massless representation of higher spin is taken into account on an equal footing in the complete BRST operator, as compared to all the previous studies. As a consequence, the cubic Lagrangian vertex operator (24) preserves both the locality and the irreducibility property of a representation for interacting fields of helicities s 1 , s 2 , s 3 .
The inclusion of trace restrictions into the BRST operator has led to a larger content of configuration spaces in Lagrangian formulations for interacting fields of the integer helicities in question (as compared to the constrained BRST approach [13]), which has permitted the appearance of new trace operator components U^{(s_i)}_{j_i} (30) in the cubic vertex. In this regard, the correspondence between the obtained vertex |V^(3)⟩ and the vertex |V^(3)⟩_M of [13] is not unique, due to the fact that the tracelessness conditions for the latter vertex are not satisfied. Secondly, after eliminating the auxiliary fields and gauge parameters by partially fixing the gauge and using the equations of motion, the vertex |V^(3)⟩ will transform to the vertex |V^(3)⟩ of the triplet formulation of [13], so that, up to total derivatives, the relation between these vertices and the irreducible vertex |V^(3)⟩_irrep poses an interesting problem. The suggested approach can be further developed: for irreducible massless half-integer higher-spin fields on a flat background; for massive integer and half-integer higher-spin fields; for higher-spin fields of a mixed index symmetry; and for supersymmetric fields of higher spins, where the vertices must include any degree of traces. One should also mention the problem of constructing the quartic and higher vertices in the BRST approach, as well as the quantization of a model of interacting higher-spin fields, by following the algorithm for constructing a quantum BRST-BV action [23]. All the mentioned problems are awaiting their solution in our forthcoming works. | 2022-05-03T06:47:28.826Z | 2022-05-01T00:00:00.000 | {
"year": 2022,
"sha1": "3c6160a7777860dc7d12110c4b00ee80151e9d58",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "b98e7f35eee839ae605b4d7a5a6244cdbe07fa5f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
54073379 | pes2o/s2orc | v3-fos-license | A survey on impact of border markets on customer satisfaction
Article history: Received April 20, 2012 Received in revised format 19 October 2012 Accepted 20 October 2012 Available online October 21 2012 Border markets in many countries have been considered as the most important bridge for building a financial connection between the inside and outside of a country. In this paper, we present an empirical study to find the impact of border markets on customer satisfaction. The proposed study of this paper considers the effects of nine factors including competitive brand, foreign investment, management of imported goods, governmental supportive rules, monetary policies, supply chain management, buyers, marketing planning and import management on the satisfaction of customers who purchase in border markets. The proposed model of this paper designs and distributes 400 questionnaires among some experts and uses factor analysis and structural equation modeling to test nine hypotheses. The results indicate that there is strong evidence that all nine factors impact customer satisfaction, and that foreign investment has the highest impact on customer satisfaction, followed by supply chain management, marketing planning and import management. © 2013 Growing Science Ltd. All rights reserved.
Introduction
Border markets in many countries have been considered as the most important bridge for building a financial connection between the inside and outside of a country. The idea of developing such markets is to get rid of many existing barriers inside the countries (Hollowell, 1996). This helps the economy grow faster in these regions since people can establish their businesses and build connections with other countries more quickly. Many foreigners are also able to travel to these regions without being subject to the rules and regulations required to obtain visa permission to enter the country. Customer satisfaction in these areas plays an essential role in developing the region. There have been tremendous efforts devoted to studying the effects of different factors on customer satisfaction. Lin (2007) provided a model of customer satisfaction by using a nonlinear fuzzy neural network model for measuring the effects of various factors on customer satisfaction. Lin (2007) reported that the interpersonal-based service encounter is better than the technology-based one in terms of functional quality, while the technology-based service encounter is better than the interpersonal-based one in technical quality. Functional quality had a positive and significant impact on customer satisfaction; service quality also had a positive and significant impact on service value; and service value had a positive and significant influence on customer satisfaction. Lin concluded that the service encounter had a positive and significant impact on relationship involvement and that relationship involvement had a positive and significant influence on customer satisfaction. Esbjerg et al. (2012) developed a conceptual model for investigating customer satisfaction with individual grocery shopping trip experiences within an overall 'disconfirmation of expectations model' of customer satisfaction. They explained that understanding what causes satisfaction/dissatisfaction with individual shopping trips is necessary to describe overall, cumulative satisfaction with a retailer. They also proposed a framework that synthesizes and integrates multiple central concepts from various research streams into a common framework for investigating shopping trip satisfaction. Ueltschy et al. (2009) reported cultural differences, with the Chinese respondents perceiving significantly higher service quality and expressing greater customer satisfaction when performance was high, and expressing less customer satisfaction when performance was low, than the Japanese and Korean respondents. Chi and Gursoy (2009) investigated the relationship between employee and customer satisfaction. They also examined the effect of both on a hospitality firm's financial performance, utilizing the service-profit-chain framework as the theoretical base. They investigated the relationships between customer satisfaction and financial performance, and between employee satisfaction and financial performance. Besides, they investigated the mediating impact of customer satisfaction on the indirect relationship between employee satisfaction and financial performance. Their findings suggested that while customer satisfaction had a positive influence on financial performance, employee satisfaction had no direct effect on financial performance, and there was an indirect relationship between employee satisfaction and financial performance. Slevitch and Oh (2010) explained in their study that the customer satisfaction function has a non-linear nature. Johnson et al.
(2002) applied arguments from the economics, sociology, psychology, and marketing domains to forecast systematic differences in aggregate customer satisfaction across both countries and industries. These predictions were then examined using a database created from three broad-based national satisfaction surveys in Germany, Sweden, and the United States. Based on the results of the survey, Johnson et al. (2002) concluded that, across countries, satisfaction was highest for competitive products and lower for government and public agencies.
The study also provided some support for the use of national indices for making meaningful comparisons of satisfaction on a broad scale. Flint et al. (2011) reported that customer value anticipation can be considered a strong driver of satisfaction and loyalty, with satisfaction acting as a mediator for loyalty. Singh and Ranchhod (2004) performed an empirical investigation on the relationship between market orientation and business performance in the context of the British machine tool industry. Their findings suggested that customer orientation and customer satisfaction orientation had a stronger influence on performance than the other dimensions. They also believed that managers could apply the multidimensional conceptualization to develop the particular types of orientations required for better performance. Moskalev (2010) investigated the relationship between host country laws restricting the capability of foreign bidders to conduct cross-border mergers and acquisitions (M&As) and the dynamics of domestic and foreign markets for corporate control. They reported that, as governments, especially governments of less wealthy, faster growing economies, relax their cross-border M&A laws, foreign bidders tend to increase the number of cross-border M&As. The likelihood that foreign bidders carry out cross-border M&As in which they collect a controlling stake in the target was higher in host countries with less restrictive cross-border M&A laws. In such countries, foreign bidders were also more likely to implement cross-border M&As than cross-border joint ventures as the means for entering the market.
The proposed study
There are 45 questions measuring the impacts of the 9 factors. The sample size of the proposed study is calculated from the population size N = 1000, which yields n = 385 (a sketch of this calculation is given after Fig. 1). The questionnaires were distributed among 400 people, and Cronbach's alpha (Cronbach, 1951) was calculated as 0.86, which means the results are reliable. In our survey, there were 383 (95.8%) male and 17 (4.2%) female respondents. In addition, Fig. 1 shows their job experiences.
Fig. 1. Job experiences of the participants (%)
As we can observe from Fig. 1, over 65% of the participants had at least 5 years of job experience. In terms of educational background, 38.5% only finished high school, 51.3% finished a two-year university college and the remaining respondents held a bachelor of science degree.
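A minimal sketch of the sample-size calculation referenced above is given below in Python. The original formula was not reproduced in the text, so this assumes the standard Cochran formula with z = 1.96, p = 0.5 and e = 0.05, which reproduces the reported n = 385; the finite-population correction for N = 1000 is shown for comparison.

import math

def cochran_sample_size(z=1.96, p=0.5, e=0.05):
    # Cochran's formula for a large (effectively infinite) population.
    return math.ceil(z ** 2 * p * (1 - p) / e ** 2)

def finite_population_correction(n0, N):
    # Adjust the Cochran estimate n0 for a finite population of size N.
    return math.ceil(n0 / (1 + (n0 - 1) / N))

n0 = cochran_sample_size()                        # 385, matching the reported sample size
n_fpc = finite_population_correction(n0, N=1000)  # 279 once the correction is applied
print(n0, n_fpc)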
The results
The main question of the survey concerns the important factors influencing customer satisfaction in free-zone areas. In order to answer this question we need to use factor analysis. The Kaiser-Meyer-Olkin measure of sampling adequacy yields 0.78, and the Chi-square for Bartlett's test of sphericity is 4784.497 with 903 degrees of freedom and a p-value of 0.000. These results confirm that the data are statistically suitable and that we can use factor analysis. Table 1 shows details of our findings for factor analysis.
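As an illustration, the adequacy statistics above can be reproduced with the factor_analyzer package; the file name and the 400 x 45 DataFrame of survey responses are hypothetical stand-ins for the study's unpublished raw data.

import pandas as pd
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

responses = pd.read_csv("survey_responses.csv")  # hypothetical 400 x 45 Likert-scale answers

chi_square, p_value = calculate_bartlett_sphericity(responses)  # H0: identity correlation matrix
kmo_per_item, kmo_overall = calculate_kmo(responses)            # adequacy per item and overall

print("Bartlett chi-square = %.3f, p = %.4f" % (chi_square, p_value))
print("Overall KMO = %.2f" % kmo_overall)  # the survey reports 0.78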
Each eigenvalue gives the amount of the total variance that can be explained by that factor; the higher the value, the more it explains. Factors whose eigenvalues are greater than one are the best candidates. As we can observe from the results of Table 1, 9 factors explain 49.33% of the total variance of the variables. The first factor has an eigenvalue of 6.37, which represents 14.18% of the total variance. In order to find the optimal number of factors, we use the Scree plot, and Fig. 2 shows details of our results.
Fig. 2. Scree plot
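The eigenvalue extraction behind Table 1 and the scree plot in Fig. 2 can be sketched as follows, again assuming the hypothetical survey_responses.csv file from the previous snippet.

import pandas as pd
import matplotlib.pyplot as plt
from factor_analyzer import FactorAnalyzer

responses = pd.read_csv("survey_responses.csv")  # hypothetical raw data, as above

fa = FactorAnalyzer(n_factors=9, rotation="varimax")
fa.fit(responses)

eigenvalues, _ = fa.get_eigenvalues()  # eigenvalues of the correlation matrix
# Share of total variance per factor; 6.37 / 45 items gives roughly 14.2%,
# consistent with the 14.18% reported for the first factor.
explained = eigenvalues / len(responses.columns) * 100
print(explained[:3])

plt.plot(range(1, len(eigenvalues) + 1), eigenvalues, marker="o")
plt.axhline(1.0, linestyle="--")  # Kaiser criterion: retain eigenvalues above 1
plt.xlabel("Factor number")
plt.ylabel("Eigenvalue")
plt.title("Scree plot")
plt.show()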
As we can observe from the results of Fig. 2, three factors represent the optimal number of important components. Table 2 shows details of the communality of the different questions of the survey. The lowest value belongs to item 17, sanitary issues, and the highest value belongs to item 33, which is cultural adaptability.
Table 2
The results of communality
As we can observe from the results of Table 3, all nine hypotheses of this survey have been confirmed, which means all factors influence customer satisfaction significantly. In other words, an increase of one percent in competitive brand will increase customer satisfaction by 0.19%. According to the results of Table 3, foreign investment has the highest impact on customer satisfaction, followed by supply chain management, marketing planning and import management.
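A minimal sketch of how such hypotheses could be examined with structural equation modeling in the semopy package is shown below. The factor names and item assignments are hypothetical placeholders, since the paper does not publish its model specification; only two of the nine factors are spelled out for brevity.

import pandas as pd
import semopy

# Hypothetical specification: latent factors measured by their survey items,
# with customer satisfaction regressed on the explanatory factors.
model_desc = """
ForeignInvestment =~ q1 + q2 + q3
SupplyChain       =~ q4 + q5 + q6
Satisfaction      =~ q7 + q8 + q9
Satisfaction ~ ForeignInvestment + SupplyChain
"""

data = pd.read_csv("survey_responses.csv")  # hypothetical raw data
model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect())  # path coefficients and p-values for each hypothesis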
Conclusion
In this paper, we have presented an empirical study to detect the effects of different factors in border markets on customer satisfaction. The proposed study of this paper has considered the impacts of nine factors including competitive brand, foreign investment, management of imported goods, governmental supportive rules, monetary policies, supply chain management, buyers, marketing planning and import management on the satisfaction of customers who purchase in border markets. The proposed model of this paper designed and distributed 400 questionnaires among some experts and used factor analysis and structural equation modeling to examine nine hypotheses. The results have indicated that there is strong evidence that all nine factors impact customer satisfaction, and that foreign investment has the highest impact on customer satisfaction, followed by supply chain management, marketing planning and import management.
Table 1
The results of Factor Analysis | 2018-12-01T16:44:19.320Z | 2013-01-01T00:00:00.000 | {
"year": 2013,
"sha1": "138a31b3eddef4dc1a3a5ef28fa7f33dff54dbb5",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5267/j.msl.2012.10.028",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "138a31b3eddef4dc1a3a5ef28fa7f33dff54dbb5",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Psychology"
]
} |
250323482 | pes2o/s2orc | v3-fos-license | A Novel Algorithm to Secure Data in New Generation Health Care System from Cyber Attacks Using IoT
░ ABSTRACT- The rise of digital technology has essentially enhanced the overall communication and data management system, facilitating essential medical care services. Considering this aspect, the healthcare system successfully managed patient requirements through online services and improved the patient experience. However, the lack of adequate data security and increased digital activity during Covid-19 made the healthcare system a soft target for hackers to gain unauthorized access and steal crucial and sensitive information. Countries such as the UK and the US recently faced such challenges, highlighting the need for effective data maintenance. IoT emerged as one of the critical solutions for data management systems in terms of addressing data security, which can certainly enhance overall data collection, storage and maintenance, as well as the prediction of potential data security breaches so that appropriate measures can be taken. The concerned research considers a secondary data collection process where the necessary data are collected from original scholarly articles, books and journals. Apart from that, a positivism research philosophy, a deductive research approach and a descriptive research design have been considered for this study. Qualitative data analysis techniques have also been incorporated into this research. Upon viewing the pros and cons of IoT algorithms, DES, AES, the Triple Data Encryption Standard, and RSA encryption can be used in the healthcare system to facilitate data protection.
░ 1. INTRODUCTION
With the lockdown's adoption and the introduction of the virus, the world began to shift toward digital functioning, which includes healthcare services. Telemedicine and virtual therapy have developed as a new trend in the healthcare system, making the sector a potential target for cybercriminals and hackers. A spate of cyberattacks against national healthcare systems has been reported in several countries, including the United States, the United Kingdom, the Czech Republic, and others. Upon considering the prevalence of cyberattacks, the need for effective data security maintenance emerged as one of the key trends within the healthcare system [3]. IoT appears to be one of the critical solutions for mitigating data protection and security challenges across different sectors. Specifically, for e-health devices, IoT emerged as a powerful application that certainly enhances eventual data security. IoT consists of sensors that collect data that are useful for consumers. Upon considering this factor, IoT is also used to protect data from potential cyber threats by utilising its full potential in addressing data security loopholes within a system. IoT sensors are mainly used to identify these loopholes and enhance data security within any sector [4].
"The Data Encryption Standard (DES)" appears to be one of the significant algorithms of IoT that essentially prevents potential data breaches within a system. It is a symmetric-key algorithm to ensure the encryption of digital data. On the other hand, data encryption is considered one of the effective ways to prevent potential data breaches and secure all types of data storage within a system. DES can avert possible attacks on the digital databases and ensure the data storage process upon viewing this factor. In addition, "Advanced Encryption Standard (AES)" is also considered an effective IoT algorithm that is used by several sectors to prevent potential data breaches through cyber-attacks [5]. On the other hand, "Triple Data Encryption Standard", "Twofish Encryption Algorithm", and "RSA Encryption" emerged as the necessary IoT algorithms used by several sectors across the world in terms of preventing data breaches and protecting data security.
░ 2. LITERATURE REVIEW
Over the years, as technology dependence increased, threats from cyberattacks emerged as a significant challenge that certainly hampers the level of data security for users. Moreover, the emergence of Covid-19 further highlighted this challenge due to poor data security maintenance and increased cyberattacks. "Malware", "Phishing", "SQL injections", "Man-in-the-Middle (MIM) attacks", "Denial-of-Service (DOS) attacks", and "Password attacks" emerged as the most common yet effective cyberattacks that certainly cause compromised data security within a system [6]. On the other hand, ransomware attacks and DOS attacks appear to be the most significant cyberattacks witnessed by the healthcare system. These particular attacks directly hamper data security by hacking digital data sources. Apart from that, phishing and password attacks emerged as other relevant and significant attacks that compromise data sources within the healthcare system due to its lack of adequate protection. Several countries, such as the US and the UK, witnessed significant challenges related to data protection and data security violations within the healthcare system during Covid-19. Significant challenges such as database hacks, unauthorized access to sensitive patient data, and the stealing of Covid-19 research-related data have been observed in the world's healthcare systems [7]. Countries such as the US and UK mostly witnessed malware, ransomware, phishing and password attacks in healthcare during Covid-19. The main aim of those cyberattacks was to gain access to central databases and steal information related to vaccination, Covid-19 research progress and sensitive patient data [8]. This highlights the significant challenges the concerned sector faces and the significance of data protection within the healthcare sector. On the other hand, facilities such as telemedicine and virtual treatment using digital platforms undoubtedly contributed to the increased cyberattacks across the world during the Covid-19 pandemic. The need for adequate data protection and the utilization of active technologies appears relevant at the current time. IoT is considered one of the most effective solutions to data security challenges and ensures appropriate data collection. In-built sensors in IoT enable effective data collection, which is further enhanced by the different algorithms to improve data security management [9]. The Advanced Encryption Standard is considered one of the most compelling IoT algorithms, essentially addressing data security challenges by enabling early prediction of potential cyberattacks. AES uses keys of 192 and 256 bits for heavy-duty encryption, effectively secures data access and protects the network from possible collapse due to unauthorized access by malware. On the other hand, RSA encryption is an IoT algorithm used by modern computers to encrypt and decrypt messages [10]. It appears to be one of the significant alternatives healthcare systems can use to prevent significant data breaches and potential cyberattacks. It is known as "public-key cryptography" because it gives one key to the public and keeps one key private.
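The asymmetric scheme just described can be sketched with the pycryptodome library; the 2048-bit key size and OAEP padding are conventional choices assumed here rather than taken from the paper.

from Crypto.PublicKey import RSA
from Crypto.Cipher import PKCS1_OAEP

key_pair = RSA.generate(2048)       # the private key stays with the institution
public_key = key_pair.publickey()   # the public key can be distributed freely

encryptor = PKCS1_OAEP.new(public_key)
ciphertext = encryptor.encrypt(b"vaccine research update")

decryptor = PKCS1_OAEP.new(key_pair)  # only the private key can decrypt
assert decryptor.decrypt(ciphertext) == b"vaccine research update"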
░ 3. METHODOLOGY
This research is based on secondary data in order to present comprehensive findings and enhance the effectiveness of the research outcomes. The main reason behind secondary data collection is its ability to quickly gather a wide range of data. Upon considering this aspect, the concerned research has collected the necessary data on the types of cyberattacks faced by healthcare during Covid-19. On the other hand, a positivism research philosophy supports conducting a scientific study and reaching effective outcomes. Considering this aspect, effective integration of the concerned research philosophy was integral to producing an objective-driven outcome [11]. Identifying possible data patterns and trends is a significant advantage of the positivism research philosophy. Applying this particular aspect, the concerned research identified the necessary trends regarding cyberattacks in the healthcare system during Covid-19. In addition, a deductive research approach has also been undertaken in this research to conduct a logical analysis and enhance data generalization. This particular aspect has helped the concerned research incorporate efficient principles for providing a clear idea of concepts and variables, which happens to be one of the critical benefits of the deductive approach [12]. Moreover, the appropriate application of the deductive approach also helped the concerned research save time and produce an on-point discussion. Upon considering this particular aspect, the practical application of the deductive research approach has allowed this study to outline concepts and variables and conduct thorough research on them.
A descriptive research design has been implemented to provide a systematic description of the phenomena and explain the experience during the study in the form of practical data analysis [13]. Moreover, the ability to conduct an in-depth analysis regarding the role of IoT algorithms in preventing cyberattacks appears to be another significant advantage provided by this particular research design. Secondary data for this research has been collected from authentic online sources. Major databases such as "Google Scholar" and "ProQuest" have been accessed to collect relevant scholarly journals for this research. In addition, online journals and websites have also been accessed to gather adequate information for the concerned study. The qualitative data analysis method has been used in this research to produce quality insights on the research topic. One of the significant advantages of the concerned data analysis technique is its ability to work as a content generator, facilitating the research outcome [14]. The concerned research presents data analysis and findings together using this particular data analysis method.
░ 4. ANALYSIS AND INTERPRETATION
The need for adequate data protection became more relevant during the Covid-19 outbreak due to the massive number of cyberattacks on health institutions around the world. It has been observed that the lack of appropriate data protection systems and increased digital access essentially encouraged cyber attackers to launch online attacks on healthcare institutions [15]. Several countries such as the UK, the US and the Czech Republic have witnessed such attacks and the theft of sensitive medical information. Patient data, Covid-19 vaccination data and research progress emerged as the primary data targeted by hackers during the pandemic. Frequent data-theft attempts and the lack of adequate security emerged as major concerns for the healthcare system during this time [16].
It is further believed that increased online medical activities such as telemedicine and digital treatment have certainly made healthcare a main target for hackers seeking access to sensitive medical information. Considering these contemporary healthcare challenges, IoT algorithms appear to be one of the critical means of enhancing data security within the healthcare system. The practical application of different IoT algorithms facilitates data security in the healthcare system in different ways. Primarily, IoT is used for effective data collection, which enhances data analysis and the prediction of potential trends in the healthcare sector [16]. IoT sensors play an influential role in data collection. However, this particular feature can further be utilized to predict possible cyberattacks on the system. To ensure effective prediction of potential security threats, IoT algorithms are used to track the websites health institutions access, along with the extent to which access is allowed to external and dubious sources. Based on these data, IoT algorithms effectively predict the likelihood of a data breach within the healthcare system and its probable solutions. On the other hand, DES, AES, the Triple Data Encryption Standard, RSA encryption and the Twofish encryption algorithm appear to be powerful IoT algorithms used for data protection within the healthcare system. Effective implementation of an IoT algorithm further ensures that digital data are secured with end-to-end encryption, which can certainly be a probable solution to the identified data security challenges [17]. Upon considering this factor, the healthcare system can essentially implement DES and RSA encryption to enhance its data security, which can facilitate the eventual outcome of data management within the healthcare system. This can also ensure sensitive data security within the concerned sector and boost overall data protection by predicting and preventing potential data breaches. The AES algorithm is a symmetric block cipher that encrypts and decrypts information, enhancing data security. It converts data to an unintelligible form called "ciphertext", which can be decrypted when needed, as sketched below. Upon considering this aspect, the precise role of IoT algorithms in enhancing data security within the healthcare system can be seen.
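As a brief illustration of the symmetric block cipher described above, the following sketch encrypts a sample record with AES in GCM mode using the pycryptodome library. The record content, key handling and mode choice are illustrative assumptions rather than the paper's own implementation.

from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

# Hypothetical sensitive record; in practice this would come from the health database.
record = b"patient_id=1042;diagnosis=confidential"

key = get_random_bytes(32)           # 256-bit AES key for heavy-duty encryption
cipher = AES.new(key, AES.MODE_GCM)  # GCM mode also authenticates the ciphertext
ciphertext, tag = cipher.encrypt_and_digest(record)

# Decryption verifies the tag, so any tampering with the stored data is detected.
decipher = AES.new(key, AES.MODE_GCM, nonce=cipher.nonce)
assert decipher.decrypt_and_verify(ciphertext, tag) == record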
░ 5. DISCUSSION AND FINDINGS
Over the years, digital data management emerged as one of the major trends in healthcare in terms of maintaining solid databases and facilitating operations. Considering this aspect, effective data management emerged as the ultimate key to success for all industries worldwide. The emergence of Covid-19 further increased the use of digital platforms to ensure effective data management and communication among patients, doctors and other medical staff. Telemedicine and digital treatment appear to be key trends during the pandemic, which helped millions of people [18]. However, increased online activity emerged as a critical reason behind the active cyberattack incidents within countries such as the US and UK. Healthcare emerged as a soft target for cyber attackers to conduct massive online attacks and gain access to sensitive information. Data related to confidential patient details, Covid-19 research progress and details about vaccination availability emerged as key targets, which certainly created a collective threat for the health system in terms of securing sensitive medical information from hackers [18]. It was further observed that ransomware, malware attacks, password attacks, phishing and DOS emerged as the most common yet effective cyberattacks faced by the healthcare system, which essentially hampered healthcare data management. On the other hand, IoT algorithms emerged as an effective solution for the overall data management process in the healthcare system. Effective integration of IoT usually helps with appropriate data collection, which certainly covers data related to business trends, stakeholder expectations and essential market trends, gathered through IoT sensors [18]. The same can be used for data security purposes in predicting potential cyberattacks and alerting the system about them. This, in turn, can effectively help the entire healthcare system take appropriate measures to prevent potential cyberattacks. Upon considering the pros and cons of the IoT algorithms, DES, AES, the Triple Data Encryption Standard, and RSA encryption emerged as powerful IoT algorithms for enhancing data security within the healthcare system. On the other hand, the appropriate application of the necessary IoT algorithms can undoubtedly create a robust encryption system, which is considered the eventual goal of maintaining adequate data privacy. For the same purpose, a Naïve Bayes classifier-based algorithm has been used in this research study, through which conditional probabilities can be calculated. The main advantage of such an algorithm is that it is tractable, which means that intrusions into the computer can be detected easily.
Hence, more security can be provided. In addition to this, Bayes' theorem also provides a principled way of calculating probabilities under conditions. The simplified, or naïve, form of the Bayes calculation is
P(yi | x1, x2, …, xn) ∝ P(x1, x2, …, xn | yi) * P(yi)
In the next stage, the joint conditional probability of all the variables is factorized into separate conditional probabilities. These independent conditional probabilities are then multiplied together:
P(yi | x1, x2, …, xn) ∝ P(x1|yi) * P(x2|yi) * … * P(xn|yi) * P(yi)
In this research study, a small example on a machine learning dataset has been derived below.
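A minimal sketch of such a classifier on a synthetic intrusion dataset is given below, using scikit-learn's GaussianNB; the two features and the labels are hypothetical stand-ins for the study's unspecified dataset.

import numpy as np
from sklearn.naive_bayes import GaussianNB

# Hypothetical features per connection: [requests_per_minute, failed_logins]
X = np.array([[5, 0], [7, 1], [6, 0], [120, 9], [95, 7], [110, 8]])
y = np.array([0, 0, 0, 1, 1, 1])  # 0 = normal traffic, 1 = intrusion

clf = GaussianNB().fit(X, y)

# P(yi | x) is computed from the product of per-feature likelihoods and the
# class prior, i.e. exactly the naive factorization written above.
print(clf.predict([[100, 6]]))        # -> [1], flagged as an intrusion
print(clf.predict_proba([[100, 6]]))  # posterior probabilities for both classes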
Hence, the input-output elements of the first five examples illustrate the kind of data from which such a classifier learns. Using these algorithms, the healthcare system can facilitate data protection. Experts say AES might take several years to be broken by hackers, which mainly highlights its effectiveness within the healthcare system. In addition, AES lasts longer than the other IoT algorithms, making it a suitable fit for the healthcare system. RSA and AES, both IoT algorithms, are considered equally effective in protecting data and ensuring appropriate security in database systems. Upon viewing this aspect, the healthcare system can essentially use either of these algorithms to facilitate data protection and security enhancement.
░ 6. CONCLUSION
With time and the rise of digital technologies, the need for effective data collection, storage and management emerged as a crucial aspect for industries worldwide. Effective integration of digital technology during the Covid-19 pandemic helped numerous people avail themselves of health facilities through telemedicine and digital treatment. However, this appears to be one of the critical reasons behind the increased cyberattack challenges within the healthcare system. Considering this aspect, the need for the effective integration of IoT algorithms can be identified. IoT algorithms such as DES, AES, the Triple Data Encryption Standard and RSA encryption can essentially enhance the data protection of the healthcare system at present. | 2022-07-07T15:02:43.070Z | 2022-06-30T00:00:00.000 | {
"year": 2022,
"sha1": "b7267d8fb898dcf12d10ec8952a29401d4d42145",
"oa_license": "CCBY",
"oa_url": "https://ijeer.forexjournal.co.in/papers-pdf/ijeer-100236.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "f9679c4f6d77cd798bbc940eaf9c55c274b0bd7b",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234709449 | pes2o/s2orc | v3-fos-license | Glioblastoma Presenting Only as Cortical "Ribbon Sign" in the Early Stage: A Case Report and Literature Review
Background: Glioblastoma (GB) is the most common primary malignant brain tumor and occurs predominantly in the white matter of the brain. MRI often shows an irregular mass with uneven internal signals, necrosis, surrounding edema and a space-occupying effect. The malignancy grade of GB is high, and it often relapses after surgery. Case presentation: A 66-year-old male patient was admitted to our hospital due to sudden left superior strabismus and convulsions. During the patient's first hospitalization, MRI showed only an abnormal cortical signal, and the diagnosis was viral encephalitis. After antiepileptic and antiviral treatment and relief of brain edema, the patient was discharged. Seven months later, he was admitted to the hospital again because of memory impairment and slow reaction speed. MRI showed that there was an obvious mass in the original lesion area, and this was diagnosed as glioblastoma. Postoperative pathology confirmed glioblastoma (WHO grade IV). Conclusions: The clinical diagnosis, treatment and imaging manifestations of this case of GB are reported here to improve understanding of this type of GB, which shows only abnormal cortical signals in the early stage, and to reduce misdiagnosis.
Background
Glioblastoma (GB) is the most common primary malignant brain neoplasm [1] and mainly occurs in the white matter of the frontal lobe, temporal lobe and deep brain. The growth pattern of GB is invasive or centrally expansive growth along white matter fibers and meninges. Some glioblastomas may involve the cerebral cortex. However, GB with its main body located in the cortex is relatively rare, and few imaging studies relate to it; GB with only cortical lesions is even rarer. It has been reported in the literature [2] that the survival time of patients with GB located in the cerebral cortex and subcortex is longer than in those with GB in other regions. Therefore, cerebral cortical and subcortical GB may indicate a better prognosis.
Case Presentation
History: A 66-year-old male patient was admitted to the hospital on June 12, 2017 due to sudden strabismus and convulsions for 2 days. The symptoms gradually worsened to twitching three times in 24 hours, and he was confused. In the past 11 years, his diabetes mellitus had been well controlled by drugs.
Physical examination: blurred consciousness, a poor mental state, and unclear speech. His left upper limb muscle strength was grade 4, left lower limb muscle strength was grade 4, right upper limb muscle strength was grade 5, right lower limb muscle strength was grade 5, and muscle tension was high. Pathological reflexes: Babinski sign: left (+), right (+); Chaddock sign: left (+), right (+).
Imaging examination: A head MRI (Fig. 1-4) showed swelling of the right parietal temporal gyrus, thickening of the cerebral cortex, low signal intensity on T1WI, high signal intensity on T2WI, iso-signal on the FLAIR sequence, high signal intensity on the DWI sequence, and a ribbon-shaped lesion with a clear boundary. There was no obvious involvement of the subcortical white matter, and the adjacent sulcus had become shallow. He was diagnosed with viral encephalitis.
Laboratory examination: cerebrospinal fluid glucose was normal; protein levels had increased to 1083.00 mg/L; cerebrospinal fluid immunoglobulin levels were high: albumin was 598.00 mg/L, IgA was 15.80 mg/L, IgG was 72.20 mg/L, and IgM was normal; no malignant tumor cells were found in cerebrospinal fluid smears; and cerebrospinal fluid bacterial smears showed no obvious abnormalities.
Treatment process: according to the patient's symptoms, signs and auxiliary examinations, viral encephalitis with epilepsy was considered. After antiepileptic and antiviral treatment and relief of brain edema, the patient was discharged on June 20, 2017 after improvement. On January 24, 2018, the patient was readmitted because of "memory loss with slow responses for 20 days". He showed lethargy, decreased calculating ability, and unstable walking. Physical examination: conscious state, good mental state, aphasia, short-term memory loss, normal long-term memory, normal instantaneous memory, decreased calculation ability. His orientation to person was normal, his orientation to place was decreased, his orientation to time was decreased, and his judgment was normal.
Imaging examination: a head MRI scan (Fig. 5-8) showed abnormal signals caused by irregular masses in the right parietal occipital and temporal lobe; T1WI showed low signal intensity, T2WI showed isointense signals, and the FLAIR and DWI sequences showed mixed high and low signals. The boundary was unclear, and a large area of necrosis could be seen in the center of the lesion. An edema signal could be seen around it. The trigone, inferior horn and posterior horn of the right ventricle were involved and deformed. MR enhancement (Fig. 9): The lesion showed inhomogeneous ring enhancement and a central necrotic area without enhancement and was considered a malignant tumor, with GB likely. Magnetic resonance spectroscopy analysis (Fig. 10) showed that NAA was significantly lower in the lesion area than in the contralateral white matter, while Cho was significantly higher and Lac-Lip was significantly higher in the former than in the latter, suggesting that it was a malignant tumor with necrosis. ASL (Fig. 11) showed that the cerebral blood flow (CBF) of the solid components of the lesions had significantly increased, suggesting that the lesion was a malignant tumor.
Laboratory examination: cerebrospinal fluid glucose was high, at 6.40 mmol/L, and protein had significantly increased to more than 3000.00 mg/L. Immunoglobulin in the cerebrospinal fluid had increased: albumin > 1630.00 mg/L, IgA > 43.10 mg/L, IgG > 109.00 mg/L, and IgM > 4.95 mg/L. No malignant tumor cells were found in cerebrospinal fluid smears. There was no obvious abnormality in the cerebrospinal fluid bacterial smear. Treatment process: the patient was diagnosed with an intracranial malignant tumor. A craniotomy was performed under general anesthesia on February 3, 2018 after admission. The tumor was located in the right temporal and occipital lobes, and part of the tumor protruded to the surface of the brain. The tumor was gray-red, soft, and rich in blood supply and had no capsule. The boundary between the tumor and the brain tissue was not clear. Under a microscope, the adhesion between the tumor and the brain tissue was separated along the edge of the tumor and reached the midline, down to the tentorium of the cerebellum and deep to the occipital horn of the lateral ventricle. The patient was discharged after surgery. Pathological diagnosis: the part of the lesion in the brain tissue was 8 × 4.5 × 3 cm, the section was grayish white, its texture was soft, some areas were grayish brown, and the range was 6 × 5 × 5 cm (Fig. 12-13). Consideration was given to (right temporal occipital) glioblastoma (WHO grade IV, size 6 × 5 × 5 cm).
Discussion And Conclusion
This report describes one case of glioblastoma with typical clinical and radiographic manifestations of viral encephalitis in the early stage of the lesion. These findings are consistent with nine other English-language reports describing patients with symptoms of HSE ultimately being diagnosed with GBM or gliomatosis cerebri [3][4][5][6][7][8]. Several possible mechanisms of disease are consistent with the atypical progression of glioblastoma described in these cases. First, patients may have developed viral encephalitis and glioblastoma in the same place coincidentally, but given the relative rarity of these diseases and the time course of their discovery and development, this is an unlikely possibility.
Alternatively, viral encephalitis may play a role in the formation of glioma. However, most studies suggest that viral encephalitis plays a tumor-regulating rather than an oncogenic role in the development of glioblastoma [9][10][11]. Therefore, it is most likely that the glioblastoma was already present at the first examination but involved only the local cerebral cortex.
Several causes of the misdiagnosis of this case were analyzed, and the differences between GB and viral encephalitis can be summarized as follows: (1) epilepsy was the initial symptom of the patient and is a common clinical manifestation of encephalitis, while GB and other brain tumors are also prone to induce epilepsy, making it difficult to differentiate the two based only on clinical symptoms. Generally, encephalitis progresses rapidly, and the progression of tumors is relatively slow, but some gliomas or gliomatoses may also show acute neurological symptoms similar to encephalitis symptoms [12][13][14].
Generally, viral encephalitis often presents with a history of fever, but this patient had no history of fever, and this may be the main characteristic that distinguishes this case from encephalitis based on clinical symptoms. (2) On the first MR examination, the patient showed only cortical swelling and an abnormal cortical signal without obvious vasogenic edema or a space-occupying effect, which is a common manifestation of viral encephalitis. This was the main cause of misdiagnosis during the first examination. The appearance of these imaging findings may have been related to invasive growth along the cerebral cortex in the early stage of GB. In the early stage of GB, the growth of the vascular network is weak and the blood-brain barrier is not destroyed; there is therefore no obvious edema or space-occupying effect. An MR plain scan of viral encephalitis usually shows high signal intensity on T2WI and FLAIR that involves the cortical and subcortical white matter, indicating local cytotoxic edema. Diffusion-weighted imaging (DWI) is the most sensitive sequence for detecting the acute phase of encephalitis, typically manifesting as high-signal lesions with restricted apparent diffusion coefficient (ADC) values [15][16][17].
Although the patient showed a high signal on the DWI sequence, it was isointense on the FLAIR sequence, indicating that the abnormal signal was not caused by cytotoxic edema but by tumor cell infiltration, which did not accord with the imaging manifestations of encephalitis. This was the main point of discrimination between this case and viral encephalitis on routine imaging examination. This patient did not have a CT examination at the first visit, but on a CT scan, viral encephalitis should appear with low density, indicating local edema, while GB often appears with a slightly higher density, indicating locally dense tumor cells.
(3) In this case, the symptoms of the patient significantly improved after treatment with antiepileptic and antiviral drugs and the relief of brain edema, further confirming, at the time, the diagnosis of viral encephalitis. However, it should be noted that although the relief of epileptic symptoms was achieved by antiepileptic drugs, these drugs did not inhibit the growth of the tumor cells.
The imaging manifestation of high signal intensity in the cerebral cortex on DWI and T2WI sequences is called the cortical "ribbon sign" and is also known as the "lace sign". In addition to viral encephalitis, diffuse cortical hyperintensity on DWI and T2WI sequences should be distinguished from that observed in Creutzfeldt-Jakob disease, MELAS syndrome, epilepsy-mediated brain changes, autoimmune encephalitis and other diseases. Creutzfeldt-Jakob disease is caused by prion infection, and mainly manifests with progressive dementia, mental disorder, and muscle spasm [18]. The main manifestations of Creutzfeldt-Jakob disease on MR are high signal intensity on T2WI and DWI in the cortex and thalamus.
Typical MR manifestations include the "ribbon sign" and the "hockey stick sign" [19]. Mitochondrial encephalomyopathy is a multisystem disease. Typical MELAS syndrome mainly occurs in the cortex and subcortex. The deep white matter is not involved. It shows high signal intensity on T2WI and DWI. An increase in the lactate peak in brain tissue can be found by MRS and is an important characteristic of MELAS syndrome [20]. In patients with a clear history of epilepsy, epilepsy-mediated brain changes also need to be considered. The main manifestations are high signal intensity on T2WI in the cortex, subcortex, basal ganglia, corpus callosum and cerebellum, and half of affected patients show high signal intensity on DWI [21]. The main feature of this condition is lesions that disappear quickly on short-term review. Autoimmune encephalitis refers to a class of encephalitis mediated by autoimmune mechanisms. Its pathogenesis is related to anti-neural antibodies. Its imaging manifestations are similar to those of infectious encephalitis, but affected patients generally also have inducing factors such as tumors and infections [22]. The pathological basis of the above diseases is similar to that of encephalitis, and all of them involve cytotoxic edema of local brain tissue, so the lesions appear with low density on CT and high signal intensity on FLAIR sequences, while GB is a local tumor cell infiltration and therefore appears as high density on CT and iso-signal on FLAIR sequences; this is the most important difference in imaging characteristics between cortical GB and the above lesions.
GB is highly malignant and very easily relapses after surgery [23], so early detection of these lesions is particularly important. In this case, only the cortex was involved on MRI in the early stage of the tumor, making it easy to misdiagnose. Therefore, we speculate that when GB infiltrates only the cortex in the early stage after onset, there may be imaging findings similar to the "ribbon sign" in the cortex. Combined with our diagnostic experience in this case, if only abnormal cortical ribbon-like signals are found on MR, we should also consider the possibility of cortical GB in addition to encephalitis and other diseases. In such situations, a multimodal MR examination should be performed and combined with CT examination. If necessary, brain biopsy should be performed to reduce the rate of misdiagnosis, reduce the possibility of delays, and achieve early diagnosis and treatment.
Consent for publication
The patient's family members have signed written informed consent and allowed the publication of this case report.
Availability of data and material
All data generated or analysed during this study are included in this published article [and its supplementary information files].
The authors declare that they have no competing interests.
Funding
None.
Authors' contributions
JZ was a major contributor in writing the manuscript. CC, CD and LN provided the clinical history and laboratory examination data. XL was responsible for final verification of the article and is the corresponding author. All authors read and approved the final manuscript.
Figure 1 On the first examination, an MR plain scan showed swelling of the right parietal temporal gyrus and thickening of the local cerebral cortex, low signal intensity on T1WI, high signal intensity on T2WI, iso-signal on a FLAIR sequence, high signal intensity on a DWI sequence, and a ribbon-shaped lesion with a clear boundary (as shown by the red arrow).
Figure 2
On an MR plain scan, the mass in the right parietal-occipital temporal lobe showed low signal on T1 and mixed high and low signal on T2, and the FLAIR and DWI sequences showed mixed high and low signals. The boundary of the lesion was unclear, and a large area of edema was seen around it. MRS showed that NAA was significantly lower in the lesion area than in the contralateral white matter, while Cho was significantly higher and Lac-Lip was significantly higher in the former than in the latter. | 2020-05-28T09:16:42.942Z | 2020-05-27T00:00:00.000 | {
"year": 2020,
"sha1": "55f2920d04fd1bf0302e1bf57d12a2f51708cc26",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.21203/rs.3.rs-30780/v1",
"oa_status": "GREEN",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "60b01f5485fa8f8b4f337246306335080dcd4e8c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
33433316 | pes2o/s2orc | v3-fos-license | Preventing infective complications in inflammatory bowel disease
Over the past decade there has been a dramatic change in the treatment of patients with Crohn’s disease and ulcerative colitis, which comprise the inflammatory bowel diseases (IBD). This is due to the increasing use of immunosuppressives and in particular the biological agents, which are being used earlier in the course of disease, and for longer durations, as these therapies result in better clinical outcomes for patients. This, however, has the potential to increase the risk of opportunistic and serious infections in these patients, most of which are preventable. Much like the risk for potential malignancy resulting from the use of these therapies long-term, a balance needs to be struck between medication use to control the disease with minimization of the risk of an opportunistic infection. This outcome is achieved by the physician’s tailored use of justified therapies, and the patients’ education and actions to minimize infection risk. The purpose of this review is to explore the evidence and guidelines available to all physicians managing patients with IBD using immunomodulating agents and to aid in the prevention of opportunistic infections. © 2014 Baishideng Publishing Group Inc. All rights reserved.
INTRODUCTION
Patients with one of the inflammatory bowel diseases (IBDs), Crohn's disease (CD) or ulcerative colitis (UC), are at an increased risk of infection, which is partly inherent to the diseases themselves, but may also be due to the therapies used in their management. The pathogenesis of IBD is potentially secondary to an inappropriate innate immune response to normal colonic flora and this may result in the lack of an appropriate immunological response to potential pathogens [1]. In the more severe cases of IBD, patients may suffer from concurrent malnutrition and can need radical surgical procedures, which can further compromise the patients' immunological responses [2]. The drugs required for disease control, such as the corticosteroids, immunological modulators like the thiopurines, methotrexate and cyclosporine, as well as the anti-tumor necrosis factor alpha (TNFα) medications, also have as their primary function the inhibition, and control, of immune system activity. Therefore, these can further reduce the immunological responses resulting in an increased risk of opportunistic infection.
The prevalence of opportunistic infections in IBD, however, is difficult to assess, as this can vary markedly between countries; but as they may result in mortality within the IBD patient population, their avoidance is of great importance [3][4][5][6]. As an example, the background risk of tuberculosis (TB) in Spain is high at 21/100000, where it is considered endemic, and the risk of infection can increase by up to 90-fold in IBD patients receiving a TNFα medication [7]. By contrast, there is a much lower background prevalence of TB in countries like the United States, at 6.8/100000 [8], and 0.9/100000 in the non-indigenous Australian-born population [9]. The risk to both the general population and the immunosuppressed IBD patient is thus vastly lower in these countries, as was demonstrated in an Australian and New Zealand study examining the prevalence of serious infections in IBD patients receiving a TNFα agent, in which not a single patient suffered from either primary or reactivated TB, despite 3 patients receiving TB chemoprophylaxis due to a positive Quantiferon gold test prior to the initiation of TNFα therapy [9]. While there is much concern regarding the TNFα drugs, as they may result in reactivation of granulomatous infections, particularly TB [10], there is frequently less emphasis given to the other immunomodulating medications and whether they should also be regarded with caution, especially when used in combination with the TNFα therapies. The future of IBD medicine is, however, moving towards more biological medications (certolizumab, golimumab, natalizumab and vedolizumab) and the use of combination therapies. The risk-to-benefit ratio of these medications for the IBD patient thus needs to be continually assessed and monitored in order to give the best outcomes, much like the balancing act required to maintain IBD remission while minimizing the risk of cancers in these patients [11].
Many infections have been associated with the use of the IBD medications; however, some may be specifically due to the mechanisms of action of individual medications [12]. Patients on the thiopurine agents appear to be at greater risk of developing viral infections like cytomegalovirus (CMV), Epstein-Barr virus (EBV) and varicella zoster virus (VZV), which is thought to be secondary to the effect of the thiopurine metabolites on T cells, leading to the induction of apoptosis [12]. By contrast, macrophage function is primarily affected in patients receiving a TNFα agent, and it has clearly been documented that these medications reactivate TB; thus a meticulous screening program is required for these patients prior to undergoing these therapies [10].
There is thus a definite risk of infections other than TB with the use of the IBD medications, but overall they appear to be uncommon. In the Australia and New Zealand study, only 2.2% of the patient population receiving TNFα therapy suffered a serious opportunistic infection. Almost half of these cases, however, were on a combination of immunosuppressive therapies [9]. This is similar to the findings of one of the first studies investigating opportunistic infection rates, undertaken at the Mayo Clinic. This investigation demonstrated that the use of steroids, thiopurines and infliximab all impact the rate of opportunistic infections in IBD. It noted that steroid use alone increased the risk by 2.6 fold (95%CI: 1.4-4.7), but this increased further to 12.9 fold (95%CI: 4.5-37.0) when 2 or more of these drugs were used in combination [13].
However rare an opportunistic infection may be, the difficulty lies in recognizing and treating it once it has occurred, and in the fact that such infections can result in significant morbidity and mortality. Prevention is thus certainly regarded as much better than cure in these situations. The prevention of opportunistic infections is, therefore, both the patient's and treating physician's primary goal and can be achieved through the use of multiple modalities that include vaccinations, chemoprophylaxis and education of the patients and clinicians. Each of these factors is vital for the successful implementation of appropriate guidelines for the best patient management [14].
DEFINITION OF IMMUNOSUPPRESSION
An immunocompromised patient is someone in whom there is defective phagocytic, cellular, or humoral immunity, which leads to an increased risk of opportunistic infection and/or infective complications [15,16]. While the presence of active IBD can itself lead to an increased risk of infections, independent of immunomodulating drugs, secondary to loss of the intestinal mucosal integrity, the IBD patient is not considered as immunocompromised per se. IBD patients are thus considered as being immunosuppressed primarily as a result of the therapy they receive and/or from the presence of malnutrition [16,17]. The ECCO Consensus guidelines outline the various IBD therapies which classify a patient as being immunocompromised, and these include the following: (1) treatment with steroids (prednisone or its equivalent of > 20 mg/d, or 2 mg/kg per day if < 10 kg, for 2 wk or more, and within 3 mo of stopping); (2) treatment with therapeutic doses of a thiopurine or discontinuation within the 3 mo preceding; (3) treatment with methotrexate or discontinuation within the preceding 3 mo; and (4) treatment with a TNFα agent or discontinuation within the preceding 3 mo [16,17].
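Purely as an illustration of how the four ECCO criteria above could be encoded in a screening checklist, the sketch below is a hypothetical helper, not a clinical tool; the thresholds are taken directly from the listed criteria, and the paediatric weight-based steroid dose is omitted for brevity.

from dataclasses import dataclass

@dataclass
class TherapyStatus:
    prednisone_mg_per_day: float = 0.0    # or equivalent corticosteroid dose
    steroid_weeks: int = 0
    months_since_steroids: float = 99.0   # 99 = never treated / long ago
    months_since_thiopurine: float = 99.0
    months_since_methotrexate: float = 99.0
    months_since_anti_tnf: float = 99.0

def is_immunocompromised(t):
    # Criterion 1: > 20 mg/d prednisone (or equivalent) for >= 2 wk,
    # or within 3 mo of stopping such a course.
    steroids = ((t.prednisone_mg_per_day > 20 and t.steroid_weeks >= 2)
                or t.months_since_steroids <= 3)
    # Criteria 2-4: thiopurine, methotrexate or TNFα therapy now
    # or within the preceding 3 mo.
    return (steroids or t.months_since_thiopurine <= 3
            or t.months_since_methotrexate <= 3
            or t.months_since_anti_tnf <= 3)

print(is_immunocompromised(TherapyStatus(months_since_anti_tnf=1)))  # True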
VACCINATIONS
Understanding the role of vaccination in the IBD population is crucial for the patient, the specialist and the primary care physicians involved in patient care. As advances in medical therapies lead to healthier patients with a better quality of life, focus must shift from treating infection to maintaining well-being in our patients by the prevention of disease. Vaccination is one of those vital, but frequently forgotten, areas in infection prevention. Patients with IBD are at risk of the same vaccine-preventable illnesses as the general population, and since most IBD patients will be diagnosed after they have completed their childhood immunization schedules, and most will require immunosuppression therapy at some stage in their lifetime, the opportunity should be taken to explore each patient's immunization status at the time of the diagnosis of their IBD [15].
The institution of immunosuppressive and biological therapies also affects which vaccinations a patient is allowed to receive, and can affect the patient's response to vaccination, with some studies demonstrating a lower response rate to vaccination once on these agents [18,19]. There is usually only a small window of opportunity in which to vaccinate the patient prior to the institution of treatment with an immunomodulatory agent. This must be taken advantage of in order to achieve the best possible patient outcomes. To date it is clear that vaccination rates in IBD populations are suboptimal and need to be improved [20-23]; clear international guidelines have now been published to increase physician awareness of this issue and to improve vaccination rates and outcomes in the IBD population [16].
What to do at diagnosis
At the initial diagnosis, or first presentation, of an IBD patient, a thorough history, clinical examination and panel of blood investigations should be performed prior to commencing any immunosuppressive, or biologic, therapy. This should include a history of previous and current infections, including viral [VZV, herpes simplex virus (HSV), human immunodeficiency virus (HIV), hepatitis B virus (HBV) and hepatitis C virus, EBV and CMV], bacterial (TB, pneumococcal and urinary tract infections) and fungal infections. A detailed vaccination and travel history is also crucial in further determining what vaccinations need to be recommended, boostered or checked. Figure 1 summarizes the patient pre-IBD therapy work-up and Table 1 summarizes the vaccination recommendations based on current guidelines and evidence.
Once a patient is on an immunocompromising medication, only inactivated vaccinations are recommended; guidelines suggest these for immunocompromised patients as they do not carry an increased risk of infectious complications [24]. Live attenuated vaccinations need to be avoided in these patients (Table 1), as there is a risk that the administration of live vaccines to immunocompromised persons may result in adverse events, or vaccine-related disease, due to unchecked replication of the vaccine virus or bacteria. This is particularly noted for the measles-, mumps-, rubella- [25,26] and VZV-containing vaccines [27] and for Bacille Calmette-Guérin (BCG) vaccine [28,29]. The risk of disease, however, varies for different vaccines and for different individuals, so caution is required in the use of vaccination in the setting of immunocompromise. In significantly immunocompromised persons, the use of almost all live vaccines is contraindicated.
The live attenuated vaccinations include yellow fever, oral polio, BCG, measles-mumps-rubella, typhoid Ty21a, VZV, live attenuated influenza virus and herpes zoster (Centers for Disease Control, 2009). Ideally, a patient should not have been receiving an immunomodulating medication for at least 3 mo prior to vaccination, and in the case of steroids, the patient should have avoided use for at least a month. If a live vaccine must be given to an IBD patient, the recommencement of an immunomodulatory medication should be withheld for at least 3 wk [16].
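The timing rules in this paragraph lend themselves to a simple pre-vaccination check. The sketch below is a hypothetical helper, not a validated clinical tool; it simply restates the 3-month, 1-month and 3-week windows given above.

```python
def live_vaccine_timing_ok(months_off_immunomodulator: float,
                           months_off_steroids: float) -> bool:
    """Minimum washout before giving a live vaccine, per the text:
    >= 3 months off an immunomodulator and >= 1 month off steroids."""
    return months_off_immunomodulator >= 3 and months_off_steroids >= 1

def can_restart_immunomodulator(weeks_since_live_vaccine: float) -> bool:
    """Immunomodulatory therapy should be withheld for at least
    3 weeks after a live vaccine is given [16]."""
    return weeks_since_live_vaccine >= 3
```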
RECOMMENDED VACCINATIONS - INACTIVE VACCINES
HBV vaccination
IBD patients who have less than 10 IU/L of hepatitis B surface antibodies (anti-HBs) should be vaccinated against HBV according to the standard schedule (3 doses at 0, 1 and 6 mo), regardless of whether they are immunosuppressed or not. When HBV vaccines are administered to a young healthy population, there is a > 95% protective seroconversion rate [30-32]. Yet studies in IBD populations have revealed much lower rates of detectable anti-HBs post vaccination (33%-36%) [20,33], which could be attributable not only to an older age group [33] but also to the use of biological therapy [18,34]. The use of a thiopurine medication, however, appears to have no negative impact on the efficacy of HBV vaccination, but this may need further investigation [18].
Influenza virus vaccination
Annual vaccination against influenza is recommended in IBD patients from the time of diagnosis [16]. Immunosuppressed patients have a higher risk of complications secondary to the influenza virus, with greater associated morbidity and mortality [38]. The inactivated trivalent influenza vaccine comprises two type A subtypes (H1N1 and H3N2) as well as a type B subtype. Studies of the vaccine in the immunosuppressed population, mostly in paediatric IBD patients, have demonstrated mixed efficacy, with poor seroprotection results [39-42]. However, more recent data in adult IBD patients have demonstrated adequate seroprotection rates without exacerbation of intestinal disease [43]. Regardless of these findings, in most cases the immune response is adequate to warrant ongoing annual vaccination.
Pneumococcal vaccination
Streptococcus pneumoniae is the most common bacterium responsible for pneumonia and sepsis, and IBD patients are at increased risk of invasive pneumococcal sepsis [17,20]. Vaccination with the 23-valent strain is thus recommended every 5 years in the IBD population [16]. Again, as seen with other vaccines, the effectiveness of this vaccine is diminished in patients on immunomodulating therapies, especially in combination, and it should therefore, ideally, be administered prior to commencement of such treatments [44]. Considering that the vaccine comprises 23 antigens to mount an immune response to, some degree of protection can be achieved and it is, therefore, considered worthwhile.
Due to the significantly lower response rates to standard HBV vaccination in IBD patients, some studies have suggested a modified dosing regimen that doubles the standard antigen dose, given at 0, 1 and 2 mo. This was noted to result in 60% of IBD patients having hepatitis B surface antibody levels > 10 IU/L [18]. A comparison between this and the standard vaccination regimen has been studied in IBD, where 148 patients were vaccinated with either the standard or double-dose protocol, with an anti-HBs of > 10 IU/L considered a successful response [35]. The seroconversion rate in the standard protocol group was 41%, compared to 75% in the double-dose protocol group. The advantage of the double-dosing protocol was seen regardless of the use of immunosuppressive treatments, and it was also noted to achieve higher anti-HBs titres, with levels > 100 IU/L.
Considering the variability of HBV seroconversion, the best time to offer immunization is at the time of diagnosis, prior to commencement of any immunomodulator therapy. Serological testing should then be undertaken after completion of the vaccination schedule (1-3 mo after the last dose) to determine if immunity was conferred [16,31,32]. If the standard protocol fails to achieve seroconversion, an additional vaccine dose can achieve a successful antibody response in 25%-50% of patients, and a complete second three-dose course has been shown to be successful in 40%-100% of patients in non-IBD population studies [31,36].
Debate has also occurred around the ideal anti-HBs titre that should be reached post-vaccination in the IBD population. Post-vaccination anti-HBs titres of > 10 IU/L are considered to confer protection against infection in healthy subjects; this protection is long-term and relies on immune memory. In the immunocompromised patient, however, protection may be primarily reliant on the amount of circulating antibody rather than on immune memory. Thus, titres > 100 IU/L are now considered in the United Kingdom as the new cut-off point for vaccination to be regarded as successful in the immunocompromised patient [32,37]. If these titres are not achieved after the 3-dose schedule, a 4th dose is then administered, or the full 3-dose series is repeated.
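Pulling together the anti-HBs thresholds from the last few paragraphs, a hedged sketch of the decision logic might look as follows. The 10 IU/L and 100 IU/L cut-offs, the 0/1/6-month schedule, the 1-3 month serology window and the 4th-dose/repeat-series options come from the text; the function itself and its return strings are illustrative.

```python
def hbv_vaccination_advice(anti_hbs_iu_per_l: float,
                           immunocompromised: bool,
                           completed_doses: int) -> str:
    """Interpret anti-HBs titres as described in the text above."""
    target = 100.0 if immunocompromised else 10.0
    if completed_doses == 0 and anti_hbs_iu_per_l < 10.0:
        return "Vaccinate: standard 3-dose schedule at 0, 1 and 6 months."
    if anti_hbs_iu_per_l >= target:
        return "Seroprotection target reached; no further doses needed now."
    if completed_doses >= 3:
        return "Give a 4th dose, or repeat the full 3-dose series."
    return "Complete the schedule, then recheck titres 1-3 months after the last dose."
```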
Human papilloma virus vaccination
The human papilloma virus (HPV) is the most common sexually transmitted infection [45]. This virus is oncogenic and can lead to cervical dysplasia with progression through to carcinoma [46,47]. While data clearly linking the use of immunosuppression and biological therapies to a heightened risk are lacking, there is a theoretical risk of HPV-associated tumours with prolonged and combination immunosuppressive IBD therapy. Vaccination against HPV in IBD is thus advisable in the appropriate populations (young women aged 12 to 26 years) according to local guidelines [16,48].
Tetanus and diphtheria
It is recommended that the general population receive the tetanus, diphtheria and acellular pertussis vaccine every 10 years. This also holds true for patients with IBD. If the vaccination history is dubious, then this should occur early within the IBD patient's course of treatment [16,49].
Measles-mumps-rubella vaccination
Childhood immunizations against measles, mumps and rubella should be included in the initial history taking of the new IBD patient. In most developed countries there is a low risk of acquiring these infections as an adult due to herd immunity [50]. The evidence to support administration of the combined vaccine prior to the institution of immunosuppressive therapy is also lacking, and thus this vaccination is not currently recommended by the ECCO guidelines.
Varicella vaccination
If patients with IBD have no history of varicella infection in childhood, serology should be checked and vaccination considered. Varicella infection in adults is more severe than in children, can be fatal, and is particularly severe if the patient is immunocompromised [51].
Unfortunately, this vaccination is a live vaccine, and thus IBD patients who are varicella naïve and are already immunosuppressed should not receive it. If, however, the patient is known not to be immune to varicella prior to IBD therapy, a 2-dose schedule should be given at least 3 wk prior to commencing an immunosuppressive medication [16]. Careful consideration of patients who might be at greater risk of varicella infection, such as children, teachers or health care workers, should guide the clinician in this decision.
CHEMOPROPHYLAXIS
Antibiotic prophylaxis has been a commonly used strategy in immunosuppressed patients to prevent opportunistic infections, and the best example of this in IBD is for suspected latent, or active, TB. The TNF-α agents should be avoided in patients suspected of having latent, or active, TB until treatment for TB has been commenced and has been in effect for at least 4 wk, in order to avoid reactivation of TB, or according to local guidelines [52-55]. Prophylaxis for Pneumocystis jiroveci with trimethoprim-sulphamethoxazole [12] should also be considered in patients on combination immunomodulatory regimens, usually when they are receiving a combination of 3 agents that includes steroids [12,16,56], or in patients with low lymphocyte counts (< 600/mL) [57]. Alternative agents are aerosolized pentamidine, dapsone and atovaquone [58]. Data are lacking in this area, and prophylaxis should be considered closely by the clinician on a case-by-case basis.
HSV
IBD patients with frequent and/or severe recurrent HSV disease can be given oral anti-viral therapy to control these infections [16]. Considering most infections with HSV are mild and self-limiting, chemoprophylaxis is not recommended in IBD patients commencing immunomodulators. If HSV infection, however, disseminates during immunosuppressive therapy, then treatment with high-dose antivirals and cessation of immunosuppressors is recommended [16].
HBV infection
HBV is a very common infection worldwide and is well known to reactivate in patients receiving immunosuppressive medications. This can result in significant morbidity and mortality, ranging from liver function test derangement through to fulminant hepatic failure and death, unless anti-viral prophylaxis is given. This treatment strategy for preventing HBV flares is well established in patients with HBV-HIV co-infection and in chronic HBV-infected patients receiving systemic chemotherapy [59], and is becoming increasingly important in IBD patients, particularly those on combination immunomodulatory therapy and on biologics [60]. There have been several case reports of fatal HBV flares in patients with IBD on immunosuppressant drugs [61-64], drawing concerns that the TNFα drugs may be involved in regulating HBV replication [65]. Patients who test positive for HBsAg should go onto anti-viral prophylaxis prior to commencing any immunosuppressive therapy, regardless of whether they have a detectable DNA viral load or not. The most recent guidelines suggest treatment with either entecavir or tenofovir over lamivudine, due to fewer issues with developing viral resistance to these agents; however, most studies to date have focused on lamivudine prophylaxis [59].
HIV infection
Prior to highly active anti-retroviral therapy (HAART), immunosuppressive agents, especially the anti-TNF drugs, were contraindicated in HIV-infected IBD patients. Now that viral replication can be controlled and immune reconstitution achieved with the use of HAART, both immunosuppressive and biologic agents can be used to treat IBD in patients who have a CD4+ T lymphocyte count > 500/mL [66,67]. In those patients who are not on HAART but require immunosuppressive or biologic agents, initiation of HAART should take priority, especially if CD4+ T cell counts are < 500/mL.
EDUCATION
Patients need to be educated on how to recognize early symptoms of an opportunistic infection and to act quickly to get the required treatment if they are immunocompromised. Fever tends to be the most reliable, and sometimes the only, symptom heralding the development of an opportunistic infection [68], and IBD patients should always seek medical advice and/or review should they experience this, especially in combination with other symptoms and the use of immunomodulator therapies. In these circumstances, a thorough history, examination and septic work-up should be performed by the clinician to help isolate the source of infection and guide therapy. Of course, fever can also be a sign of a flare, and this should also be considered.
If the suspicion of an infective cause of a fever in the setting of digestive symptoms is high, then vigilance to exclude infection should be the priority, rather than escalation of IBD therapy. Stool cultures that also examine for Clostridium difficile (C. difficile) toxin and ova, cysts and parasites should be performed. It must also be noted that a single stool culture may only exclude 66% of infections. Multiple stool cultures are thus recommended, particularly for excluding C. difficile infection [69]. C. difficile is an increasing problem in immunocompromised patients, and current estimates suggest that approximately 10% of IBD patients will develop symptomatic C. difficile infection at some point during their lifetime [69]. This is important as it can lead to higher rates of colectomy and mortality.
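As a rough illustration of why repeat sampling is advised: if a single culture excludes about 66% of infections and samples are treated as independent (a simplifying assumption; real-world sensitivities are correlated), the chance of missing an infection falls quickly with repeat cultures.

```python
# Cumulative detection probability for k stool cultures, assuming each
# culture independently detects ~66% of infections (a simplification).
single_sensitivity = 0.66
for k in range(1, 4):
    detected = 1 - (1 - single_sensitivity) ** k
    print(f"{k} culture(s): ~{detected:.0%} of infections detected")
# 1 culture: ~66%; 2 cultures: ~88%; 3 cultures: ~96%
```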
If stool sample results are negative, an urgent colonoscopy or flexible sigmoidoscopy with colonic biopsies should be considered to assess for CMV colitis. This diagnosis is made on histological examination of biopsies taken at the interface of ulcers. Serum CMV PCR can also be performed but is not specific for active CMV disease [70].
CONCLUSION
In an era of increasing use of immunosuppressive medications in IBD, for longer durations and with advocacy of combination therapy, patients and their doctors need to be more vigilant about the prevention and detection of opportunistic infections. IBD patients are at the same risk of vaccine-preventable illness as the general population. As IBD therapy can affect vaccine efficacy, vaccination should be considered early in the course of disease, ideally prior to the commencement of any immunocompromising medication. The challenge for doctors is to balance the medical management of IBD, knowing the risks of individual therapies, while recognizing that prevention of opportunistic infections is of equal importance. This usually requires the best use of one of the most precious commodities a doctor has with their patient: time.
Table 1. Vaccines recommended in immunocompromised inflammatory bowel disease patients (columns: Infectious disease, Vaccine type, Recommendation).
Vaa3D-x for cross-platform teravoxel-scale immersive exploration of multidimensional image data
Abstract
Summary: Vaa3D is a software package that has been widely used to visualize and analyze multidimensional microscopic images in a number of cutting-edge bioimage informatics applications. However, due to many recent updates of both software development environments and operating systems, it was highly requested to maintain Vaa3D and disseminate it on the latest operating systems. In addition, there has never been a showcase of how to use Vaa3D's cross-platform visualization and immersive exploration functions for multidimensional and teravoxel-scale images. Here, we introduce a newly developed version of the software, called Vaa3D-x, to address all the above issues.
Availability and implementation: Vaa3D-x is released in both binary and open-source form, available at vaa3d.org and GitHub (https://github.com/Vaa3D).
Supplementary information: Supplementary data are available at Bioinformatics online.
Introduction
Current high-throughput microscopy techniques enable the generation of massive amounts of multidimensional imaging data, often at the teravoxel or even petavoxel scale. As a result, there has been a very strong demand for tools to fulfill the emerging need of handling and analyzing such massive image datasets. In the open-source world, Vaa3D has for over a decade been a widely adopted platform software package for multidimensional visualization and exploration (interaction, annotation, processing, etc.) of very large multidimensional volumes (Peng et al., 2010; 2014; Wang et al., 2019). Vaa3D is cross-platform, supports real-time 3D visualization and analysis of images of potentially unlimited size (Vaa3D-terafly) and has almost 500 plugins for image acquisition, data management, image processing, data analysis and pipelining. Compared with other bioimage analysis tools (Jeong et al., 2010; Long et al., 2012; Pietzsch et al., 2015; Schroeder et al., 2021), Vaa3D stands out with its intrinsic design to handle large, hierarchically organized multidimensional data volumes and associated surface objects. These features make Vaa3D a natural choice in many large-scale studies that involve hundreds or thousands of multidimensional images.
Vaa3D was originally developed in C++ based on Qt4. Due to multiple recent updates of Qt libraries and operating systems (OS) that are no longer compatible with each other, there is a strong need to re-develop several core parts of Vaa3D so that it can be more easily disseminated on the latest operating systems. On the other hand, as many Vaa3D users are on the Windows OS, for which the previous Vaa3D software was built mostly with the now-obsolete Visual Studio 2013 compiler, the key building scripts of Vaa3D also need to be upgraded to be OS-independent to simplify maintenance.
In this work, we developed Vaa3D-x as a comprehensive solution that not only updates the development environment so that the software can be maintained and disseminated more easily, but also allows cross-platform users to visualize and annotate teravoxel-scale images using both hierarchical image annotation and immersive virtual reality. We believe these features can help a broad user group tackle their big imaging data more efficiently.
Application
We applied Vaa3D-x to quantitative visualization and analysis of teravoxel-scale multidimensional images on all major OS platforms, including Windows, Linux and Mac. We used the TeraFly module of Vaa3D-x for real-time hierarchical visualization of whole-brain imaging data (Fig. 1), which comprises several dozen teravoxels. In addition, we also upgraded the TeraVR module, which was the first teravoxel-scale immersive visualization tool in the field. Currently, TeraVR does not run on Mac due to Apple's limited support for specific virtual reality hardware. We also upgraded more than 100 plugins in Vaa3D-x, specifically for image processing tasks such as image filtering, segmentation and registration, as well as neuron morphology tracing and analysis. We also performed tests to illustrate the efficiency and robustness of Vaa3D-x (Supplementary Fig. S2) and devised a thorough functional testing plan for Vaa3D-x across multiple platforms and configurations to increase the confidence level in the performance of the software (Supplementary Tables S2-S4). Figure 1 shows a typical workflow for using Vaa3D-x in the application of neuron tracing from whole-brain images where neurons are labeled fluorescently. In this example, one Vaa3D-x plugin for automatic tracing, APP2, is used to quickly produce a 3D reconstruction of a neuron, followed by manual curation to annotate the neuron using the TeraFly module. Further, the TeraVR module can be used to annotate and correct reconstruction errors in the 3D morphology of neurons in virtual reality space. Hundreds of neuron dendrites can also be visualized all together when needed (Fig. 1b).
Method
We unified the compiler (g++) across all platforms and removed the outdated compiler (Visual Studio 2013) on which Vaa3D relied. For the development system, we used Qt6 and abandoned Qt4, which caused compilation conflicts on the Mac OS system. Moreover, all of Vaa3D's external dependency libraries were recompiled and stored in advance according to the target system, which transformed the compilation process from script compilation to one-click compilation relying on Qt Creator (Supplementary Fig. S1). The OpenVR library was also successfully deployed on Windows and Ubuntu 20, which means that TeraVR can bring immersive annotation and analysis to users on those platforms. We discarded the original SDL window and instead embraced SteamVR with its own viewport.
Vaa3D-x systematically optimized the source code and implemented new interfaces in place of functions deprecated by Qt itself. Meanwhile, more C++17 features were applied in TeraFly to keep the code modern, and conflicts were all amended to make the previous functions workable and compatible in the new Qt6 environment. Many new features were also added.
Conclusion
Compared to the old version of Vaa3D, the unique differences of Vaa3D-x are as follows: the key building scripts of the software have been upgraded, the outdated Visual Studio 2013 compiler has been retired, and the development system has been upgraded from Qt4 to Qt6. These changes made Vaa3D-x OS-independent and more easily disseminated on the latest operating systems. The upgraded TeraFly and TeraVR modules in Vaa3D-x allow cross-platform users to visualize and immersively explore multidimensional and teravoxel-scale images using both hierarchical image annotation and immersive virtual reality. The compilation of Vaa3D-x has also been simplified, which can help a broader user group.
"year": 2023,
"sha1": "0c324dacea881781b487adf7f4d10a34f598da0a",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "9cdff6dbc227ad50857b792f74b4555a29c63bce",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
A Case of Classic Raymond Syndrome
Classic Raymond syndrome consists of ipsilateral abducens impairment, contralateral central facial paresis, and contralateral hemiparesis. However, subsequent clinical observations have disputed the presentation of facial involvement. To validate this entity, we present a case of classic Raymond syndrome with contralateral facial paresis. A 50-year-old man experienced acute onset of horizontal diplopia, left mouth drooling and left-sided weakness. Neurological examination showed right abducens nerve palsy, left-sided paresis of the lower part of the face and limbs, and left hyperreflexia. A brain MRI showed a subacute infarct in the right mid-pons. The findings were consistent with those of classic Raymond syndrome. To date, only a few cases of Raymond syndrome, commonly without facial involvement, have been reported. Our case is a validation of classic Raymond syndrome with contralateral facial paresis. We propose the concept of two types of Raymond syndrome: (1) the classic type, which may be produced by a lesion in the mid-pons involving the ipsilateral abducens fascicle and undecussated corticofacial and corticospinal fibers; and (2) the common type, which may be produced by a lesion involving the ipsilateral abducens fascicle and undecussated corticospinal fibers but sparing the corticofacial fibers.
Introduction
Classic Raymond syndrome, named after the French neurologist Fulgence Raymond, consists of ipsilateral abducens impairment, contralateral central facial paresis, and contralateral hemiparesis [1]. However, subsequent clinical observations have disputed the presentation of facial involvement [2]. To validate this entity, we present a case of classic Raymond syndrome with contralateral facial paresis.
Case Report
A 50-year-old man with hypertension, congestive heart failure, and polysubstance abuse (cocaine, cigarettes, alcohol, and marijuana) presented with a three-day history of acute-onset horizontal diplopia, left mouth drooling, and left-sided weakness. On examination he had right abducens nerve palsy, left-sided central paresis of the lower part of the face and limbs, and left hyperreflexia. Pupils were equal, round, and reactive to light and accommodation. He did not exhibit ptosis.
There was no muscle tenderness. Sensation was normal and intact. The cerebellar coordination exam was normal on the right but limited on the left due to weakness. The findings were consistent with those of classic Raymond syndrome [1] with facial nerve involvement. A brain MRI at 5 days after the onset of symptoms showed a subacute infarct in the right mid-pons (Figures 1(a)-1(d)). The patient's symptoms improved significantly within 3 days of anticoagulant therapy.
Discussion
Dr. Raymond first described a syphilitic woman with left abducens impairment, right central facial paresis, and right hemiparesis in 1896 [1]. Raymond hypothesized that a lesion in the lower medial pons damaged the abducens nerve and the nondecussated corticofacial and pyramidal fibers, but spared the more lateral facial nerve. However, after reviewing Raymond's original description, Wolfe disagreed with the hypothesis, citing the patient's development of a cerebral right hemiplegia, aphasia, and difficulty recognizing her husband's face [2], and arguing that Raymond's explanation for the findings was unlikely; thus, not even Raymond's patient had "Raymond syndrome." Subsequent clinical observations demonstrated that the Raymond syndrome commonly seen in clinic was without contralateral facial involvement [3-5]. However, Sheth and colleagues recently reported a 55-year-old woman suffering a sudden onset of right-sided weakness and diplopia, with right central facial paresis and left abducens palsy, due to a lacunar infarct in the base of the left medial caudal pons on neuroimaging studies [6]. Their observations were consistent with those of classic Raymond syndrome. Supportively, we present an additional case of classic Raymond syndrome with contralateral facial paresis.
Raymond syndrome is an extremely rare neurologic entity. Its localization involves a restricted but sophisticated neural network residing in the medial lower pons among many other nuclei and neural fibers. An early neuroanatomic study demonstrated that the facial decussation occurs in the pons at the level of the facial nuclei [7], which provides evidence supporting the notion that a lesion in the basis of the medial caudal pons could produce Raymond syndrome [7,8]. It has been suggested that the clinical manifestation of simultaneous ipsilateral abducens nerve palsy with contralateral central facial paresis seen in classic Raymond syndrome results from a lesion in the pons involving the corticofacial decussation at the level of the abducens nerve (Figures 1(e) and 1(f)) [6,7], while the commonly seen Raymond syndrome may occur if the corticofacial tract is spared (Figures 1(g) and 1(h)) [3-5]. Notably, a lesion in the more dorsal area may produce an isolated abducens nerve palsy [9], while a lesion in the ventral caudal pons may produce Millard-Gubler syndrome, consisting of both ipsilateral abducens and facial paralysis and contralateral hemiplegia [8,10].
Innovations in technology have dramatically expanded our knowledge of functional and neuroanatomical structures. Urban and colleagues used transcranial magnetic stimulation to study the course of corticofacial projections in the human brainstem in patients with and without central facial paresis due to unifocal ischemic lesions of the brainstem [11]. In correlation with brain MRI, they identified that in some patients corticofacial fibers may loop down into the ventral part of the upper medulla, cross the midline, and ascend in the dorsolateral medullary region to the facial nucleus. Their findings provide additional evidence suggesting that a contralateral central facial paresis may occur due to a unifocal lesion at the pontine base [11]. Interestingly, the corticofacial projection has also been identified in the paramedial lemniscus as an aberrant pyramidal tract in the pons through the upper medulla [12].
To date, only a few cases of Raymond syndrome, commonly without facial involvement, have been reported [3-5]. To our knowledge, the current case, with facial involvement, is the second validation of classic Raymond syndrome after an extensive MEDLINE search [6]. We would, therefore, propose the concept of two types of Raymond syndrome: (1) the classic type, which may be produced by a lesion in the mid-pons involving the ipsilateral abducens fascicle and the non-decussated corticofacial and corticospinal fibers (Figures 1(e) and 1(f)); and (2) the common type, which may be produced by a lesion involving the ipsilateral abducens fascicle and non-decussated corticospinal fibers while sparing the corticofacial fibers (Figures 1(g) and 1(h)).
"year": 2012,
"sha1": "2ce5e7c5306bc3ab02241eaee3b4ca6af89c8d1a",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/crinm/2012/583123.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5d23ff7f895c29803ac685bccdbb41191d7b00d0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Plastic-scale-model assembly of ultrathin film MEMS piezoresistive strain sensor with conventional vacuum-suction chip mounter
We developed a plastic-scale-model assembly of an ultrathin film piezoresistive microelectromechanical systems (MEMS) strain sensor with a conventional vacuum-suction chip mounter for application to flexible and wearable strain sensors. A plastic-scale-model MEMS chip consists of a 5-μm ultrathin piezoresistive strain sensor film, ultrathin disconnection parts, and a thick outer frame. The chip mounter applies pressure to the ultrathin piezoresistive strain sensor film and cuts the disconnection parts to separate the sensor film from the outer frame. The sensor film is then picked up and placed on the desired area of a flexible substrate. To cut off and pick up the sensor film in the same manner as with a plastic scale model, the design of the sensor film and disconnection parts of the MEMS chips was optimized through numerical simulation and chip-mounting experiments. The success rate of the 5-μm ultrathin sensor film mounting increased with a decreasing number and width of the disconnection parts. For a 5-μm-thick 1 × 5 mm² sensor film, 4 disconnection parts of 20 μm in width achieved a 100% success rate. The fabricated ultrathin MEMS piezoresistive strain sensor exhibited a gauge factor of 100 and high flexibility, withstanding 0.37 [1/mm] bending curvature. Our plastic-scale-model assembly with a conventional vacuum-suction chip mounter will contribute to more practical manufacturing of ultrathin MEMS sensors.
etched silicon dioxide, and backside silicon substrate. In this structure, the ultrathin silicon film weakly adheres to the backside silicon substrate via an etched silicon dioxide layer. The ultrathin-MEMS-film-transfer method involves a polydimethylsiloxane (PDMS) stamp because PDMS is adhesive and soft [6,7,10]. After the PDMS stamp adheres to the ultrathin MEMS film and breaks the etched silicon dioxide layer to release the ultrathin MEMS film from the backside silicon substrate, the MEMS film is moved to the flexible substrate. However, PDMS-based ultrathin-MEMS-film transfer is not compatible with conventional chip mounters, since such mounters use a vacuum-suction chip-mounting head. To develop an ultrathin-MEMS-film strain sensor assembly for conventional semiconductor factories, we developed a plastic-scale-model assembly for ultrathin-MEMS-film strain sensors using a conventional vacuum-suction chip mounter [3-5,12]. With our plastic-scale-model assembly, we fabricated a plastic-scale-model MEMS chip, which utilizes a silicon-on-insulator (SOI) wafer and consists of ultrathin MEMS strain sensor film, disconnection parts, and an outer frame; the ultrathin MEMS strain sensor film is cut from the outer frame by breaking the disconnection parts with the conventional vacuum-suction chip mounter. Figure 1 shows our plastic-scale-model assembly for ultrathin film MEMS strain sensors. First, we fabricate a plastic-scale-model MEMS chip which consists of a 5-μm ultrathin piezoresistive strain sensor film, ultrathin disconnection parts, and a thick outer frame. After the vacuum-suction chip-mounting head pushes on the ultrathin MEMS strain sensor film and breaks the disconnection parts, the MEMS strain sensor film is picked up by vacuum suction and moved to the desired spot on a flexible plastic substrate. Our plastic-scale-model assembly with the conventional vacuum-suction chip mounter has the advantages of low introduction cost of fabrication tools and a stable adhesive force for picking up an ultrathin MEMS strain sensor film, because the PDMS-stamp tools are not conventional and PDMS gradually loses its adhesive force. Because the conventional vacuum-suction chip mounter is designed to pick up mechanically strong MEMS bare chips of more than 300 μm thickness, handling mechanically weak ultrathin MEMS strain sensor film is difficult and tends to break the sensor film. Our plastic-scale-model assembly also requires not only picking up the ultrathin MEMS strain sensor film but also separating the disconnection parts at the same time. When pressure to separate the disconnection parts is applied to the sensor film, high stress develops in the ultrathin MEMS strain sensor film, resulting in breakage of the sensor film. We thus previously reported [3-5] a low success rate of chip mounting.
To solve the problem of ultrathin-sensor separation, we made a mechanical model of ultrathin-MEMS-sensor-film separation and optimized the design of the ultrathin MEMS strain sensor film and disconnection parts to achieve a high yield of ultrathin MEMS sensor film mounting with a conventional vacuum-suction chip mounter. Specifically, we analyzed the pressure and strain distribution on an ultrathin MEMS sensor film and disconnection parts under separation pressure. We made a simple theoretical model of a plastic-scale-model MEMS chip to find the key parameters of the disconnection-part geometry. We then conducted finite element model (FEM) simulations of the plastic-scale-model MEMS chip with different geometry designs in which the key parameters of the disconnection parts were changed. Then, we conducted ultrathin MEMS strain sensor film mounting experiments with a conventional vacuum-suction chip mounter after fabricating plastic-scale-model MEMS chips with the same geometries. The optimal design of the plastic-scale-model MEMS chip for plastic-scale-model assembly was finally determined. To measure strain, the chip had a piezoresistor, which was a phosphorus-ion-implanted n-type silicon film that changed its electric resistance according to the applied strain. We also evaluated the piezoresistive strain sensitivity of the fabricated ultrathin MEMS strain sensor.
Results
Structure of plastic-scale-model MEMS chip and fabrication process. The structure of the plastic-scale-model MEMS chip is shown in Fig. 1. The ultrathin MEMS strain sensor film is only 5 μm thick, while conventional piezoresistive MEMS strain sensors are more than 100 μm thick. The strain-sensing material is a piezoresistor formed on the surface of the 5-μm-thick device silicon layer. Phosphorus ions are implanted into the n-doped silicon layer to form a 100-nm-thick piezoresistor. The piezoresistive effect of silicon exhibits a higher electric resistance change under applied strain than conventional metals [8]. The plastic-scale-model MEMS chip is made of an SOI wafer, and the thickness of the ultrathin MEMS strain sensor film is determined by the 5-μm-thick device silicon layer of the SOI wafer. The substrate is a conventional flexible printed circuit board consisting of polyimide (PI) film and a copper (Cu) electrode. The ultrathin MEMS strain sensor film is attached to the PI film with glue, and the sensor film and Cu electrode on the PI film are connected using stretchable silver paste. Our plastic-scale-model assembly consists of the following three parts: (1) a MEMS fabrication process to fabricate the plastic-scale-model MEMS chip containing the ultrathin MEMS strain sensor film, disconnection parts, and an outer frame, a structure similar to a plastic scale model before assembly; (2) separation of the ultrathin MEMS strain sensor film from the outer frame and mounting of the sensor on the substrate with a conventional vacuum-suction chip mounter, which is similar to the separation of plastic model parts from their outer frame; and (3) picking up of the ultrathin MEMS sensor film by vacuum suction and placing the sensor film onto the flexible substrate.
Analytic model of ultrathin MEMS strain sensor film mounting with conventional vacuum-suction chip mounter and key parameters.
In previous studies [3-5], the ultrathin MEMS sensor film broke when the sensor film was separated from the outer frame, since pressure was applied not only to the disconnection parts of the MEMS chip but also to the sensor film itself. To avoid breaking the sensor film, the pressure and strain distribution on the sensor film and disconnection parts under separation pressure were analyzed, and a suitable design of the plastic-scale-model MEMS chip for sensor film mounting was investigated. Figure 2 shows a simplified structural model of a plastic-scale-model MEMS chip. The cross-sectional view of the MEMS chip is an ultrathin MEMS strain sensor film with several fixing supports, which are the disconnection parts. A uniformly distributed load is applied to the sensor film by the chip mounter head, and shear stress is concentrated on the disconnection parts. The load applied from the chip-mounter head induces a moment around the disconnection parts, which breaks the sensor film. The shear stress on the disconnection parts and the bending stress on the sensor film around the disconnection parts are modeled by changing the number and width of the disconnection parts and the thickness, width, and length of the sensor film. The parameters of the disconnection parts and sensor film are as follows.
Load on the sensor film: f
Sensor film thickness: t
Sensor film length: l
Sensor film width: w
Distance between disconnection parts: l1
Number of disconnection parts: n
Width of disconnection parts: d
When the uniformly distributed load is applied to the sensor film, the shear stress per disconnection part is τ (Equation (1)). Considering the sensor film between disconnection parts, the bending moment around the disconnection parts is M (Equation (2)). The stress on the sensor film at the nearest point of the disconnection parts is ρ (Equation (3)), where I is the cross-sectional second moment of area of the sensor film (Equation (4)). Combining Equations (3) and (4) gives Equation (5), and combining Equations (2) and (5) gives Equation (6). The shear stress when the sensor film is separated from the outer frame silicon is τ = τ_B, where τ_B is the rupture stress of single-crystal silicon, a constant value defined by the material properties of silicon. The maximum stress on the sensor film ρ is then defined as ρ_M, and the rupture stress of silicon is ρ_B. If the sensor film is to be separated from the outer frame without breaking the sensor film, the inequation ρ_M < ρ_B (Inequation (8)) should be satisfied.
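The equation bodies themselves did not survive extraction. As a hedged partial reconstruction, consistent with the parameter definitions above but not necessarily matching the authors' exact expressions, the key relations might be written as follows. Here f is taken as the applied pressure (consistent with the 12 kPa value quoted later), N denotes the total number of disconnection parts (referred to as 2n in the discussion), the shear cross-section of each disconnection part is assumed to be d × t, and the exact prefactor of the bending moment M depends on the end conditions of the film span, which the text does not state.

```latex
% Hedged reconstruction; the forms of (1) and (2) are assumptions.
\begin{align}
\tau &= \frac{f\,w\,l}{N\,d\,t}
  && \text{shear stress per disconnection part (1)}\\
M &\propto f\,w\,l_{1}^{2}
  && \text{bending moment between parts (2)}\\
\rho &= \frac{M\,(t/2)}{I}, \qquad I = \frac{w\,t^{3}}{12}
  && \text{bending stress and second moment (3), (4)}\\
\rho &= \frac{6M}{w\,t^{2}}
  && \text{combining (3) and (4) gives (5)}\\
\rho_{M} &< \rho_{B} \quad \text{at } \tau = \tau_{B}
  && \text{separation without film rupture (8)}
\end{align}
```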
In other words, rearranging gives Inequation (9). Therefore, the condition for separating the sensor film without breaking it is that the left-hand side of Inequation (9) should be small. To make this term small, w, l, t, d, l1, and n are the key parameters of the plastic-scale-model MEMS chip geometry. Among these parameters, w and l cannot be changed, because these geometries are defined by the strain sensor application. For separation of the sensor film, d, n, and l1 can be changed, because the geometry of the disconnection parts does not affect the sensor film design. Therefore, we changed d and n and analyzed the mechanical model of ultrathin MEMS strain sensor film mounting. Figure 3(a) shows the resultant stress concentration on the disconnection parts to cut the sensor film from the outer frame and the stress distribution on the sensor film. To cut the disconnection parts, the shear stress induced by the chip-mounter head should be concentrated only on the narrowest area of a disconnection part. Figure 3(a) shows that the shear stress concentrated at the narrowest point of a disconnection part to cut the sensor film from the outer frame. Figure 3(c) shows that the shear stress concentration increased when n decreased from 8 to 4. Specifically, when d was 20 μm, the concentrated shear stresses with 4, 6, and 8 disconnection parts were 1.7, 2.5, and 17 GPa, respectively. A decrease in d also increased the shear stress. For 4 disconnection parts, the shear stresses at the narrowest area of each part with d of 100, 80, 60, 40, and 20 μm were 7. …, which shows that the shear stress becomes concentrated by decreasing d. Therefore, a small n and narrow d are suitable for cutting the ultrathin MEMS strain sensor film from the outer frame.
Comparison between numerical and experimental analysis of ultrathin MEMS strain sensor film mounting
On the other hand, to avoid breaking the ultrathin MEMS strain sensor film, we also evaluated the bending moment on the sensor film, as shown in Fig. 3(b,d). Figure 3(b) shows that the bending stress induced by the pressure of the chip mounter head concentrated on the area of the sensor film around the disconnection parts. Figure 3(d) shows the normalized bending stress, which is the ratio of the largest bending stress on the sensor film to the shear stress on the disconnection parts; this normalization is used because the shear stress increases up to the constant silicon rupture stress when the sensor film is cut off. It should be noted that the bending stress on the sensor film needs to be normalized because f in the numerical analysis was assumed to be a constant value of 12 kPa, but the f required for cutting the disconnection parts changes depending on the dimensions of the MEMS chip geometry. Figure 3(d) shows that the bending stress/shear stress ratio of 4 disconnection parts was the smallest, which prevented the sensor film from breaking. For 4 disconnection parts, narrowing d from 100 to 20 μm decreased the ratio to 0.17. For 6 and 8 disconnection parts, the bending stress/shear stress ratio also decreased by narrowing d. Thus, 4 disconnection parts with a small d (< 40 μm) exhibit a small ratio of bending stress to shear stress and are suitable for sensor film separation without breaking the sensor film.
We conducted the experimental analysis by fabricating plastic-scale-model MEMS chips with the same geometrical designs as those used in the numerical analysis, and separating and mounting the sensor films. Figure 4 shows the experimental success rates of sensor film mounting and photographs of the resultant ultrathin MEMS sensor films mounted on the PI films. MEMS chips with 4, 6, and 8 disconnection parts, whose d ranged from 20 to 100 μm, were used; five MEMS chips of each design were tested using a conventional vacuum-suction chip mounter (Hisol Model 400, Hisol Corporation), and the success rate was evaluated. Figure 4 also shows the optimal design of the MEMS chips and the disconnection parts. The optimal 1 × 5 mm² ultrathin MEMS strain sensor film had 4 disconnection parts with d of 20 μm, because its success rate was 100%. By narrowing the d of the 4 disconnection parts from 100 to 20 μm, the success rate increased from 60 to 100%. The success rates of the 6 and 8 disconnection parts were 80% when d was 20 μm, while the rates were 0% when d was 40, 60, 80, or 100 μm. In the photographs of broken ultrathin MEMS strain sensor films with 6 and 8 disconnection parts in Fig. 4, cleavages were found running between the disconnection parts, where the FEM analysis had shown tensile strain high enough to break the sensor film (the circled areas of the FEM results in Fig. 4). Comparing the numerical and experimental analyses, when 2n and d were decreased to 4 and 20 μm, respectively, the success rate of chip mounting increased to 100%, because the shear stress on those disconnection parts was concentrated for ease of separation, and the resultant small load on the sensor film reduced the bending stress on the sensor film, avoiding breakage. Therefore, for the 1 × 5 mm² sensor film, 4 disconnection parts with d of 20 μm is the optimal design for highly successful plastic-scale-model assembly of an ultrathin MEMS strain sensor film. Figure 5(a,b) show a photograph and an SEM image of MEMS sensor films on a PI film under bending, respectively. The sensor film is only 5 μm thick; thus, it is highly flexible. Mechanical durability was investigated by conducting a bending test. Both sides of the sensor were connected to a multimeter (Keithley 2400), and electrical resistances were measured while the 5 sensor films were bent to radii ranging from 50 to 2 mm; the bending curvature was thereby tuned from 0.02 to 0.5 [1/mm], as the curvature is the multiplicative inverse of the bending radius. Figure 5(c) shows the typical relationship between bending curvature and the resultant electric resistance change. We then measured the gauge factor of our piezoresistive MEMS strain sensor film and demonstrated human finger motion sensing with our sensor as a potential wearable application. The PI film with the strain sensor film was attached to a 0.5-mm-thick stainless-steel plate, and small strain was applied to the stainless-steel plate while the electric resistance change was measured. The PI film with the ultrathin MEMS strain sensor film was cut and placed on the stainless-steel plate using glue (Loctite 496). Both sides of the stainless-steel plate were pulled with a force gauge and an automatic moving stage (Aikoh Engineering FTN-3001). The applied force ranged from 2 to 20 N, and the corresponding electric resistance of the MEMS strain sensor was measured with a Wheatstone bridge circuit, a differential amplifier (NF Circuit Block Corporation, model 5307), a low-pass noise filter (NF, model 3611), and a stable electric power supply (NF, model 5394).
The Wheatstone bridge circuit consists of two ultrathin MEMS strain sensor films and two dial electric resistance gauges (Sanhayato, DRB-6). The gain of the differential amplifier was 40, and the cut-off frequency and gain of the active low-pass filter were 10 Hz and 10 times, respectively. The applied strain was calibrated using a commercial metal strain gauge (Kyowa Electronic Instruments Co., Ltd. 632-124). Figure 6(a) shows the relationship between applied strain and electric resistance change. The calculated gauge factor was at most 100, which is 33 times larger than that of a conventional metal strain gauge and the same as that of a semiconductor strain gauge. Therefore, a highly sensitive sensor can be fabricated and assembled with our plastic-scale-model assembly. Finally, we demonstrated human finger motion sensing with our flexible ultrathin piezoresistive strain sensor. Figure 6(b) shows a photograph of the strain sensor attached over the finger joint area of a glove. The graph in Fig. 6(b) shows how the electric resistance changed under human finger bending. Since the strain induced by finger bending is larger than that on the stainless-steel plate, a larger electric resistance change is observed. The advantages of our ultrathin MEMS strain sensor films are that they are thin (5 μm) and highly flexible, while a conventional semiconductor strain gauge is thick (> 100 μm) and rigid.
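The gauge factor quoted here follows the standard definition GF = (ΔR/R)/ε. The short sketch below works through that relation and the readout chain for illustrative values: the 40× differential gain comes from the text, while the excitation voltage, strain value, and the assumption that the two sensing films sit in opposite bridge arms (giving the half-bridge relation Vout ≈ Vex·ΔR/(2R)) are hypothetical.

```python
def gauge_factor(delta_r: float, r0: float, strain: float) -> float:
    """Standard definition: GF = (dR/R) / strain."""
    return (delta_r / r0) / strain

# A 1-ohm change on a 100-ohm film at 100 microstrain gives GF = 100,
# matching the reported sensitivity (resistance values hypothetical).
eps = 100e-6
gf = gauge_factor(delta_r=1.0, r0=100.0, strain=eps)   # -> 100.0

# Readout chain: half-bridge with the two sensor films in opposite arms
# (an assumption), small-signal approximation Vout/Vex ~= dR/(2R).
dr_over_r = gf * eps                  # 1% fractional resistance change
v_ex = 5.0                            # excitation voltage (hypothetical)
v_bridge = v_ex * dr_over_r / 2       # 25 mV at the bridge output
v_out = 40 * v_bridge                 # 40x differential gain (from text)
print(f"GF = {gf:.0f}, dR/R = {dr_over_r:.2%}, Vout = {v_out:.2f} V")
```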
Discussion
We theoretically and experimentally analyzed our plastic-scale-model assembly with a conventional chip mounter for ultrathin MEMS strain sensor film and found the optimal design of plastic-scale-model MEMS chips to achieve a high yield of ultrathin MEMS sensor film mounting on a flexible substrate. To separate a MEMS sensor film and successfully mount it on a flexible substrate, the stress concentration of the applied pressure on the disconnection parts for separation should be large, but the bending force on the sensor film should be small. From the mechanical characteristic model of ultrathin MEMS sensor film mounting, d and 2n were found to be the key parameters for successful sensor film separation. When 2n and d decreased from 8 to 4 and from 100 to 20 μm, respectively, the success rate increased. For the 5-μm-thick 1 × 5 mm² sensor film, 4 disconnection parts with d of 20 μm achieved a 100% success rate. Therefore, decreasing 2n and d is important for highly successful plastic-scale-model MEMS sensor assembly. The successfully fabricated MEMS sensor exhibited a high gauge factor of 100 and high flexibility. Therefore, our assembly with a conventional vacuum-suction chip mounter will lead to highly flexible and sensitive MEMS piezoresistive strain sensors for wearable and flexible physical sensor applications.
Methods
Plastic-scale-model MEMS chip fabrication process. The plastic-scale-model MEMS chip fabrication is as follows. (1) The MEMS fabrication process starts with an SOI wafer made up of a 5-μm-thick device silicon layer, a 1-μm-thick silicon dioxide layer, and a 500-μm-thick handling silicon layer (SOI wafer, KST World Corporation). Phosphorus ions are first implanted into the device silicon layer. After ion implantation, the surface of the device silicon layer is annealed at 950 °C with a lamp-annealing system to form a piezoresistive sensing layer. The phosphorus-ion doping concentration is 10²⁰. (2) Electrode films of Cr and Au are deposited and patterned on the device silicon layer. The device silicon layer is patterned and etched using an inductively coupled plasma reactive ion etching (ICP-RIE) system to define the sensor area and disconnection parts. (3) After the backside silicon is patterned and etched using the ICP-RIE, the silicon dioxide layer is etched using a reactive ion etching (RIE) system to release the sensor area and disconnection parts of the device silicon layer.
Ultrathin MEMS strain sensor film mounting process and wiring. Using a conventional vacuum-suction chip mounter (Hisol Model 400, Hisol Corporation), the chip mounter head separates the disconnection parts and places the sensor film on the desired area of the flexible circuit board. The chip mounter head vacuum-sucks the ultrathin MEMS strain sensor film. Specifically, the position of the ultrathin MEMS strain sensor film is first detected with the microscope camera of the chip mounter, and the chip mounter head is moved above the sensor film. The chip mounter head first moves down without vacuum and breaks the disconnection parts of the MEMS chip. Vacuum suction then starts, and the sensor film is picked up by the chip mounter head. After the chip mounter head moves to the desired position on the flexible substrate, the sensor film is set down slowly. The chip mounter head releases the sensor film by stopping the suction. Then, an adhesive glue, which is stronger than the PDMS adhesive, is used to release the sensor film and fix it on the substrate. Silver paste (PE873, DuPont) is then coated between the Au pads of the sensor film and the Cu electrode on the flexible circuit board.
"year": 2019,
"sha1": "85f93cf7120f8cafac0d230656dc0321f9e03ded",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-019-39364-2.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "85f93cf7120f8cafac0d230656dc0321f9e03ded",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
Choices of chromatographic methods as stability indicating assays for pharmaceutical products: A review
Stability indicating assay describes a technique which is used to analyse the stability of a drug substance or active pharmaceutical ingredient (API) in bulk drug and pharmaceutical products. A stability indicating assay must be properly validated as per ICH guidelines. The important attributes of a stability indicating assay include sensitivity, specificity, accuracy, reliability, reproducibility and robustness. A validated assay is able to measure the concentration changes of the drug substance/API with time and make reliable estimates of the quantity of the degradation impurities. The drug substance is separated and resolved from the impurities. The pros and cons of HPLC, GC, HPTLC, CE and SFC were discussed and reviewed. A stability indicating assay may consist of the combination of chromatographic separation and spectroscopic detection techniques. Hyphenated systems can perform parallel quantitative and qualitative analysis of drug substances and impurities. Examples are HPLC-DAD, HPLC-FL, GC-MS, LC-MS and LC-NMR. The analytes in the samples are separated by the chromatography, while the impurities are chemically characterised by the spectroscopy in the system. In this review, various chromatographic methods which have been employed as stability indicating assays for drug substances and pharmaceutical formulations are systematically reviewed, and the application of hyphenated techniques in impurity characterisation and identification is also discussed with supporting literature.
Introduction
Stability-indicating assays are utilised in the forced degradation analysis of pharmaceutical products and active pharmaceutical ingredients (Blessy et al., 2014). A stability-indicating assay is a procedure that can detect degradation and changes in active pharmaceutical ingredient (API) concentration in pharmaceutical products (Rawat and Pandey, 2015). United States Food and Drug Administration (FDA) guidance documents define a stability indicating method as a validated quantitative analytical procedure that can be used to evaluate the stability of the drug substance (Blessy et al., 2014). It is also a method which can measure changes in drug substance concentration without interference from other substances present, including degradation impurities, excipients and other potential substances (U.S. Department of Health and Human Services, 2000). The International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) guideline Q3B, Impurities in New Drug Products, states that it is mandatory to provide documented evidence to show that the analytical methods are properly validated and that they are suitable for detection and quantification of degradation products and impurities (Guideline, 2006). The validated methods should be reliable, specific and able to demonstrate that the impurities of the new drug substance are separated from the API and other pharmaceutical substances. Various methods have been implemented as stability indicating assays. The common ones include high performance liquid chromatography (HPLC), gas chromatography (GC), high performance thin layer chromatography (HPTLC), capillary electrophoresis (CE) and supercritical fluid chromatography (SFC). Some of these chromatography methods can also be coupled with spectroscopy methods as modern high-end and high-resolution separation and chemical characterisation techniques, e.g. high performance liquid chromatography-diode array detector (HPLC-DAD), high performance liquid chromatography-fluorescence (HPLC-FL), gas chromatography-mass spectrometry (GC-MS), liquid chromatography-mass spectrometry (LC-MS) and liquid chromatography-nuclear magnetic resonance (LC-NMR) spectroscopy. With the chromatography-spectroscopy combination, many degradation impurities have been identified and documented (Figure 1).
Stability indicating assays
There is no single assay or parameter that can profile the stability of all products. The suitability of a method depends on the chemistry and physicochemical properties of the API and of the ingredients in the formulation. Knowledge of the physicochemical properties of the drug substance and the pharmaceutical formulation is therefore extremely crucial. Properties of the targeted substance such as the pKa value, log P, solubility, polarity, volatility and absorption maximum (λmax) must be known (Blessy et al., 2014). These physicochemical properties provide important information for the selection of stability-indicating assays and the parameter settings. For instance, the log P and solubility of the API and formulation are taken into consideration when selecting the mobile phase and sample solvent in HPLC, while the pKa values determine the suitable pH for the mobile phase (Patel Riddhiben et al., 2011; Blessy et al., 2014). Understanding the chemical profile of the API, such as its chemical structure and properties, the degradation pathways, the number of degradants and the optimum conditions for peak separation, is equally important in the development of stability-indicating assays (Jadhav et al., 2012). This essential information can be retrieved from the scientific literature, company drug profiles, spectral libraries and reports (Patel Riddhiben et al., 2011). Many studies have reported the use of various stability-indicating assays in analysing the degradation of APIs and pharmaceutical products. The main objective of this review is to provide an overview of the stability-indicating assays used in forced degradation studies, together with the drugs that have been successfully analysed and whose impurities have been resolved using specific techniques.
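To make the pKa consideration above concrete, here is a minimal sketch (not taken from any cited study; the pKa of 4.5 and the candidate pH values are hypothetical) that uses the Henderson-Hasselbalch relation to estimate how ionised a weakly acidic API would be at candidate mobile-phase pH values:

```python
# Henderson-Hasselbalch sketch: ionised fraction of a monoprotic weak acid.
# pKa = 4.5 is a hypothetical example value, not drawn from any cited study.

def ionised_fraction_weak_acid(pka: float, ph: float) -> float:
    """Fraction present as the anion A-, from pH = pKa + log10([A-]/[HA])."""
    ratio = 10 ** (ph - pka)          # [A-]/[HA]
    return ratio / (1.0 + ratio)

pka = 4.5
for ph in (2.5, 4.5, 6.5):
    print(f"pH {ph}: {100 * ionised_fraction_weak_acid(pka, ph):.1f}% ionised")
# A mobile phase ~2 pH units below the pKa keeps the acid >99% un-ionised,
# which gives stable reversed-phase retention.
```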
High performance liquid chromatography (HPLC)
HPLC is the dominant technique in pharmaceutical analysis. In HPLC, a solid or liquid sample is dissolved in a suitable solvent and carried through a chromatographic column by the mobile phase. The system is simple to operate and versatile, requires minimal sample preparation, and provides high resolution and excellent recovery (Ravisankar et al., 2017; Alsohaimi et al., 2018). The technique is applicable to numerous types of compounds, including those of diverse polarity, molecular mass, volatility and thermal sensitivity (Kumar and Kumar, 2012). Analyte elution can be performed in either isocratic or gradient mode (Table 1). The separation output of HPLC is represented as a chromatogram, in which each analyte in the sample appears as a sharp peak at a characteristic retention time (Kazakevich and Lobrutto, 2007; Raza et al., 2015). HPLC is an extremely useful technique for drug stability evaluation: it is specific, rapid, sensitive and robust (Ravisankar et al., 2017). Useful operational features of HPLC include the choice of detection wavelengths, the adjustable flow rate and the programmable mobile-phase elution profile (Aljerf and AlMasri, 2018).
HPLC can simultaneously detect various analytes in pharmaceutical formulations (Table 1). It has been used extensively as a stability-indicating assay for bulk drugs and drug products, separating drug substances and degradation impurities simultaneously. For instance, Dongala et al. (2019) developed a stability-indicating HPLC assay that separated 14 impurities from an Excedrin tablet containing acetaminophen, aspirin and caffeine via gradient elution; the separation was excellent and the peaks were perfectly resolved in the chromatogram. Optimisation of parameters to achieve good separation of multiple drugs can be guided by response surface methodology, in which retention-time response surfaces are modelled for the three drugs present in the Excedrin tablet; as long as the three surfaces do not intersect, the retention times of the three drugs remain distinct. The capacity factor of a good chromatographic method should be neither too low nor too high. HPLC separation methods are developed by optimising the mobile phase, including the concentration of organic modifier and the pH. The pH of the mobile phase may affect the degree of ionisation of the analytes, the stationary phase and the mobile-phase additives, so the selectivity and analyte retention times change with pH.
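As a hedged illustration of the capacity-factor and resolution criteria mentioned above, the snippet below computes k = (tR − t0)/t0 and Rs = 2(tR2 − tR1)/(w1 + w2) from chromatogram readouts; the retention times and peak widths are invented and are not taken from the Dongala et al. (2019) study.

```python
# Capacity (retention) factor and resolution from chromatogram readouts.
# All times and widths below are illustrative example values.

def capacity_factor(t_r: float, t_0: float) -> float:
    """k = (tR - t0) / t0, where t0 is the column dead time."""
    return (t_r - t_0) / t_0

def resolution(t_r1: float, t_r2: float, w1: float, w2: float) -> float:
    """Rs = 2 (tR2 - tR1) / (w1 + w2), with baseline peak widths w."""
    return 2.0 * (t_r2 - t_r1) / (w1 + w2)

t0 = 1.2                                           # dead time (min)
peaks = [(3.1, 0.20), (3.9, 0.22), (5.6, 0.25)]    # (tR, width) per analyte

for t_r, _ in peaks:
    print(f"tR {t_r} min -> k = {capacity_factor(t_r, t0):.2f}")
for (t1, w1), (t2, w2) in zip(peaks, peaks[1:]):
    # Rs >= 1.5 is the usual criterion for baseline separation
    print(f"Rs({t1}, {t2}) = {resolution(t1, t2, w1, w2):.2f}")
```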
In addition, Araujo et al. (2020) successfully developed and validated a method for the simultaneous determination of enrofloxacin and piroxicam, together with their respective degradation products, in veterinary formulations. The method showed good specificity, precision, accuracy, sensitivity and robustness as per ICH guidelines, making it suitable for routine quality-control analysis. Our own recent work separated up to six degradation impurities of flibanserin in a stability-indicating assay developed using HPLC (Chew et al., 2020).
Gas chromatography (GC)
Gas chromatography (GC) is a method that uses a gas to separate and analyse compounds that can be vaporised without decomposition. To analyse a sample by GC, the sample is dissolved in a solvent before injection into the system and is vaporised before the analytes partition between the stationary and mobile phases. A chemically inert gas, such as helium or nitrogen, carries the analytes through the heated column, where the separation and partition of the analytes take place. GC works similarly to HPLC and thin layer chromatography (TLC), except that it typically has a liquid stationary phase and a gaseous mobile phase.
GC offers high precision, accuracy, sensitivity and resolution in sample analysis and peak separation, and has been used as a stability-indicating assay for numerous pharmaceutical substances and products since the 1980s. Bergh and Lötter (1984) developed a stability-indicating assay for acetaminophen and aspirin, and Subasranjan et al. (2010) developed a validated assay for divalproex sodium in a pharmaceutical formulation. Both studies reported that the stability-indicating assays were able to quantify the standard drugs and to detect and resolve the degradation impurities and other substances or contaminants present in the pharmaceutical matrices. GC has also been used to study the stability of magnesium valproate and other salt forms of valproic acid, with the detection and quantification of the impurities performed as per ICH guidelines (Ambasana et al., 2011). GC analysis is also applicable to non-chromophoric drug substances: it was used to detect memantine hydrochloride and its non-chromophoric impurities in bulk drug and drug products, which were successfully resolved on a GC system (Jadhav et al., 2012). Most stability studies using GC as the analytical technique report that the method is specific, accurate, linear, reproducible, rugged and robust (Subasranjan et al., 2010).
GC is more environmentally friendly than HPLC because it minimises environmental pollution and reduces organic solvent consumption (Subasranjan et al., 2010). However, GC is limited to the analysis of volatile, relatively low-boiling samples (Sojitra et al., 2019). Chemical compounds with molecular weights above 1000 Da are difficult to vaporise because they are rarely volatile (Feng et al., 2019), so the method is more suitable for small molecules. Even chemical species that can be vaporised are unsuitable for GC analysis if they are thermally unstable (Feng et al., 2019). In addition, samples to be analysed by GC must be salt-free and devoid of ions (de Koning et al., 2009).
High performance thin layer chromatography (HPTLC)
High performance thin layer chromatography (HPTLC) is the advanced version of TLC, providing better separation efficiency, and it is suitable for both qualitative and quantitative analysis (Choukaife and Aljerf, 2017). The method is rapid and cheap (Ansari et al., 2005); the results are reproducible, and a large number of samples can be analysed simultaneously with a small amount of mobile phase. Combinations of organic solvents are also applicable as the mobile phase in HPTLC (Table 1). The mobile phases can be mixtures of non-polar and polar organic solvents (Ansari et al., 2005; Dixit et al., 2008; Damle et al., 2009; Bhole et al., 2017), as well as combinations of organic with acidic or alkaline solvents (Makhija and Vavia, 2001; Thoppil et al., 2001; Kulkarni and Amin, 2000; Puthli and Vavia, 2000; Kotiyan and Vavia, 2000; Ali et al., 2007; Prajapati et al., 2017; Padh et al., 2017; Ghode et al., 2019). The method is also suitable for samples that require mobile phases of extreme pH, where the ionisation state of the analytes depends on the pH of the mobile phase (Mohammad and Moheman, 2011). The vast choice of mobile-phase combinations allows simultaneous separation of the analytes in drug samples (Devanand et al., 2011), making the method especially suitable for samples that require mobile-phase combinations not achievable with other analytical methods, especially HPLC (Dhandhukia and Thakker, 2011). The system is also applicable to suspension samples, and produces coloured bands and retention factors for analyte identification (Loescher et al., 2014). Numerous studies have demonstrated the specificity of HPTLC in drug stability analysis. For instance, Ansari et al. (2005) employed HPTLC as the stability-indicating assay for curcumin in a pharmaceutical formulation: the curcumin was spotted on TLC aluminium plates precoated with silica gel 60 F254, and the plates were developed in a chloroform:methanol (9.25:0.75 v/v) solvent system. The peaks of curcumin and its degradants were analysed with a densitometer set at 430 nm (Gupta et al., 1999). The method was selective and showed high precision, specificity and accuracy in the stability study of curcumin. This agrees with Bober (2017), who reported on the stability of diphenhydramine using HPTLC equipped with a densitometer: a reduction in the diphenhydramine content of the spot and the appearance of degradation impurity peaks were noticed in the densitograms upon exposure to thermal and light stresses.
However, HPTLC has several limitations. The separation bed is short, with a limited developing distance and lower plate efficiency (Kamboj and Saluja, 2017). This may result in ineffective separation when the retention factors (Rf values) and polarities of the analytes are similar, in which case the spots and peaks of the analytes overlap. Sample derivatisation may also be needed prior to detection if the analytes are not detectable at 254 nm, at 366 nm or under white light (Loescher et al., 2014).
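The overlap problem described above can be pictured with the retention factor Rf = (distance migrated by the spot)/(distance migrated by the solvent front); the sketch below, with made-up migration distances, flags spot pairs whose Rf values lie too close together:

```python
# Illustrative HPTLC Rf calculation and overlap check; the migration
# distances are invented example values.

def rf(analyte_distance_mm: float, solvent_front_mm: float) -> float:
    """Rf = spot migration distance / solvent front migration distance."""
    return analyte_distance_mm / solvent_front_mm

front = 80.0   # solvent front migration (mm)
spots = {"drug": 42.0, "degradant A": 44.0, "degradant B": 63.0}

values = {name: rf(d, front) for name, d in spots.items()}
print(values)

# Flag spot pairs whose Rf values differ by less than ~0.05: these are
# likely to overlap on the short HPTLC separation bed.
names = list(values)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        if abs(values[a] - values[b]) < 0.05:
            print(f"possible overlap: {a} (Rf {values[a]:.2f}) "
                  f"vs {b} (Rf {values[b]:.2f})")
```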
Capillary electrophoresis (CE)
Capillary electrophoresis (CE) is a high-performing separation method carried out in narrow-bore capillaries under the influence of an external electric field (Al Azzam et al., 2011; Anastos et al., 2005). The method is applicable to a wide range of substances, including inorganic ions, chiral molecules, biopolymers, and biotechnological and clinical samples (El Deeb et al., 2013; Anastos et al., 2005). Separation in CE is selective, highly precise and efficient; it can handle complex mixtures and requires only small amounts of sample (in the microliter range or below) and reagents (El Deeb et al., 2013; Gordon et al., 1988; Anastos et al., 2005; Currell, 2008). CE has several advantages over HPLC and GC: it is versatile, separation times are short, and it is suitable for thermally unstable compounds (Anastos et al., 2005; Currell, 2008). CE can also separate structurally similar compounds, i.e. chiral molecules. Compared with HPLC and gas chromatography, capillary electrophoresis has distinct practical advantages, including automation, minimal sample preparation, low-cost capillary columns, and very small consumption of organic solvents and chemicals (Thormann et al., 1996).
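For orientation, the apparent electrophoretic mobility that underlies a CE separation can be estimated as μapp = (Ld·Lt)/(V·t); the capillary dimensions, voltage and migration time below are hypothetical illustration values, not taken from any cited study.

```python
# Apparent electrophoretic mobility in CE:
#   mu_app = (Ld * Lt) / (V * t)
# with Ld the capillary length to the detector, Lt the total length,
# V the applied voltage and t the migration time. Example values only.

def apparent_mobility(ld_cm: float, lt_cm: float, volts: float, t_s: float) -> float:
    return (ld_cm * lt_cm) / (volts * t_s)      # cm^2 V^-1 s^-1

mu = apparent_mobility(ld_cm=50.0, lt_cm=60.0, volts=25_000.0, t_s=180.0)
print(f"mu_app = {mu:.2e} cm^2/(V*s)")          # ~6.7e-4 for this example
```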
CE has become a complementary and alternative stability-indicating method. It is feasible for separating drugs and impurities with similar structures and chemical properties in pharmaceutical formulations (Fakhari et al., 2008; Altria and Rogan, 1994; Alnajjar et al., 2007). Samples require either no or minimal pre-treatment before analysis (Cianciulli and Wätzig, 2012; El Deeb et al., 2013; Anastos et al., 2005). The CE system is also applicable to water-insoluble, charged and neutral drug substances (Altria, 2013; Anastos et al., 2005), and is therefore suitable for a wide range of pharmaceutical analyses, including stability-indicating studies, determination of drug impurities, main-component assays, chiral separations and detection of drug residues (Altria, 2013). For instance, the stability of metformin hydrochloride in tablets was evaluated using CE, and the method developed showed good linearity, accuracy, precision, sensitivity, selectivity and robustness; metformin hydrochloride was successfully resolved from its major degradation products (Hamdan et al., 2010).
CE can resolve and differentiate enantiomers and structurally similar compounds of different polarity and solubility (Altria, 2013). The highly sensitive, selective and accurate nature of the CE system is illustrated by the stability-indicating assay of amlodipine under various stresses (Mohamed et al., 2016). Degradation was observed under acid and alkaline hydrolysis, oxidation and photolysis, and the R-(+)- and S-(−)-amlodipine enantiomers were detected as impurities upon degradation (Fakhari et al., 2008); the excipients in the tablet and the enantiomers were perfectly resolved and appeared as sharp peaks in the electropherogram. CE has also been used as a stability-indicating assay for tramadol (TR). Mohammadi et al. (2011) developed a chiral stability-indicating CE assay to evaluate the stability of the TR enantiomers, with maltodextrin added to the buffer as a chiral selector to assist the separation (Tabani et al., 2015). The studies showed that (+)-TR, (−)-TR and the degradation impurities were all detected as individual peaks in the electropherogram.
The main limitation of CE as a stability-indicating method is the separation of analytes of widely differing polarity and water solubility (Toraño et al., 2019). This is seen in the stability study of quetiapine, an antipsychotic drug used to treat schizophrenia (Hillaert et al., 2004). A series of impurities occur in this drug, produced during synthesis, acid hydrolysis and oxidative degradation, namely desethanol quetiapine, N-formyl-quetiapine, quetiapine carboxylate, N-ethylpiperazinyl thiazepine, ethylquetiapine, bis(dithiazepine) (dimer) and the N- and S-oxides (Borst et al., 2013). Because of the variation in the water solubility of these impurities, CE was not suitable for this analysis.
Supercritical fluid chromatography (SFC)
Supercritical fluid chromatography (SFC) functions similarly to GC and HPLC and merges the advantages of both (Hofstetter et al., 2019; Hage, 2018), but it uses a supercritical fluid such as carbon dioxide (CO2) as the mobile phase. SFC can be connected to a wide range of detectors, such as the flame ionization detector (FID), flame photometric detector (FPD), electron capture detector (ECD), mass spectrometer (MS), Fourier transform infrared spectrometer, fluorescence emission spectrometer, and thermionic detectors (Lafont et al., 2012; Pavan and Raja, 2020; Thiébaut, 2018; Jumhawan and Bamba, 2017). FID and MS are the detectors most commonly used with SFC (Pavan and Raja, 2020). The method is sustainable and cost-effective, and is considered a green technology because it uses less organic solvent, produces less system waste and is eco-friendly (Ganipisetty et al., 2013).
The distinct physical properties of supercritical fluids give SFC several advantages over conventional HPLC. Analysis is more rapid, because the gas-like mobile phase has lower viscosity and higher diffusion coefficients than HPLC mobile phases (Berger, 2007; Pinkston, 2005; Pavan and Raja, 2020). SFC is preferred for compounds with high solubility in organic solvents (Wang et al., 2011). The method is reliable and rapid, separates efficiently, and can resolve analytes of different polarities. It permits higher flow rates, and the system can use shorter or longer columns than conventional HPLC (Pinkston, 2005; Pavan and Raja, 2020). Solvent evaporation and product isolation with an SFC system are also rapid (Montañés and Tallon, 2018). SFC has better resolving power than HPLC because of the high diffusivity of the mobile phase, which can lead to better separation of the chemical species in a shorter run time (Pavan and Raja, 2020). SFC also has advantages over GC: it can analyse chemical species that are thermally unstable or of high molecular weight, without the need for derivatisation to convert polar groups into non-polar ones (Pavan and Raja, 2020). These advantages make SFC the better choice of chromatographic method for pharmaceutical substances with such properties.
SFC shows excellent performance, is cost-effective and requires minimal solvent, and has been used to profile the impurities formed during API degradation (Alexander et al., 2013; Ganipisetty et al., 2013; Majewski et al., 2005). Ganipisetty et al. (2013) effectively separated clofarabine and its impurities within 6 min by SFC; the validated assay was rapid, accurate, precise, specific and robust, showed good linearity, and provided orthogonal selectivity complementary to RP-HPLC. This agrees with Wang et al. (2011), who quantified mometasone furoate and its impurities using SFC and obtained results comparable to RP-HPLC; the authors reported that the SFC method was suitable for stability testing of mometasone furoate because of its good linearity, high accuracy and precision. Alexander et al. (2013) critically evaluated the advantages and disadvantages of HPLC and SFC for impurity profiling of lamivudine, festinavir and efavirenz in pharmaceutical products; both analytical methods have their pros and cons.
Despite its advantages over HPLC and GC, SFC has one major limitation: it cannot analyse extremely polar samples, because the mobile phase is non-polar (Silva and Collins, 2014). CO2 lacks polarity, so it can be quite challenging to elute polar compounds from the stationary phase (Pavan and Raja, 2020). To overcome this, a small amount of a polar modifier, usually methanol or ethanol, is added to increase the polarity. If too much modifier is added, higher temperatures and pressures are required to maintain supercritical behaviour, which poses a health risk to the operator. A comparison of the parameters of HPLC, HPTLC, GC, CE and SFC is summarised in Table 2.
Combination of hyphenated chromatographic and spectroscopic techniques
Chromatographic methods separate the chemical components in a mixture, while spectroscopy provides selective information for identification of unknowns using standards or library spectra (Patel et al., 2010). The combination of separation and spectroscopic detection techniques can deliver both quantitative and qualitative analysis of known drug compounds and unknown impurities in pharmaceutical matrices (Cortese et al., 2020). The characterisation of unknown impurities therefore requires sensitive, selective and sophisticated spectroscopic methods that can provide comprehensive structural information. These hyphenated techniques offer excellent separation efficiency, on-line complementary spectroscopic libraries and structure-related information on the impurities within reaction mixtures (Patel et al., 2010). Hyphenated techniques range from combinations of separation-separation and separation-identification to identification-identification techniques (Phale and Korgaonkar, 2009). Hyphenated methods are applied to the characterisation of impurities in forced degradation studies, particularly when the impurities cannot be isolated in pure form. Below are the methods commonly selected for impurity identification in drug stability-indicating assays.

HPLC-diode array detector (HPLC-DAD)

HPLC coupled with a diode array detector (HPLC-DAD) is widely used in impurity identification (Ahmed et al., 2016). DAD acquires the spectra of peaks across a range of wavelengths simultaneously, allowing assessment of spectral peak purity and identification of unknown compounds (Figure 2). Studies report that HPLC-DAD is simple, specific, reliable and suitable for routine analysis, quality control and stability-indicating assays of pharmaceutical preparations (Baker et al., 2017; Shaalan et al., 2017; Sharma and Pancholi, 2010; Verbeken et al., 2011). HPLC-DAD provides diagnostic information about drug substances and degradation impurities: a shift in the absorption maximum (λmax) can offer valuable information about the structural changes that take place during degradation. The degradation impurities of the anti-malarial drug lumefantrine were identified using HPLC-DAD, where a desbenzylketo derivative was identified as the degradant (Verbeken et al., 2011). The DAD-UV spectra showed that the λmax of the desbenzylketo impurity and of lumefantrine were 266 nm and 234 nm, respectively; the shift in λmax reflected the replacement of the benzyl group of lumefantrine by a keto function (Verbeken et al., 2011). In addition, Sharma and Pancholi (2010) successfully identified the degradation impurities of olmesartan medoxomil using HPLC-DAD. The UV spectrum of olmesartan medoxomil was compared with those of the impurities, and it was noticed that the λmax at 260 nm of the ester moiety of the drug substance was not visible in the spectra of the impurities. The authors therefore deduced that the ester moiety, the 5-methyl-2-oxo-1,3-dioxolen-4-yl-methyl group of olmesartan medoxomil, had been de-esterified during the degradation process (Sharma and Pancholi, 2010).
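The λmax-shift diagnostic described above amounts to locating the absorption maximum in the parent and degradant DAD spectra and comparing them; the sketch below does exactly that on tiny synthetic spectra (the 234 nm and 266 nm maxima echo the lumefantrine example, but the spectra themselves are invented):

```python
# Lambda-max extraction and shift comparison on synthetic DAD spectra.
# The spectra are toy arrays, not data from Verbeken et al. (2011).

def lambda_max(wavelengths_nm, absorbances):
    """Wavelength at which the absorbance peaks."""
    return max(zip(wavelengths_nm, absorbances), key=lambda p: p[1])[0]

wl = list(range(220, 290, 2))
parent = [1.0 if w == 234 else 0.2 for w in wl]      # parent drug, lmax 234 nm
degradant = [1.0 if w == 266 else 0.2 for w in wl]   # degradant, lmax 266 nm

shift = lambda_max(wl, degradant) - lambda_max(wl, parent)
print(f"lambda-max shift: {shift} nm")
# A bathochromic shift like this hints at a structural change such as the
# benzyl-to-keto replacement discussed in the text.
```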
Besides peak identification, DAD can also verify peak purity, detecting co-elution of other compounds such as adjuvants and excipients. This capability was used in stability-indicating assays for amlodipine besylate, valsartan and hydrochlorothiazide in antihypertensive mixtures (Shaalan et al., 2017) and for a dihydrochloride hepatitis C antiviral agent (Baker et al., 2017). Both studies reported that DAD could identify and verify the drug and impurity peaks. The purity angle of a peak indicates spectral homogeneity: purity angles within the purity threshold limits confirm that the peaks are homogeneous and pure in the forced degradation samples (Baker et al., 2017).
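Commercial data systems compute a proprietary purity angle, but the underlying idea can be sketched as a spectral-angle (cosine-similarity) comparison between DAD spectra taken across a peak; the spectra below are toy vectors, and this simplified check is a stand-in for, not a reproduction of, any vendor's algorithm.

```python
# Simplified peak-purity check: compare DAD spectra at the up-slope and
# down-slope of a peak with the apex spectrum via the spectral angle.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

apex       = [0.10, 0.55, 1.00, 0.40, 0.05]   # spectrum at the peak apex
up_slope   = [0.11, 0.54, 0.99, 0.41, 0.05]   # nearly identical -> pure
down_slope = [0.30, 0.50, 0.80, 0.60, 0.20]   # distorted -> possible co-elution

for name, spec in [("up-slope", up_slope), ("down-slope", down_slope)]:
    angle = math.degrees(math.acos(min(1.0, cosine_similarity(apex, spec))))
    print(f"{name}: spectral angle {angle:.2f} deg")
# Angles near zero indicate a homogeneous (pure) peak; a large angle on one
# side of the peak suggests a co-eluting impurity or excipient.
```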
HPLC-fluorescence detector (HPLC-FL)
The HPLC-fluorescence detector (HPLC-FL) combination is a highly sensitive and specific method for detecting fluorescent analytes, in which the light emitted by the analyte is detected and measured by the FL detector. It is particularly useful for analytes with natural fluorescence. When light energy is absorbed by the analyte, some of its electrons are raised to an excited state; when the electrons return to the ground state, fluorescent light is emitted. The FL detector is coupled to the HPLC for detection. This method has been used in the analysis of pharmaceuticals and clinical samples, especially samples with high levels of impurities.
HPLC-FL has been reported to show high sensitivity, selectivity and repeatability. Kamal et al. (2019) compared an HPLC-FL method with ultra-high performance liquid chromatography (UPLC) coupled to a DAD for the analysis of daclatasvir bulk drug and drug products; the FL-based stability-indicating method was equally simple, accurate and reproducible, and the FL detector enhanced detection sensitivity. A stability-indicating method for cyproheptadine hydrochloride, a sedating antihistamine, has also been developed and compared with the United States Pharmacopeia (USP) method (Sharaf El-Din et al., 2018); the HPLC-FL method was comparable to the USP method in reliability, sensitivity, accuracy, precision, specificity and robustness. HPLC-FL is also useful for the simultaneous determination of more than one drug in plasma samples. Sacubitril and valsartan were determined simultaneously in rat plasma using HPLC-FL, with good linearity and correlation coefficients; the percentage recovery, relative standard deviation and relative error were all within acceptable ranges (Attimarad et al., 2018). This method was proposed as suitable for pharmacokinetic studies of clinical samples, because the sample preparation is simple and the analysis time short.
However, FL detectors are less commonly available than DAD detectors, and derivatisation with a fluorescent label is needed if the analytes do not fluoresce naturally.
Gas chromatography-mass spectrometry (GC-MS)
Gas chromatography-mass spectrometry (GC-MS) is a direct, fast and reliable method for the separation, quantification and identification of drugs and impurities in forced degradation studies. GC-MS uses energetic electrons to ionise and fragment the analyte molecules before mass spectrometric analysis and detection; the molecular fingerprint, or fragmentation pattern, of the analyte is then compared with a spectral library for compound identification. The method is applicable only to analytes that can be resolved by GC.
In GC-MS, the GC column is connected via a transfer device to a mass spectrometer. Samples are first separated on the GC column, where the analytes are volatilised, and then pass through the MS ion source, where impact by the ionising electrons forms cation radicals that subsequently fragment into molecular ions. GC-MS is popular for drug and impurity analysis because it has comprehensive mass spectral libraries and its mass spectra are reproducible even across different instruments (Lynch, 2017). Furthermore, sample pre-treatment or derivatisation may not be required prior to analysis (Belal et al., 2009).
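Library matching of a fragmentation pattern is, at its simplest, a normalised dot product over shared m/z bins; the sketch below scores a toy unknown spectrum against two invented library entries (the entry names merely echo the impurities discussed in the next paragraph and are not real library records):

```python
# Toy GC-MS library matching by normalised dot product over m/z bins.
import math

def match_score(unknown: dict, reference: dict) -> float:
    mz = set(unknown) | set(reference)          # union of observed m/z bins
    u = [unknown.get(m, 0.0) for m in mz]
    r = [reference.get(m, 0.0) for m in mz]
    dot = sum(a * b for a, b in zip(u, r))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in r)))

unknown = {198: 100.0, 169: 45.0, 125: 20.0}    # invented fragment pattern
library = {
    "benzaldehyde-like entry": {196: 100.0, 181: 60.0, 125: 15.0},
    "benzyl-alcohol-like entry": {198: 100.0, 169: 50.0, 125: 18.0},
}
best = max(library, key=lambda name: match_score(unknown, library[name]))
print(best, f"score {match_score(unknown, library[best]):.3f}")
```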
GC-MS has been used to resolve and identify the impurities of trimetazidine dihydrochloride (Belal et al., 2014). The analysis was simple, with no sample pre-treatment or derivatisation. The identities of the degradation impurities were revealed and confirmed from the MS fragmentation patterns: the molecular fingerprints confirmed 2,3,4-trimethoxybenzyl alcohol and 2,3,4-trimethoxybenzaldehyde as the impurities.
GC-MS has also been used to identify products of isomerisation (Aljerf and AlHamwi, 2018). It served as the stability-indicating assay to evaluate the stability of rosmarinic acid under various stresses, namely light, heat, solvent and relative humidity (Razboršek, 2011). Razboršek (2011) reported a reduction of the trans-isomer peak in the GC chromatogram, indicating isomerisation of trans-rosmarinic acid during degradation: the trans-isomer slowly isomerised to the cis-form, with the cis-rosmarinic acid peak increasing over time in the chromatogram. The MS fragmentation pattern of the degradant was almost identical to that of trans-rosmarinic acid, so the MS spectrum of the degradant was compared with the standard MS library and the literature to confirm its identification as cis-rosmarinic acid. Razboršek (2011) described the method as fast, specific, selective, accurate and precise, with satisfactory analytical performance (LoD, LoQ, linearity, robustness).
Liquid chromatography-mass spectrometry (LC-MS)
HPLC is widely used as a stability-indicating method in forced degradation studies. However, the results of HPLC analysis alone may not always be sufficient to elucidate and confirm the identities of known and unknown degradation impurities (Marin and Barbas, 2004). An HPLC method used in process analysis, impurity profiling or stability studies is transferable to liquid chromatography-mass spectrometry (LC-MS), which is applied for structural identification and confirmation. This technique is widely used for characterising degradation products and drug impurities (Ramesh et al., 2014). LC-MS is a versatile tool that can separate the analytes and provide information on their molecular weights and fragmentation patterns, from which reasonable chemical structures can be proposed (Qiu and Norwood, 2007).
LC separation uses buffers and additives in the mobile phases, and the pH of the mobile phase can be controlled to ensure ionisation of the analyte. However, only LC-MS-compatible modifiers, such as formic acid, acetic acid, ammonium formate, ammonium acetate, ammonium bicarbonate and ammonium hydroxide, and volatile ion-pair reagents such as trifluoroacetic acid and hexafluorobutyric acid, can be used (Qiu and Norwood, 2007; Garcia, 2005). Non-volatile buffers and mobile-phase additives, such as phosphate, sulfate, borate, citrate and octane sulfonate, deposit salts on the ion source, causing capillary obstruction, ionisation suppression, reduced sensitivity and accuracy, and a shortened operating lifetime (Qiu and Norwood, 2007; Garcia, 2005).
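The compatibility rules above lend themselves to a simple pre-flight check; the sketch below encodes the volatile and non-volatile additive lists quoted in the text and rejects a mobile phase containing a non-volatile buffer (the function and its behaviour are illustrative, not part of any instrument software):

```python
# Toy pre-flight check for LC-MS mobile-phase additives, mirroring the
# compatible and non-volatile lists given in the text.

MS_COMPATIBLE = {"formic acid", "acetic acid", "ammonium formate",
                 "ammonium acetate", "ammonium bicarbonate",
                 "ammonium hydroxide", "trifluoroacetic acid",
                 "hexafluorobutyric acid"}
NON_VOLATILE = {"phosphate", "sulfate", "borate", "citrate",
                "octane sulfonate"}

def check_mobile_phase(additives):
    for a in additives:
        if a in NON_VOLATILE:
            raise ValueError(f"{a} is non-volatile: it will deposit on the "
                             "ion source and suppress ionisation")
        if a not in MS_COMPATIBLE:
            print(f"warning: {a} not in the known-compatible list")

check_mobile_phase(["ammonium formate", "formic acid"])   # passes silently
try:
    check_mobile_phase(["phosphate"])                      # rejected
except ValueError as err:
    print("rejected:", err)
```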
LC-MS has been applied as a stability-indicating assay in numerous studies (Tolić et al., 2018). Some studies use HPLC for method development followed by LC-MS for compound identification. HPLC and LC-MS are chosen for forced degradation impurity profiling because of their high precision, accuracy, specificity, selectivity, resolution and capacity (Marin and Barbas, 2004; Ramesh et al., 2014; Ramisetti and Kuntamukkala, 2014; Bhardwaj and Singh, 2008; Siddiqui et al., 2014, 2018; Wabaidur et al., 2013, 2015; Hakami et al., 2020). High source temperatures and gas flows have been reported to increase sensitivity through enhanced ion evaporation (Wabaidur et al., 2016). For instance, Marin and Barbas (2004) analysed acetaminophen, phenylephrine or phenylpropanolamine hydrochloride, chlorpheniramine maleate and their degradation impurities in cough-cold products with a validated HPLC method, followed by impurity profiling using LC-MS. Similarly, Bhardwaj and Singh (2008) analysed the stability of enalapril maleate using HPLC, followed by impurity characterisation by LC-MS. The application of LC-MS has resolved the structures of degradation impurities in numerous forced degradation studies (Marin and Barbas, 2004; Ramesh et al., 2014; Ramisetti and Kuntamukkala, 2014; Bhardwaj and Singh, 2008).
Liquid chromatography-nuclear magnetic resonance (LC-NMR)
LC-NMR is another hyphenated technique that can be used to separate and characterise degradation impurities in forced degradation studies. LC-NMR offers various modes of operation (Figure 3), namely on-flow measurements, LC-NMR under static (stop-flow) conditions, LC-NMR/MS and LC-solid-phase-extraction-NMR (Exarchou et al., 2005). As with LC-MS, HPLC is normally used for method development and separation, followed by LC-NMR for structural characterisation. However, LC-NMR analysis is more expensive than LC-MS because deuterated solvents are required (Elipe, 2011); the technique is therefore preferred when the impurities cannot be isolated individually for structural characterisation.
Conclusion
Forced degradation studies provide information about the possible degradation mechanisms of pharmaceutical APIs and the impurities formed, and help to elucidate the structures of the degradants. A stability-indicating assay is mandatory in all forced degradation studies, yet no single assay fits all drug stability studies perfectly: the selection and suitability of the technique depend on the chemical properties of the drug and its impurities. Stability-indicating assays should be validated for linearity, accuracy, sensitivity, precision, robustness, LoD and LoQ, as per ICH guidelines. A good stability-indicating assay must be able to track the stability of drug substances and products over time and accurately measure changes in API concentration without interference from other substances, including degradants, pharmaceutical impurities and excipients.
Author contribution statement
All authors listed have significantly contributed to the development and the writing of this article.
Funding statement
This work was supported by UCSI University Pioneer Scientist Incentive Fund (PSIF) research grant (Grant no.: Proj-In-FPS-012).
Data availability statement
Data included in article/supplementary material/referenced in article.
"year": 2021,
"sha1": "b0dd61e6e2aff864289dab83e447a0713b91b4ad",
"oa_license": "CCBY",
"oa_url": "http://www.cell.com/article/S2405844021006563/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b0dd61e6e2aff864289dab83e447a0713b91b4ad",
"s2fieldsofstudy": [
"Chemistry",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Inhibitory effects of 2,6-di-tert-butyl-4-hydroxymethylphenol on asthmatic responses to ovalbumin challenge in conscious guinea pigs
This study evaluated the anti-asthmatic activities of 2,6-di-tert-butyl-4-hydroxymethylphenol (DBHP), a potent phenolic antioxidant found in edible vegetable oil. The effects of DBHP on bronchial asthma were evaluated by determining the specific airway resistance (sRaw) and tidal volume (TV) during the immediate asthmatic response (IAR) and the late-phase asthmatic response (LAR) in guinea pigs with aerosolized ovalbumin-induced asthma. Recruitment of leukocytes and the levels of biochemical inflammatory mediators were determined in bronchoalveolar lavage fluids (BALFs), and histopathological surveys were performed on lung tissues. DBHP significantly inhibited the increase in sRaw and improved the decrease in TV during IAR and LAR; it also inhibited the recruitment of eosinophils and neutrophils into the lung and the release of biochemical inflammatory mediators such as histamine and phospholipase A2 from these infiltrated leukocytes, and improved the pathological changes. However, the anti-asthmatic activities of DBHP at oral doses of 12.5 to 50 mg/kg were weaker than those of dexamethasone (5 mg/kg, p.o.) and cromoglycate (10 mg/kg, p.o.), but more potent than or similar to that of salbutamol (5 mg/kg, p.o.). These results suggest that the anti-asthmatic effects of DBHP in the guinea pig model of OVA-induced asthmatic responses are principally mediated by inhibition of leukocyte recruitment and of the release of biochemical inflammatory mediators from these infiltrated leukocytes.
Cromoglycate is structurally related to the flavonoids and inhibits the release of chemical mediators; it has been used as an anti-asthmatic drug, especially for the prophylactic treatment of allergic asthma [8]. Prophylactic treatment with antiallergic drugs such as cromoglycate and corticosteroids inhibits both IAR and LAR, whereas β2-selective adrenoceptor agonists such as salbutamol inhibit IAR but not LAR [9,10].
In approximately 60% of all asthmatic subjects, the IAR is followed by a LAR, which is accompanied by an increase in bronchial responsiveness to nonspecific stimuli and is thought to lead to serious, chronic asthma. Agents that can inhibit the LAR may therefore be of therapeutic benefit in the treatment of bronchial asthma [3]. A number of animal models have been developed in which LAR occurs in the airways after antigen exposure, and many studies of LAR suggest contributions from many kinds of mediators, cytokines and inflammatory cells, especially eosinophils, to the development of LAR in humans [3]. However, the pharmacological characteristics of LAR in these animal models are not satisfactorily documented, and the mechanisms underlying LAR remain unclear. Moreover, despite the enormous efforts that have been put into developing anti-asthmatic agents, there has been limited success to date in developing therapeutics for long-term treatment of bronchial asthma without significant side effects. We previously reported that the plant-based flavonoids quercetin and rutin significantly inhibited asthmatic responses, as shown by inhibition of chemical mediator release into bronchoalveolar lavage fluids (BALFs) [11,12]. Furthermore, a methanolic extract of Aralia cordata Thunb. (Araliaceae), used as a traditional herbal medicine for disorders such as inflammation, fever and pain, was found to have significant anti-asthmatic activity in guinea pigs with IgE-mediated asthma [13].
In the present study, we describe 1) a guinea pig model of bronchial asthma, with the associated leukocyte infiltration and release of biochemical mediators in the airways as measured in BALF, together with histopathology of the asthmatic lung tissue, and 2) the effect of 2,6-di-tert-butyl-4-hydroxymethylphenol (DBHP) (Fig. 1A), an antioxidant aromatic compound used in foods that belongs to the cumene family, on the IAR and LAR elicited by exposing conscious OVA-sensitized guinea pigs to aerosolized ovalbumin (OVA) in a double-chambered plethysmograph (Fig. 1B).

…(St. Louis, MO, USA). All other chemicals purchased were of analytical grade.
Animals
Specific pathogen-free male Dunkin-Hartley guinea pigs (250-300 g) were obtained from the HanLim animal house facility (Hwa Sung-Gun, Kyung Ki-Do, South Korea) and housed under standard laboratory conditions (temperature 24±2°C, humidity 50±5%, illumination 300-500 lux) with free access to pathogen-free food and water ad libitum. The animal studies were approved by the Institutional Animal Care and Use Committee of Chung-Ang University (IACUC-2015-00068).

Figure 1B legend: The double-chambered plethysmograph is a basic method for measuring the specific airway resistance (sRaw), together with the standard parameters tidal volume and respiratory rate, in conscious animals placed in the plethysmograph box (HSE type 855, Hugo Sachs Elektronik, Germany). The phase shift between the nasal and thoracic respiratory flows is measured with two differential pressure transducers (PT5, Grass Instrument Co., USA). The PULMODYN "PENNOCK 89" software records the signals and calculates sRaw with a respiratory analyzer (7E polygraph, Grass Instrument Co., USA), and the PLUGSYS 603 system controls the valves of the plethysmograph boxes and interfaces the boxes to the computer (diagram modified from an original kindly provided by the Hugo Sachs Elektronik company, Germany).
Sensitization and aerosolized-OVA challenge
Guinea pigs were actively sensitized by i.p. and s.c. injections of 0.5 ml of 10% (w/v) OVA in saline on the same day. Twenty-one days after sensitization, animals were selected on the basis of a positive skin response to an i.d. injection of 1% OVA (0.1 ml per site), and each sensitized animal was challenged with a 10 ml inhalation of 1% OVA generated by compressed air with an aerosol nebulizer (Module PY2-73-1963, Hugo Sachs Elektronik, Germany) connected to the nasal chamber of a double-chambered plethysmograph (HSE type 855 and PLUGSYS 603, Hugo Sachs Elektronik, Germany) for 5 min after measurement of baseline airway function. The operating air pressure was approximately 1.5 bar, and the generated particle size was below 10 μm, with 60% of particles having a diameter ≤2.5 μm [3,4]. Test drugs suspended in 5% carboxymethylcellulose solution were administered orally 1 h prior to OVA challenge, and again 12 h later.
Measurement of airway function
To determine the pulmonary and airway responses of conscious guinea pigs to the OVA challenge, tidal volume (TV), tidal airflow, respiratory rate and specific airway resistance (sRaw) were measured using a barometric double-chambered plethysmograph (HSE type 855 and PLUGSYS 603, Hugo Sachs Elektronik, Germany) connected to a volumetric differential pressure transducer (PT5, Grass Instrument Co., USA) and a noninvasive respiratory analyzer (Model 7E Polygraph, Grass Instrument Co., USA) 5 min (for IAR) and 24 h (for LAR) after the OVA challenge [3,4]. The sRaw, the main index of airway responsiveness, was calculated by the Pennock Program 89 and expressed in mmHg×sec [14].
Bronchoalveolar lavage and cellular analysis
After measurement of the pulmonary function parameters at LAR, animals were anesthetized with a mixture of zolazepam and tiletamine (20 mg/kg, i.p.), and the lungs were lavaged four times with 5 ml aliquots of Ca2+/Mg2+-free Hank's balanced salt solution (HBSS) containing EDTA (10 mM), HEPES (20 mM) and bovine serum albumin (1%). The bronchoalveolar lavage fluids (BALFs) were centrifuged (200×g for 10 min at 4°C), and the cell pellets were resuspended in 1 ml of HBSS for cellular analysis; supernatants were stored at -80°C for biochemical analysis. Total leukocytes in BALF were counted manually with a hemocytometer, and differential cell counts were performed on cytocentrifuged preparations (Cytospin II, Shandon Southern Instruments, PA, USA) after modified Wright-Giemsa staining. A minimum of 300 cells were counted and classified as eosinophils, neutrophils, macrophages or lymphocytes based on standard morphological criteria [4].
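The arithmetic behind the differential count is simple but worth making explicit; in this sketch the hemocytometer total and the 300-cell classification are invented numbers used only to show how percentages and absolute counts per ml are derived:

```python
# Differential-count arithmetic: percentages and absolute counts per ml
# of BALF from a >=300-cell classification. All numbers are invented.

total_leukocytes_per_ml = 1.2e6        # from the hemocytometer
classified = {"eosinophils": 96, "neutrophils": 45,
              "macrophages": 135, "lymphocytes": 24}   # 300 cells scored

n_scored = sum(classified.values())
for cell, n in classified.items():
    pct = 100.0 * n / n_scored
    absolute = total_leukocytes_per_ml * n / n_scored
    print(f"{cell}: {pct:.1f}% ({absolute:.2e} cells/ml)")
```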
Biochemical analysis in BALF
Histamine assay: The histamine content released into BALF was determined by the O-phthalaldehyde (OPA) spectrofluorometric method [15]. Briefly, 1 ml of BALF was transferred to a test tube, 1 ml of H2O and 0.4 ml of 1 N NaOH were added, followed 4 min later by 0.1 ml of OPA reagent and 0.2 ml of 3 N HCl. The reaction mixture was transferred to a microplate and the fluorescence measured at excitation 350 nm/emission 650 nm with a spectrofluorometer (FL600 Microplate Fluorescence Reader, Bio-Tek, USA).
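A hedged sketch of how such fluorescence readings are typically converted to concentrations via a linear standard curve follows; the standard concentrations and fluorescence values are invented, and the original study [15] may have used a different calibration procedure:

```python
# Linear standard curve for converting fluorescence to histamine (ng/ml).
# All standard values and the sample reading are invented.

def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

standards_ng_ml = [0, 100, 200, 400, 800]
fluorescence    = [5, 112, 220, 435, 860]          # arbitrary units
slope, intercept = fit_line(standards_ng_ml, fluorescence)

sample_fluorescence = 455
histamine = (sample_fluorescence - intercept) / slope
print(f"histamine ~ {histamine:.0f} ng/ml")
```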
Phospholipase A2 assay: Phospholipase A2 activity in BALF was determined with a pyrene-labeled phosphatidylcholine (10-pyrene PC) in the presence of serum albumin, using a spectrofluorometer at excitation 345 nm/emission 398 nm [16]. Spectrofluorometric analysis of pyrene phospholipids and fatty acids was carried out in the reaction mixture. The 10-pyrene PC was dried under nitrogen and suspended in ethanol at 0.2 mM. The reaction solution was prepared by the sequential addition of 1 ml of buffer containing 50 mM Tris-HCl (pH 7.5), 100 mM NaCl and 1 mM EDTA; 10 μl of substrate (2 μM final concentration); 10 μl of serum albumin (0.1% final concentration); and 6 μl of 1 M CaCl2 (6 mM final concentration). The fluorescence of the reaction medium (blank) was recorded, and the reaction was initiated by the addition of BALF. The specific activity, in nanomoles per minute per milligram of protein, was obtained by dividing the activity by the amount of protein in mg.
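The final specific-activity calculation reduces to a rate divided by protein mass; in the sketch below, the fluorescence rate, the calibration factor converting fluorescence units to nmol, and the protein amount are all assumed example values:

```python
# Specific-activity calculation: blank-corrected rate / protein mass.
# Every numeric value here is a hypothetical example.

fluorescence_rate = 14.2   # fluorescence units per minute (blank-corrected)
units_per_nmol = 2.0       # assumed calibration from a product standard
protein_mg = 1.0           # protein in the assayed BALF aliquot

activity_nmol_min = fluorescence_rate / units_per_nmol
specific_activity = activity_nmol_min / protein_mg
print(f"PLA2 specific activity: {specific_activity:.1f} nmol/min/mg protein")
```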
Protein assay: Protein exudate in BALF was quantified with the bicinchoninic acid (BCA) colorimetric method [17]. Briefly, a 50 μl aliquot of a 10-fold dilution of BALF was incubated with 1 ml of BCA reagent for 15 min at 37°C. After re-equilibration to room temperature, the reaction mixture was quantified spectrophotometrically at 595 nm. Because this is a kinetic assay, a set of standards must be run at the beginning and end of the batch, as well as at appropriate intervals between the reaction mixtures.
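Because the assay is kinetic, the bracketing standards can be averaged to correct for colour drift before interpolating samples; the sketch below illustrates this with invented absorbance values and a simple linear interpolation (the original protocol [17] is not reproduced here):

```python
# Drift correction in a kinetic BCA assay: average the standards run at
# the start and end of the batch, then interpolate a diluted sample.
# All absorbance values are invented.

standards_ug_ml = [0, 125, 250, 500]
a595_start = [0.05, 0.18, 0.31, 0.58]
a595_end   = [0.06, 0.20, 0.34, 0.62]   # slightly higher: colour kept developing

a595_mean = [(s + e) / 2 for s, e in zip(a595_start, a595_end)]

def interpolate(a595, xs, ys):
    """Linear interpolation of concentration from a monotonic curve."""
    pts = list(zip(xs, ys))
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if y0 <= a595 <= y1:
            return x0 + (x1 - x0) * (a595 - y0) / (y1 - y0)
    raise ValueError("reading outside the standard range")

sample_a595, dilution = 0.25, 10
conc = interpolate(sample_a595, standards_ug_ml, a595_mean) * dilution
print(f"protein ~ {conc:.0f} ug/ml")
```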
Histopathological analysis of lung
For histologic analysis, lung tissues were fixed by infiltrating the lung with phosphate-buffered formalin saline, and dehydrated with a graded aqueous ethanol series and embedded in paraffin. The embedded lung tissues were sectioned (4 μm), and then stained with hematoxylin and eosin (H&E) to visualize inflammatory responses and pathological changes in the lung tissue.
Statistical analysis
All values are presented as mean±standard error of the mean (n=6). Statistical analysis was performed by one-way ANOVA followed by Student's t-test. p-values <0.05 were considered statistically significant.
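A minimal sketch of this analysis pipeline in Python, assuming SciPy is available, is shown below; the group measurements are placeholder numbers, not data from the study:

```python
# One-way ANOVA followed by a pairwise Student t-test, as stated above.
# The measurements are placeholder values for illustration only.
from scipy import stats

vehicle = [120, 131, 118, 125, 129, 127]
ova     = [410, 432, 425, 440, 418, 427]
dbhp_25 = [350, 341, 339, 355, 346, 344]

f_stat, p_anova = stats.f_oneway(vehicle, ova, dbhp_25)
print(f"ANOVA: F = {f_stat:.1f}, p = {p_anova:.2e}")

if p_anova < 0.05:                       # proceed to pairwise comparisons
    t, p = stats.ttest_ind(ova, dbhp_25)
    print(f"OVA vs DBHP 25 mg/kg: t = {t:.2f}, p = {p:.4f}")
```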
RESULTS

Effect of DBHP on the release of biochemical inflammatory mediators in BALF
Antigen-challenged guinea pigs showed significant increases in the released levels of the biochemical inflammatory mediators histamine and PLA2, and of protein, during LAR. Histamine contents in the BALF of vehicle- and OVA-challenged guinea pigs were 125±7 and 425±13 ng/ml during LAR, respectively, i.e. the OVA-challenged value was 340% of the vehicle control (Fig. 5A). Protein content, taken as a measure of exudation into BALF caused by the airway inflammatory reaction, was 111±39 μg/ml in the vehicle control and 432±23 μg/ml in the OVA control during LAR, i.e. 389% of the vehicle control (Fig. 5B). In addition, PLA2 activity in the BALF of the vehicle control was 2.9±0.1 nmol/min/mg, rising to 7.1±0.2 nmol/min/mg (approximately 250% of control) in the OVA control during LAR (Fig. 5C); that is, PLA2 activity, which is related to the generation of eicosanoid inflammatory mediators, increased during the asthmatic response induced by aerosolized antigen in OVA-sensitized guinea pigs. DBHP inhibited the release of the biochemical inflammatory mediators and the protein exudation. At an oral dose of 25 mg/kg, DBHP significantly inhibited histamine release into BALF (425±13 to 346±15 ng/ml, p<0.05) and significantly decreased protein exudation (432±23 to 307±28 μg/ml, p<0.05) and PLA2 activity (7.1±0.2 to 6.2±0.2 nmol/min/mg, p<0.05). However, its activities were less than those of dexamethasone (5 mg/kg, p.o.) and cromoglycate (10 mg/kg, p.o.).
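As a quick check on the percent-of-control figures quoted above, the reported group means reproduce the stated percentages directly:

```python
# Arithmetic check of the percent-of-control values quoted in the text.

pairs = {
    "histamine (ng/ml)":  (125, 425),   # vehicle mean, OVA mean
    "protein (ug/ml)":    (111, 432),
    "PLA2 (nmol/min/mg)": (2.9, 7.1),
}
for name, (vehicle, ova) in pairs.items():
    print(f"{name}: OVA = {100 * ova / vehicle:.0f}% of vehicle control")
# -> 340%, 389% and 245% (reported as approximately 250%)
```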
Effect of DBHP on the histopathological changes in the asthmatic lung tissue
The lungs of antigen-challenged guinea pigs showed eosinophil and neutrophil recruitment in the alveolar sacs, peripheral vasculature and terminal bronchioles compared with the vehicle control (Figs. 6A and B), consistent with previous reports [11,18]. The lungs of vehicle controls at LAR showed normal structural architecture with no inflammatory cell infiltration around the terminal bronchioles (Fig. 6A). Oral administration of DBHP at 50 mg/kg reduced the recruitment of leukocytes, particularly eosinophils and neutrophils, into the alveolar sacs and peripheral vasculature at LAR (Fig. 6C), and dexamethasone (5 mg/kg, p.o.) also significantly improved the pathological changes, with only mild leukocyte infiltration around the bronchioles at LAR, compared with the OVA-challenged control (Fig. 6D).
DISCUSSION
We have developed a guinea pig model of IAR and LAR based on the inhalation of the antigen OVA by OVA-sensitized guinea pigs. The guinea pig is generally chosen as the experimental animal because of the histological similarities between the lungs of antigen-exposed guinea pigs and asthmatic human lungs: compared with other species such as the mouse, rat, primate or rabbit, this species exhibits early and late-phase airway obstruction, bronchial eosinophilia and increased airway reactivity following antigen exposure of sensitized animals, with the putative involvement of chemical mediators, cytokines and inflammatory cells in the development of LAR [4,19]. Furthermore, the double-chambered plethysmograph system for restrained guinea pigs used in this study was specially developed for investigating bronchospasmolytically active substances in conscious animals, and it has proven to be a reliable, standard, noninvasive method for studying pulmonary function. Despite the higher sensitivity and specificity of invasive lung function tests in anesthetized animals, the noninvasive technique is highly useful for assessing effects on breathing pattern, detecting pulmonary irritation and airflow limitation, and testing the adverse effects of chemicals and drugs. Moreover, the technique is simple to handle, and the breathing pattern is nearly natural since no anesthesia is required [20,21].
In animals sensitized with 10 μg of OVA, aerosolized OVA inhalation increased sRaw 3-fold during IAR and 2-fold during LAR, compared with sRaw before the OVA challenge. Furthermore, the well-established BAL technique used in the present study, and the subsequent histopathological study, showed that recruitment of leukocytes and of eosinophils into the lung increased 5-fold and 16-fold, respectively, similar to a segmental challenge in human lung [22]. Of particular interest, bronchoconstriction can be associated with leukocyte infiltration, leading to the release of biochemical mediators such as histamine, leukotriene B4 (LTB4) and PLA2 [23]. In addition, eosinophils, neutrophils, lymphocytes and macrophages are important cellular mediators of allergic inflammation through their production of inflammatory cytokines [24].
In the present study, oral administration of DBHP significantly inhibited the increase in sRaw induced by the antigen OVA challenge during IAR and LAR and improved the histopathological impairments, although with less effect than the reference anti-asthmatic drugs dexamethasone, cromoglycate and salbutamol. Furthermore, DBHP significantly decreased the levels of cellular and biochemical mediators induced by OVA challenge in BALF, as reported for other phenolic compounds [18,25]. These results suggest that DBHP exerts significant anti-asthmatic activity during IAR and LAR in this in vivo model, probably through the reduction of histamine content in BALF: DBHP may act as a mast cell stabilizer and bronchodilator through downregulation of mast cell activation. Moreover, DBHP significantly decreased the OVA-induced PLA2 activity in BALF, which generates chemokines in eosinophils. In particular, LTB4 can be derived from arachidonic acid through the PLA2 pathway at the cell membrane, and LTB4 mediates constriction of airway smooth muscle, leukocyte chemotaxis and vascular permeability [26,27]. In agreement with this rationale, we suggest that the increase in sRaw after antigen challenge is caused by the release of biochemical inflammatory mediators such as histamine and PLA2 metabolites such as the leukotrienes, resulting in bronchial contraction, and that DBHP exerts its anti-asthmatic effect on antigen-induced bronchoconstriction, which is mediated primarily by histamine and leukotrienes [28][29][30]. Indeed, DBHP significantly inhibited the recruitment of total leukocytes and leukocyte subtypes, particularly eosinophils, into BALF, consistent with the histopathological survey. As is well known, eosinophil accumulation in inflamed tissue damages the asthmatic lung through released eosinophil-derived cationic proteins, and eosinophilic inflammation is a hallmark of bronchial asthma [31,32]. Our findings indicate that DBHP may inhibit eosinophilic allergic inflammation in human asthma [33,34].
Of additional biological significance, other reports indicate that DBHP, as a lipid-soluble phenolic antioxidant in edible vegetable oil, plays a complex role in inhibiting lipid oxidation at the oil-water interface and scavenging free radicals. In general, the free radical scavenging activity of phenolic antioxidants improves with the number of hydroxyl and methyl groups [35]. It was also reported that DBHP completely inhibited recombinant TNFα-induced cytotoxicity in L929 cells, whereas another butylated hydroxytoluene compound had minimal effect; the only structural difference is that DBHP carries a hydroxymethyl substituent instead of a methyl group on the phenolic ring [36]. It is also well known that the lungs are exposed to higher levels of oxygen than most other tissues, and toxic free radicals in human lungs have been implicated as important pathological factors in pulmonary disorders. An imbalance between the reducing and oxidizing systems is present in asthma, and this oxidative stress can trigger chronic inflammatory disorders, including pulmonary diseases, which can be counteracted by antioxidants [37,38]. We previously reported that flavones containing more hydroxyl groups had greater anti-asthmatic effects in this asthma model [18], in agreement with other studies on the anti-asthmatic activities of natural antioxidant and anti-inflammatory products [39].
Collectively, the present study shows that DBHP exerts a significant anti-asthmatic effect in the guinea pig model of OVA-induced asthmatic responses, and that its inhibitory effect is principally mediated by inhibition of leukocyte recruitment and of the release of biochemical inflammatory mediators from these infiltrated leukocytes, which otherwise damage the asthmatic lung. Moreover, the anti-asthmatic effect of DBHP may be largely attributable to its potent antioxidant activity arising from the hydroxyl and methyl substituents on the phenolic ring. In addition, our study may provide novel insight into the pathological mechanisms underlying the inhibition of bronchial asthma via suppression of eosinophilic inflammatory mediators.
"year": 2017,
"sha1": "a7f19d4e47651feb9979b631c1ba1a889410a915",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.4196/kjpp.2018.22.1.81",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a7f19d4e47651feb9979b631c1ba1a889410a915",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
Intermediate uveitis complicated by choroidal granuloma following subretinal neovascular membrane: case reports
Study conducted at the Universidade Federal do Rio Grande do Norte (UFRN), Natal (RN), Brazil. 1 PhD, Associate Professor, Ophthalmology Department, Universidade Federal do Rio Grande do Norte, Natal (RN), Brazil. 2 Retina Fellow, Ophthalmology Department, Universidade Federal do Rio Grande do Norte, Natal (RN), Brazil. 3 Resident, Ophthalmology Department, Universidade Federal de São Paulo (UNIFESP), São Paulo (SP), Brazil. 4 Medical Student, Universidade Federal do Rio Grande do Norte, Natal (RN), Brazil.
Choroidal neovascularization is a very rare complication of intermediate uveitis. A 27-year-old female patient had been diagnosed with intermediate uveitis two years earlier. She presented with 20/200 visual acuity, snowballs, snowbanks and cystoid macular edema in the right eye, observed by fluorescein angiography and optical coherence tomography (OCT). Photocoagulation was performed in the inferior peripheral retina of both eyes. The patient refused to undergo the prescribed clinical treatment. She returned twelve months later with counting-fingers visual acuity, a dry retina and a pigmented subretinal macular granuloma observed on OCT. A 15-year-old female patient presented with visual acuity decreased to 20/400 in the right eye for eight days. She had bilateral vasculitis and papillitis; in the right eye, hemorrhage and an extramacular subretinal neovascular membrane were observed on fluorescein angiography and OCT. She was treated with 40 mg prednisone and an intravitreal injection of 1.25 mg bevacizumab. Five months later she presented with 20/50 visual acuity and an extramacular granuloma observed on OCT. The formation of subretinal granuloma in intermediate uveitis is a possibility when the disease is complicated by a subretinal neovascular membrane.
INTRODUCTION
Intermediate uveitis is an intraocular inflammation involving the anterior vitreous, peripheral retina and pars plana. It usually affects patients from 5 to 30 years old, without gender or racial predilection. The etiology is unknown, but there are several associated diseases. Symptoms are blurry vision, floaters and distortion of central vision. The syndrome is bilateral in 80% of patients and chronic, with periods of exacerbation and remission. Clinical presentation includes mild to moderate anterior chamber inflammation, thin keratic precipitates in the lower portion of the cornea, autoimmune endotheliopathy, vitreitis, vasculitis in the peripheral retina, intravitreal "snowballs," retinal "snowbanking," optic neuritis and cystoid macular edema. The edema may become chronic and cause retinal cystoid degeneration and macular hole (1). Epiretinal membrane and subretinal choroidal neovascularization may also form (1)(2).
Treatment of intermediate uveitis is based on periocular and oral corticosteroids. Cryotherapy or laser photocoagulation of the peripheral retina are options in patients with snowbanking when there is an insufficient response to periocular or systemic corticosteroids (3).
This study describes two cases of patients with intermediate uveitis who developed a subretinal neovascular membrane and a posterior macular granuloma.
Case 1
Two years ago, a 27-year-old female presented with visual acuity of 20/30 in the right eye (RE) and 20/25 in the left eye (LE), endothelial corneal deposits, vitreous cells, snowballs and snowbanks in both eyes, and cystoid macular edema in the right eye observed by fluorescein angiography and optical coherence tomography (OCT) (Figure 1-A). Systemic exploration and the usual laboratory tests were normal for infections and sarcoidosis.
She was followed for one year, using nonsteroidal anti-inflammatory (NSAID) drops irregularly, until a worsening of visual acuity and cystoid macular edema occurred, with VA of 20/60 (RE) and 20/30 (LE). She used prednisone for sixty days, and photocoagulation of the extreme inferior retinal periphery was performed in both eyes. Visual acuity improved to 20/20 in both eyes two months later.
The patient refused to continue the prescribed clinical treatment. She returned twelve months later presenting with count-fingers visual acuity in the RE and 20/20 in the LE, with a dry retina and a pigmented subretinal macular granuloma observed on OCT and fluorescein angiogram (Figures 1-B, 1-C, 1-D).
Case 2
A 15-year-old female patient presented with visual acuity decreased to 20/400 in the RE for eight days. She presented with bilateral vasculitis, vitreitis, snowballs, and papillitis. In the right eye, hemorrhage, an extramacular subretinal neovascular membrane, and serous subretinal detachment were observed on fluorescein angiography and OCT (Figures 2-A). She was treated with 40 mg prednisone and an intravitreal injection of 1.25 mg bevacizumab. Five months later she presented with 20/50 visual acuity and an extramacular granuloma observed on OCT.
DISCUSSION
We describe two cases of healthy, young female patients with intermediate uveitis and cystoid macular edema who developed a subretinal neovascular membrane and a posterior subretinal granuloma. Oréfice published two cases of patients with intermediate uveitis that formed a subretinal neovascular membrane (SRNM) (1).
The chronic inflammatory process and persistent cystoid macular edema could be responsible for retinal degeneration, with consequent damage to the retinal pigment epithelium-choriocapillaris complex and subsequent SRNM (1). Other authors have shown the role of the RPE in SRNM maturation: the membrane would be enveloped, followed by resolution of subretinal leakage and consequent transformation into a granuloma (4).
In the second case, systemic corticosteroid treatment and intravitreal anti-VEGF injection could be related to an accelerated SRNM maturation process and less retinal damage, with a better visual outcome.
In experimental models, the results suggest that involution of the neovascular membrane with maturation, as demonstrated by the cessation of visible fluorescein leakage, is the result of RPE proliferation that tightly envelops the newly formed vessels and probably reabsorbs the previously accumulated subretinal fluid, preventing its further accumulation in the subretinal space (4)(5). Histopathologic findings in postmortem eyes after photodynamic therapy for choroidal neovascularization in age-related macular degeneration showed SRNM enveloped with RPE in both eyes (6).
Two new cases of patients with intermediate uveitis with subretinal neovascular membrane have been described. | 2016-09-29T08:41:17.449Z | 2008-12-01T00:00:00.000 | {
"year": 2008,
"sha1": "9b2aa0f5725d93b83c1b093f89130576e2ac4a9e",
"oa_license": "CCBYNC",
"oa_url": "https://www.scielo.br/j/abo/a/xd3LZzDnyYq5WcSpfrTs6wh/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Crawler",
"pdf_hash": "9b2aa0f5725d93b83c1b093f89130576e2ac4a9e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
238103268 | pes2o/s2orc | v3-fos-license | Temporomandibular Joint Hypermobility - Diagnosis Based on Auscultation and Signals Analysis
Background The temporomandibular joint is a well-known anatomical structure, yet a vast majority of dentists do not know how to treat patients with temporomandibular disorders. What is more, even physiological sounds accompanying joint movement can deceptively indicate pathological features. An example of a TMD is temporomandibular joint hypermobility, a disorder which still requires comprehensive study and analysis. Methods
Background
The temporomandibular joint (TMJ) is an anatomical structure that chiefly consists of the mandibular condyle, temporal bone, articular disc, joint capsule, ligaments, blood vessels, and nerve supply. It is a synovial joint. The functions of the TMJ and its surrounding tissues and muscles include, for example, mastication, speech, opening and closing during food chewing, and swallowing saliva. The articular surface in human joints is formed by hyaline cartilage; the exception is the TMJ, where fibrocartilage occurs.
During the examination, it is necessary to remember that both joints work together. The work of the TMJ strictly depends on the particular architecture of the mandibular condyle, muscle strength and tension, the ligaments, and the type of occlusal contacts. Another important part of the TMJ is the articular disc, which participates in the hinging and gliding actions of movement. Its anatomical shape is oval, covered with a fibrous surface; the disc shape is often referred to as a peaked cap. The disc divides the TMJ into upper and lower compartments, the upper one being bigger than the lower (1). The work of the TMJ also remains in direct correlation with the muscles.
The biomechanics of the TMJ is a complex issue, and its understanding is the basis for effective diagnosis and treatment. In the human body there are two seemingly independent temporomandibular joints, but their connection through the mandibular bone makes them work together. Therefore, in this complex motion system, the translation of one element results in rotation in another. The joint can be divided into two compartments: upper and lower. The lower consists of the condylar process and the articular disc, whose movements are limited by articular ligaments; therefore, in this part, the only possible and physiologically correct movement is rotation. In the upper part, the condylar process with the articular disc forms a kinematic system with the mandibular fossa. Since the articular disc is not firmly fixed in the articular fossa, translation in this area is possible. The articular disc is the common part of both systems and functions as a non-ossified bone (not to be confused with a meniscus); the TMJ is therefore classified as a complex joint. In general, ligaments do not take an active part in joint movement but restrict it passively or neuromuscularly. Articular surfaces must be in constant contact, which, during adduction and abduction movements, is ensured by the proper displacement and deformation of the disc (2). Each joint in the human body might be modelled in a simplistic way as a mechanical connection of a couple of lubricated surfaces, which is inevitably prone to friction. The movement of roughly degenerated joint surfaces causes energy transfer through mechanical compressions and decompressions of the matter, which generates mechanical waves propagating through the surrounding tissues. Depending on their frequency, these vibrations may be classified as infrasound, ultrasound, or acoustic sound. The last are audible and feature a spectral range of 20 Hz-20 kHz. Moreover, these sounds carry diagnostic information and may be successfully used in the preliminary examination of joint condition. It turns out that a healthy joint is either soundless or generates only expected physiological sounds.
It is also worth mentioning that the temporomandibular articular disc plays a critical role in absorbing the shocks applied to the joint and lubricating the surfaces of the contacting parts. The disc is generally composed of macromolecules and the lubricating medium, whose varying chemical composition results in small differences in mechanical properties; therefore, an uneven pressure distribution on the disc is observed. During TMJ movement, the articular disc spreads the exerted pressure across the whole active area; as a result, the shocks are absorbed and the condylar movement is stabilized. The mandibular disc is therefore particularly prone to permanent wear damage, which may lead to TMD. Moreover, the vast majority of temporomandibular disorders (TMD) are presumably caused by degeneration and perforation of the articular disc. According to Xiaoyun et al., the bone of the skull and the mandible may in general be considered linearly elastic. The elasticity modulus of the latter is reported to differ between cortical and cancellous bone. Another important yet obvious conclusion is that the deformability of the bone is much lower than the deformability of the articular disc: the elasticity modulus of the mandible is reported to vary between 13000 and 7300 MPa, whilst in the case of the articular disc it varies between 0.675 and 44 MPa. As has already been pointed out, neither the TMJ ligaments nor the cartilage have a significant impact on the TMJ movement and hence are often skipped in the course of modelling TMJ biomechanics. Many mechanical models have been proposed in the literature to investigate and simulate the undoubtedly complex biomechanics of the TMJ. Usually such models are accompanied by computer-aided design techniques such as the Finite Element Method (FEM) or Multi-body Dynamics Analysis (MDA). It turns out that a simple single-phase elastic model does not allow the mechanical behaviour of the TMJ to be expressed. As a result, a biphasic approach was suggested to describe both fluid and solid components together with the accompanying viscous effect, but it also features some limitations. Therefore, other more promising models, such as poroelastic and viscoelastic materials, have been introduced to fulfil the demanding and complex nature of TMJ biomechanics (3). The stress distribution practically occurring in the TMJ is still not well defined because of the difficulties encountered during experimental measurements. However, Zhan Liu et al. proposed a 3D model of the temporomandibular joint based on CT scans of a person without any TMD. Based on this example, the strain in the TMJ components was simulated with FEM in ANSYS software. The maximum von Mises stress obtained for the articular disc was 0.32 MPa in the interior middle area, 0.12 MPa in the anterior area, and 0.026 MPa in the posterior area, which proves an uneven stress distribution in this part of the TMJ (4).
A healthy TMJ is painless, soundless, and trouble-free during chewing, speaking, and eating. Acoustical effects in the TMJ are related to joint dysfunction. They occur as a result of incorrect disc displacement during movement, caused by the condylar head being displaced from its proper position. Joint sounds have an important role in the examination, diagnosis, and future treatment of temporomandibular disorders. There is a wide spectrum of sounds from the joints. Internal derangement of the TMJ leads to clinical symptoms such as clicking, disc displacement, disc dislocation with reduction, or disc displacement without reduction.
The term disc displacement with reduction describes the situation in which the articular disc has been displaced with respect to the condylar head. It can be displaced anteriorly, medially, or laterally. During adduction and abduction of the mandible, acoustical effects such as clicking, crepitus, or a thud sound can occur. Another type of TMD is known as disc displacement with intermittent locking, a situation similar to disc displacement with reduction but with limited opening (5). Among TMDs we also distinguish the hypermobile temporomandibular joint (TMJH), which may be a disorder or a physiological condition.
TMJH is a physiological state in which the correct disc position is observed but maximal mouth opening leads to subluxation or luxation of the joint and occurs with pain. Hypermobility is commonly known as subluxation. It is a movement of the TMJ during wide mouth opening. The correct anatomy of the TMJ is associated with smooth movement of the condylar head down onto the top of the articular tubercle. In this group of patients, during the widest mouth opening, a sudden skip causes a temporary stop. Such a stop results in a characteristic dull sound, a thump. The phenomenon is physiological and results from the specific anatomy of the articular tubercle. Among the factors that lead to such a specific anatomy of the articular tubercle we can distinguish excessive mouth opening during yawning, singing, vomiting, or eating; conditions such as generalized joint hypermobility (GJH), Ehlers-Danlos syndrome, or Marfan syndrome; and intubation under general anesthesia (5,6). Dynamic compression during excursive gliding and incursive movements confirms subluxation (5)(6)(7). Subluxation may also be considered a non-physiological condition if the patient reports chewing muscle disorders as well as TMJ pain and discomfort. The sound that appears in the final step of opening the mouth may be a sign of subluxation of the condyle, which moves forward (8). When the patient has repeated episodes of subluxation, this may result in lengthening of the ligaments, potentially leading to disorders of the disc (2). TMJH may also occur as a result of injury or damage to the joint capsule. Some patients, in addition to TMJH, also have hypermobility of other joints of the body; this is known as GJH, a greater than average range of motion in many joints. There is no single test commonly used to confirm the diagnosis of GJH (9). Various tests are used to diagnose patients, for example those of Carter & Wilkinson, Kirk et al., Beighton & Horan, and Beighton et al. (10). TMJH has also been investigated by Nosouhian et al.: the case-control clinical study carried out included 69 patients aged 22-42. The main aim was to examine the correlation between the etiological factors of TMJH and its relation to habitual status. Participants had manifestations of TMJH. The researchers divided the patients into three groups based on the maximum mouth opening (measured between the upper and lower incisors): 25 patients were in the group with a range of 50-55 mm, 18 patients in the range of 55-65 mm, and 26 patients with more than 65 mm. All the participants filled in a questionnaire concerning discomfort, pain during opening, mastication, pain in the ear, temple, or neck, side of chewing, and the first symptoms of TMJH. The last part of the research was a clinical examination by an oral and maxillofacial specialist. The procedures also included an examination of the masticatory system, observation of facial symmetry, abnormalities of the teeth and jaw system, and palpation and auscultation of the TMJ with a stethoscope. The obtained results led to the conclusion that the largest number of patients were aged 31-42 (70.99%), and 37.67% of patients were 26 to 35 years old. What is more, the researchers noticed a correlation between pain in the TMJ and maximum mouth opening. TMJH was observed much more frequently in women (78.2%) than in men [9].
Pasinato et al. have shown that generalized joint hypermobility (GJH) is a predisposing factor for the development of TMD. Thirty-four women aged 18-35 were diagnosed with TMD using the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD). GJH was examined by Carter and Wilkinson's criteria modified by Beighton. The volunteers were divided into two groups: with hypermobility and without hypermobility. Of the patients with TMD, 64.71% also had GJH, including TMJH. Patients from the hypermobile group suffered from painful mouth opening. What is more, higher GJH scores corresponded to a wider range of motion with and without pain. The prevalence of GJH was higher in the group with TMD (11). Mehndiratta et al. examined patients with TMJH using MR and noticed that TMJH is connected with an anatomical variant in which the articular eminence has a steep, short posterior slope and a longer anterior slope (5). Nowadays, a lot of attention has been paid to the development of questionnaires for examining the temporomandibular joint and to new ways of diagnosing based on adopted and modified physical methods such as auscultation, radiography, CT, ultrasonography, MR, and many other investigation methods.
The RDC/TMD is commonly known and used by researchers. It employs two axes for the systematized diagnosis of TMD. In Axis I, a clinical questionnaire is used to classify TMD for further diagnosis. After examination, each participant is diagnosed for one or both sides: group I: Ia, myofascial pain; Ib, myofascial pain with limited opening; group II: IIa, disc displacement with reduction; IIb, disc displacement without reduction with limited opening; IIc, disc displacement without reduction, without limited opening; group III: IIIa, arthralgia; IIIb, osteoarthritis; IIIc, osteoarthrosis. Patients can receive a maximum of 5 diagnoses or no diagnosis. In Axis II, patients independently fill in a questionnaire concerning psychosocial factors such as chronic pain, a jaw disability checklist, depression and non-specific physical symptoms, and demographic data.
Noise and other sounds associated with the TMJ have been an objective of researchers for years. In 1952 Ekenstein, in 1974 Outlette, and other authors recorded the performed experiments on videotape and carried out audiovisual evaluation during jaw opening and closing (12). Nowadays, the stethoscope is a basic and typical instrument for the acoustical examination of the TMJ (13)(14)(15). Another option is a more advanced device created by Radke. Auscultation is a commonly used non-invasive diagnostic method. Traditional acoustic phonendoscopes have been used since the 19th century. However, nowadays digital devices facilitate many fields of science and life, including clinical measurements. Therefore, dentists currently take advantage of electronic stethoscopes during TMJ auscultation. Such instruments not only allow noise and redundant frequencies to be digitally filtered but also support the acquisition of the recorded signals for further analysis. This approach also makes it possible to incorporate digital signal processing in order to implement dedicated algorithms supporting the detection of pathological syndromes and pattern recognition. Nonetheless, from the biomedical signal processing point of view, any kind of disorder detection requires establishing a link between the medical context and the digital representation of the signal. Therefore, seeking the patterns occurring in the signal which correspond with real pathologies detected by a physician is usually crucial in the development of algorithms facilitating automated diagnosis. It is often difficult to find representative signal features in the time domain, especially during visual evaluation of the graph. A much more useful methodology involves analysis of the spectral characteristics, which can be performed in the frequency domain. The operation transforming the time domain into the frequency domain is called the Fourier Transform and is described by the following mathematical formula:

X(f) = \int_{-\infty}^{+\infty} x(t) \, e^{-j 2 \pi f t} \, dt    (1)

where: X(f) - resulting function, f - frequency, t - time. This operation may be interpreted as a measure of the similarity between the original signal and each basic sinusoidal component. Equation (1) can be successfully used for analogue signals. However, in the case of digital measurements, it is necessary to take advantage of the discrete version of the Fourier Transform (DFT):

X(f) = \Delta t \sum_{n=0}^{N-1} x(n \Delta t) \, e^{-j 2 \pi f n \Delta t}, \quad \Delta t = 1 / f_s    (2)

where: ∆t - time interval, f_s - sampling rate, n - sample number.
According to the above equation (2), the spectrum can be calculated as a sum of products of signal samples and harmonic components. This is, however, a particular and inverted case of a generic theorem, which states that the analysed signal may be approximated by a sum of elementary (basic) signals:

x(t) = \sum_{k} a_k \, g_k(t)    (3)

where: a_k represents the coefficients determining the overall impact of each component, g_k(t) - elementary signals with different frequencies.
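To make equations (1)-(3) concrete, the following is a minimal sketch (not taken from the paper; the 4000 Hz rate and the two-tone test signal are assumptions) that evaluates the DFT of equation (2) directly as a sum of products of samples and harmonic components and checks the result against NumPy's FFT:

```python
import numpy as np

fs = 4000                      # sampling rate [Hz], matching the recordings
t = np.arange(0, 0.1, 1 / fs)  # 100 ms test signal (an assumption)
x = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 450 * t)

N = len(x)
n = np.arange(N)
k = n.reshape(-1, 1)
# Equation (2): X[k] = sum_n x[n] * exp(-j * 2*pi * k * n / N)
X_direct = (x * np.exp(-2j * np.pi * k * n / N)).sum(axis=1)

X_fft = np.fft.fft(x)          # the same transform via the FFT algorithm
assert np.allclose(X_direct, X_fft)

freqs = np.fft.fftfreq(N, d=1 / fs)
peak = freqs[np.argmax(np.abs(X_fft[: N // 2]))]
print(f"dominant component near {peak:.0f} Hz")
```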
Nevertheless, in practice the considered method is efficient only for stationary signals which include the periodic occurrence of a particular pattern. In this case, the overall spectrum of the signal is sufficient to evaluate all possible harmonic features. However, for non-stationary signals, which may be defined as variable in time or impulsive, this method is not efficient; hence it is required to provide information about the frequencies present in each atomic time window. An approach which meets such requirements is called time-frequency analysis and, in general, is based on calculating the correlation between a kernel function and the signal in basic consecutive time sections. The kernel function should effectively match the spectral characteristics of the original signal, i.e., if the signal is non-stationary then the kernel function should have the form of a non-stationary impulse oscillation with a compact carrier. The more the shape of the basic functions resembles the original signal, the fewer basic functions are needed to efficiently approximate it. An example of time-frequency analysis is the Short-Time Fourier Transform (STFT), which uses sinusoids as basic functions. The STFT allows a two-dimensional representation of time-frequency coefficients, called a spectrogram, to be obtained, with every time-frequency atom of the same dimensions and area. The major drawback of such a result is the identical resolution of all components. The step of key importance in every time-frequency analysis method consists in the selection of an appropriate window function used for slicing the time domain into atomic signal sections. The special variant of the STFT with a Gaussian window function is called the Gabor Transform. More advanced methods, like the Wavelet Transform, allow a lower time resolution to be obtained for lower frequencies and a higher time resolution for higher frequencies (Figure 1). However, the area of all cells still remains the same. Moreover, the uncertainty principle implies that the product of frequency bandwidth and signal duration cannot be lower than a defined constant. In other words, the higher the frequency resolution, the lower the time resolution that can be achieved, and vice versa (17).
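The trade-off described by the uncertainty principle can be illustrated with a short sketch (assumed parameters, not code from the paper): the same signal is analysed with a short and a long window, and the resulting time and frequency grid spacings are printed:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 4000
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 400 * t)
x[: fs // 100] += 1.0          # a 10 ms impulse at the start

for nperseg in (64, 1024):
    f, tt, S = spectrogram(x, fs=fs, nperseg=nperseg)
    df, dt = f[1] - f[0], tt[1] - tt[0]
    print(f"window={nperseg:5d}: frequency step {df:6.1f} Hz, "
          f"time step {dt * 1000:6.1f} ms")
```

The short window localizes the impulse precisely in time but blurs the 400 Hz tone across wide frequency bins; the long window does the opposite.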
The required distribution of time-frequency atoms depends on the particular application, but the design of a computational method shall always strive to limit the resulting area of each cell to obtain maximal resolution along both spectrogram axes. Another interesting technique of time-frequency analysis is the Reduced Interference Distribution (RID). This method belongs to the general class of Cohen's distributions, which constitute the generalization of the Wigner-Ville transform. RID attempts to solve the parasitic cross-interferences occurring in the Wigner-Ville representation by finding a compromise between the resolution and readability of the result. Cohen's class time-frequency methods, including the spectrogram, feature time and frequency shift invariance, which means that any time shift in the signal is reflected as an equivalent time shift in the TF distribution, and any shift in the frequency of the signal is reflected as an equivalent frequency shift in the time-frequency representation. This property is often required for a successful pattern recognition procedure. Nevertheless, in some applications, either time or frequency invariance might not be desirable, i.e., in cases when the time or frequency shift is relevant for feature extraction (4,18,19). Unlike the time and frequency shift invariances, not every distribution of Cohen's generic class can be characterized by the scale covariance which ensures that the representations of both scaled and original signals are the same (20).
Widmalm et al. reported two major groups of sounds evoked by the TMJ which could be specified with the use of the Reduced Interference Distribution (RID). The first group consisted of three types of short-duration signals with a single energy peak below 600 Hz, from 600 to 1200 Hz, and above 1200 Hz, which were classified as clickings of RID types 1, 2, and 3, respectively. The second group included two types of signals characterized by multiple energy peaks distributed over a prolonged period of time: clickings of RID type 4, with energy peaks in the frequency range below 600 Hz, and clickings of RID type 5, with peaks between 600 and 3600 Hz (17)(21). In the literature, many attempts to implement automated classification methods for the mentioned clicking types have been presented. Reported approaches include, among others: RID supported by Artificial Neural Networks, adaptive Gabor transforms with the third-order Rényi number, or the Nearest Constraint Linear Combination (NCLC) accompanied by Neural Networks. Djurdjanovic et al. (22) placed an electret microphone with a flat frequency response between 40 and 20000 Hz into each opening of the auditory canal of the patient. A Reprosil polysiloxane putty was used as a cover for each microphone to attenuate external noise. Signals were recorded with 14-bit resolution at a 48 kHz sampling rate. The motivation for using such a high sampling frequency was driven by literature reports indicating that the analysis of TMJ sounds should cover the whole audible spectrum from 20 to 20000 Hz. A low-pass analogue Butterworth filter with a cut-off frequency of 20000 Hz was used in order to avoid aliasing. In the next step, a 512-point Hanning window was applied when calculating the time-frequency representation. The RID of the signal was computed based on the binomial distribution, which constitutes a particularly attractive discrete estimation of RID that can be computed very efficiently. RIDs of each clicking were extracted manually, which constituted a drawback of the proposed method and could have been improved by implementing an automated segmentation method. As a result, 35 clickings of type 1, 31 of type 2, and 38 of type 3 were visually isolated and used for testing of the implemented method. Crepitations were not considered in this research. In the next step, the authors exploited the time-shift invariant (TIR) and time-shift scale-invariant (STIR) RID and calculated the moments of the obtained representation (orders 1 up to 15) as feature vectors. Finally, the NN, ZSS, and NCLC classifiers were trained and the performance of each method was evaluated. The major result showed that in the case of STIR all the pattern recognition algorithms featured similar performance. Nevertheless, in the case of TIR the efficiency of NN was better (one misclassification per 15 instances from each clicking type) than for the other techniques. It has also been concluded that applying scale invariance to TMJ sound classification deteriorates classifier efficacy, regardless of which feature recognition technique was utilized. What is more, the authors have also stated that probably more than just three TMJ clicking types can be distinguished (22). Sungyub Yoo et al. proposed an analysis of TMJ clicking sounds using Radially Gaussian Kernels. During this research, TMJ sounds were recorded during maximal jaw opening and protrusion followed by opening and closing from ten patients.
The acoustic signals were acquired using two electret microphones placed in each ear canal at a 15000 Hz sampling rate. The measured frequency range (initially up to 16000 Hz) was limited to 100-500 Hz by a band-pass filter. In the preprocessing step, signals were down-sampled to 7500 Hz. Finally, six types of clickings were differentiated based on the time-frequency patterns observed in the RGK distribution (22)(23). It is also worth mentioning the modern trend of wearable sensors, which has been a subject of interest for scientists and commercial ventures all over the world. Toreyin H. et al. reported that this technology is not available in the context of joint sound measurement and proposed an interesting architecture of a wearable system for knee joint disorder detection based on MEMS sensors and an FPGA (4,18). Such an approach could also be considered with regard to TMJ analysis and TMD diagnosis.
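The moment-based feature extraction and nearest-neighbour classification reported in these studies can be sketched roughly as follows. This is a simplified stand-in, not a reproduction of the cited setups: an ordinary spectrogram replaces the RID, the clicks are synthetic decaying bursts, and scikit-learn's k-NN replaces the original classifiers, so every signal and parameter here is an assumption.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def tf_moments(signal, fs=4000, order=4):
    # Geometric moments of the normalized time-frequency energy distribution,
    # standing in for the RID-based moments of the cited studies.
    f, t, S = spectrogram(signal, fs=fs, nperseg=128, noverlap=64)
    S = S / S.sum()
    feats = []
    for p in range(order + 1):
        for q in range(order + 1 - p):
            feats.append(np.sum(S * np.outer(f ** p, t ** q)))
    return np.array(feats)

def click(freq, n=512, fs=4000):
    # Synthetic decaying burst standing in for a recorded TMJ click.
    t = np.arange(n) / fs
    return np.exp(-40 * t) * np.sin(2 * np.pi * freq * t) \
        + 0.05 * rng.standard_normal(n)

# Two hypothetical clicking "types" differing only in dominant frequency.
X = np.array([tf_moments(click(f)) for f in [300] * 20 + [900] * 20])
y = np.array([0] * 20 + [1] * 20)

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```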
Aim of the study
The main goal of the research was to examine a group of patients suffering from TMD in order to describe the acoustical symptoms accompanying the hypermobile temporomandibular joint and compare them with the sounds of clickings. Special attention was paid to patients who had received no diagnosis under RDC/TMD.
Our second aim was to provide a comparative analysis of the mentioned disorders from the digital sound processing standpoint.
Last but not least, the final goal of this research was to create a TMJ sounds database which might constitute a reference for further research and the development of computer-aided TMJ diagnostic methods, as well as serve educational purposes.
Methods
The study involved patients who presented because of problems with the temporomandibular joints. Participants were from the area of Cracow, Poland. The Jagiellonian University Bioethics Committee gave its consent for the examination, no. 1072.6120.71.2019, dated 24 April 2019. All participants were informed about the aim and objectives of the study, and each of them expressed written consent to participate in the examination by signing the consent form before the study. During the procedures, the rules of Good Clinical Practice were applied and the Declaration of Helsinki was followed. The criterion for inclusion was a click sound that appeared at the end of abduction and was not covered by an RDC/TMD diagnosis. The exclusion criteria were: trismus, patients who did not agree to participate in the study, patients non-cooperative during the study, patients with active herpes, and patients with systemic sclerosis.
The temporomandibular disorders were classified with the use of the RDC/TMD (24). The right and left sides of the body were considered separately. The Polish version of the RDC/TMD questionnaire (translated by Osiewicz) was used (25)(26). Both axes of the RDC/TMD were used: Axis I was filled in by the dentist and Axis II was filled in by the patients. After the examination with RDC/TMD Axis I and Axis II, every participant was auscultated with two electronic Littmann 3200 stethoscopes on the right and left sides of the body at the same moment. The devices convert the analogue signal to the digital domain with a 4000 Hz sampling frequency, which, by the Nyquist criterion, makes it possible to reproduce harmonic sound components up to 2 kHz. Then the auscultatory signal is modified by three implemented filters to emphasize selected sections of the band, simulating the standard bell and diaphragm (since they are known and common in the clinical practice of pneumatic stethoscope usage) plus an additional extended mode. The bell mode amplifies sounds from 20 Hz to 1 kHz, but with higher gain between 20 and 200 Hz; the diaphragm mode allows the full bandwidth (20 Hz-2 kHz) to be used, with the part from 100 to 500 Hz emphasized. The extended mode works similarly but, in this case, the range between 50 and 500 Hz is favoured. The maximal amplification available in the Littmann 3200 is 24x, with 9 levels of regulation. In all the operating modes, the ambient noise reduction feature is enabled. The user can modify settings using a small graphical display and buttons. The stethoscopes are battery powered and provide power-saving features. To obtain the acquired signals, the device can be connected to a computer via a Bluetooth wireless interface. A dedicated application allows a local patient database to be created and signals to be represented graphically and exported in a sound format. In the course of the carried-out research, this feature has been used to prepare the collected signals for further processing.
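For illustration only, the diaphragm mode described above (full 20 Hz-2 kHz band with the 100-500 Hz region emphasized) might be approximated as below; the Littmann 3200 filter shapes are proprietary, so the Butterworth orders and the emphasis gain are pure assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 4000  # sampling rate of the exported recordings [Hz]

def diaphragm_mode(x, emphasis_gain=2.0):
    # Wide pass band (upper edge kept just below the Nyquist frequency)...
    wide = butter(4, [20, 1999], btype="bandpass", fs=fs, output="sos")
    # ...plus an extra contribution from the emphasized 100-500 Hz band.
    mid = butter(4, [100, 500], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(wide, x) + (emphasis_gain - 1.0) * sosfiltfilt(mid, x)

# Usage (hypothetical): y = diaphragm_mode(recording)
```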
The stethoscopes are approved for use in Poland, notification no. 0065 5244 2581, supplement no. 1836 4103 1912. Assessment of acoustical symptoms was performed on the basis of two acoustical biosignals that were measured using the electronic stethoscopes for about 15 seconds, during which the patient kept opening and closing the mouth. Auscultation was carried out by applying the tip of the stethoscope to the facial skin in the preauricular areas, in front of the tragus of the ear, on the right and left sides simultaneously. After that, the signals from the electronic stethoscopes were sent to the computer for analysis of the recordings.
The Python 3 programming language was used for TMJ sound processing. The signal preprocessing consisted of amplitude normalization to [-1, 1] and removal of the DC component. In the next step, the representation of the signal in the frequency domain was computed with the use of the NumPy library and the Fast Fourier Transform algorithm. Nevertheless, according to the papers cited in the Introduction section, the global signal spectrum is not an efficient method of analysis for non-stationary signals.
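A minimal sketch of this preprocessing chain follows, assuming a mono recording sampled at 4000 Hz (the function names are ours, not the paper's); here the DC component is removed before normalization so that the output is guaranteed to lie in [-1, 1]:

```python
import numpy as np

def preprocess(x):
    x = x - np.mean(x)            # remove the DC component first
    return x / np.max(np.abs(x))  # then normalize amplitude to [-1, 1]

def magnitude_spectrum(x, fs=4000):
    # Global spectrum of the preprocessed signal via the FFT.
    X = np.fft.rfft(preprocess(x))
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    return freqs, np.abs(X)
```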
Therefore, the STFT of representative sounds was computed with the Matplotlib library. In order to split the signal into atomic sections for local spectral analysis, a Blackman window function of 512 samples length with 256 samples of window overlap was used. Spectrograms were presented on linear time and frequency scales, while the magnitude of each component was represented on a logarithmic scale. The frequencies below 80 Hz were removed from the spectrograms to emphasize the higher harmonics that are responsible for the clicking effect; they are, however, retained in the global spectral representation of each signal.
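These spectrogram settings can be reproduced, for example, with Matplotlib's specgram; hiding the frequencies below 80 Hz via an axis limit is one possible reading of their removal from the displayed spectrograms:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_spectrogram(x, fs=4000):
    # 512-sample Blackman window, 256-sample overlap, magnitude in dB.
    plt.specgram(x, NFFT=512, Fs=fs, noverlap=256,
                 window=np.blackman(512), mode="magnitude", scale="dB")
    plt.ylim(80, fs / 2)          # hide components below 80 Hz
    plt.xlabel("Time [s]")
    plt.ylabel("Frequency [Hz]")
    plt.show()
```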
Results
In the study group there were 47 people, including 29 women and 18 men. The mean age of the participants was 32.46 years. The results of the RDC/TMD diagnosis are shown in Table 1 (27). The result was a score of 1. In Axis I of the RDC/TMD, the first patient reported no facial pain on either side of the face. The opening pattern was straight in the clinical trial. The vertical range of motion was measured at incisor 11: opening without muscle or joint pain was 40 mm, and the maximum active opening was 44 mm, with no muscle or joint pain complaints. The maximum passive opening was 48 mm, without pain.
Incisal relationship: horizontal 4 mm, vertical 3 mm. Excursive movements were without muscle or facial pain: right and left lateral 13 mm, protrusion 4 mm. In the examination following the RDC/TMD instructions, there were no noises during opening, closing, lateral, or protrusive movements. Muscle and temporomandibular joint pain on palpation were not reported. In Axis II, general health and oral health were assessed by the patient as good. The patient felt no pain in the face, ear, or temple. She noticed a problem with sounds in both joints created during wide opening and while eating hard food or yawning. Sounds from both joints during wide opening were the reason for reporting for an examination. In the examination with the RDC/TMD, the result was no diagnosis. The last step was a clinical examination with the electronic stethoscope, and the signals were sent to the computer for further analysis.
The second patient was a man aged 22. The reason for examination was sounds from the TMJ during wide opening and closing, and fear of abnormalities in the patient's opinion. The patient is under dental care. In his medical history there was no orthodontic treatment in the past, and currently only conservative treatment (fillings). The clinical examination showed dental arches with probable agenesis of teeth 25 and 45. The Angle relationship was Class I, with a Class II canine relationship on the right and a Class I canine relationship on the left. The patient had a correct midline, with no deviations from the norm. Teeth 18, 17, 16, 24, 27, 28, and 36 contain composite fillings. In Axis I of the RDC/TMD, the patient reported facial pain, especially muscle and joint pain, on both sides of the face. The patient described the pain intensity as 3 on a 0-3 scale, where 3 is the most painful. The opening pattern was straight in the clinical trial. The vertical range of motion was measured at incisor 11: opening without muscle or joint pain was 56 mm, and the maximum active opening was 63 mm, with pain located in the muscles on both sides of the face. The maximum passive opening was 65 mm, with pain in the muscles on both sides. Incisal relationship: horizontal 4 mm, vertical 2 mm. Excursive movements were without muscle or facial pain: right lateral 13 mm, left lateral 14 mm, protrusion 8 mm. During the protrusion movement, the patient noticed pain in the area of the right muscles and joint, with an intensity of 3. In the examination following the RDC/TMD instructions, there were no noises during opening, closing, lateral, or protrusive movements. Muscle and temporomandibular joint pain on palpation were reported: the masseter with an intensity of 2 on both sides and the lateral pole of the TMJ on both sides with an intensity of 3. In Axis II, general health was assessed by the patient as very good and oral health as good. The patient had felt pain in the face, ear, or temple in the last month. He noticed a problem with sounds in both joints created during wide opening and while eating hard food, yawning, and swallowing, as well as ringing in the ears. In the examination with the RDC/TMD, the diagnosis was Ia, myofascial pain, and IIIa, arthralgia. At the end of the examination, the patient was auscultated with the electronic stethoscope.
Both patients have hypermobility of the joints, which was confirmed by a functional X-ray of the temporomandibular joints showing condylar translation.
Recorded signals were collected and analyzed using algorithms implemented in the Python 3 language.
The software provides a graphical representation of the acquired sounds, which is an important functionality. Thanks to such a graphical representation, the person analyzing the records can spot the differences between particular cases and also gain information about their morphology from the signal processing point of view, useful for further development.
On the basis of Fig. 2, it can easily be assessed how the quality of the registered signal depends on the proper placement and contact of the stethoscope's head with the skin surface. When the downforce is too high or the grip is uncertain during the examination, uncontrolled movements of the stethoscope are possible. This in turn leads to friction, which may manifest as high-amplitude ripples in the time representation. On the other hand, the tension of the diaphragm and tissues can significantly influence the acoustic properties of the system. In the right part of Fig. 2, the higher frequencies of the spectrum are reduced in comparison to the artifact-free signal on the left. This proves that artefacts appearing during the measurement result in changes of bandwidth, which in turn might contribute to a deceptive diagnosis.
In general, joint auscultatory signals feature noise-like spectral characteristics with amplified regions corresponding to joint movements during the examination. Therefore, each signal contains a very low frequency component resulting from the cyclic repetition of motor activity. On the other hand, pathological clickings are caused by rapid dislocations of joint structures, which are accompanied by the formation of considerable acoustical waves. One of the steps of the proposed processing methodology consists in DC removal. However, asymmetrical sharp peaks are still visible in the signal's time representation (Fig. 3). This phenomenon corresponds to the rapid stimulation of the stethoscope's membrane by click vibrations. The spectrum of the TMJH sounds is situated in the lower part of the frequency band, and the amplitudes of its components are lower in comparison with pathological records.
Figure 4 presents an original hypermobile TMJ sound with a visible double-peak waveform, which is typical for TMJH. Another observation is that the amplitudes of vibrations corresponding to the joint movement between extreme positions are very low. This is in line with the anatomy and physiology of the joints: the movement of TMJ structures in a healthy person is smooth and soundless according to the traditional auscultation criteria.
As already stated, the analysis of non-stationary signals requires the observation of frequency changes as a function of time, which can be achieved by the application of the time-frequency representation (spectrogram). As a result, characteristic frequencies can be differentiated in each section of the recorded joint movement. Nonetheless, artefacts present in the acquired sounds introduce heterogeneity and can therefore overlap diagnostically valuable parts (Fig. 5).
Hypermobility of the TMJ can be easily distinguished from pathological clicking using the spectrogram (Fig. 6).
TMJH can be characterized by periods of silence in the recordings, whilst cracks and noise are present (with different amplitudes) throughout the whole recording of a TMJ diagnosed with clickings. A second typical feature of TMJH is represented by the double vertical bars corresponding to the hypermobile clicks, which do not occur in pathological signals.
Discussion
Daily activities such as eating hard foods, gasping, and yawning are, in patients with hypermobility of the temporomandibular joints, associated with a specific sound described as a thud (6). Sounds in the joints created during everyday activities draw the attention of others, which is usually embarrassing; ashamed patients focus their attention on the repetitive loud sounds. This is often the reason why patients seek help and treatment. In the standard RDC/TMD classification, there is no diagnosis for people who have a sound (referred to as a thud) in the final phase of the mandibular abduction movement (25)(26)(24). As the current research shows, signals from the temporomandibular joints can also be physiologically connected with the anatomy, as in the participants with TMJH. It is necessary to take care of that group of patients and to give them the right diagnosis. However, an extended version of the RDC/TMD questionnaire, which includes a brief diagnosis of "hypermobility disorders," has been published under the name Diagnostic Criteria for Temporomandibular Disorders (28). Nevertheless, only further clinical examination and additional medical investigation, for example with X-ray, ultrasound, or MRI, can confirm the diagnosis. It should be stressed that in the proposed methodology the common stethoscope was used as the main measuring device. The stethoscope is a very popular, cheap, and well-known piece of medical equipment, easy to use and non-invasive. With the help of an electronic stethoscope and numerical data, it is easy to derive a graphical representation (e.g., a spectrogram) associated with the sounds of TMJH patients. Other researchers, e.g., Nosouhian et al., also noticed the difference in auscultation in the group of patients with severe GJH and TMJH (29). Many advantages of using a stethoscope for the purposes of confirming the TMJ diagnosis can be pointed out. First, auscultation with a stethoscope is a part of the examination that can be performed during a patient's visit to the consulting room and immediately analysed in order to visualize the measured signal on the computer screen. The measured acoustical signal and its representation in the frequency and time domains are characteristic of each sound-related dysfunction of the TMJ. Moreover, a stethoscope can be used in certain cases when X-rays are contraindicated (e.g., in pregnant women). In the course of the carried-out research, all the recorded and analyzed signals were acquired using the same equipment. On the one hand, this seems to be a proper approach since the signals are comparable, but on the other, the signals are subject to the same artefacts. Errors are inevitably related to the construction of the stethoscope itself; among the most common problems we can distinguish the significant influence of the transfer function / amplification frequency characteristic on resonance phenomena, or the attenuation of a significant part of the band. To verify the quality of the acquisition setup, further analysis with advanced equipment and repeatable measurement conditions is required.
Nevertheless, it should be stressed that the possibility of applying the proposed methods in clinical conditions forces the usage of commercially available and certified medical devices, not sophisticated laboratory equipment that is difficult to operate.
TMJ dysfunctions are manifested through incorrect disc displacement during movement. The whole mechanism is very complex, and all the essential components, such as the mandibular condyle, temporal bone, articular disc, joint capsule, ligaments, blood vessels, and nerve supply, are involved in the joint movement.
Changes in the internal structure of the TMJ result in acoustical symptoms such as clicking, crepitus, or a thud sound during joint movement. Therefore, nowadays, one of the commonly used TMJ examination methods is auscultation with a stethoscope. It is a popular technique in which the obtained results depend strongly on the experience of the medical personnel and the normalized properties of the devices. Auscultation could further play an important role in the examination, diagnosis, and future treatment of temporomandibular disorders. However, the abnormal noise generated during joint movement can nowadays be evaluated by normalized and objective methods, which could eliminate the disadvantages of the classical approach and could be used at different stages of joint disorders, providing a correct diagnosis.
The development of such simple and efficient methods should be treated as a challenge due to the wide spectrum and types of sounds coming from degenerated joints, resulting from many primary functional and anatomical causes. However, in this paper, such a methodology has been proposed.
Based on that, the examination of the joint and recording of noises has been carried out during ordinary, free movement of the joint, which ensured repeatable conditions of movement. The investigation could be conducted in all accessible ranges of movement. The scope of the investigation considers the functionality of the whole mechanism, which is a great difference and advantage in comparison with, e.g., medical imaging methods that provide information concerning only the geometrical properties of the joint or the internal bone structure. Medical imaging methods do not provide sufficient information concerning the influence of tissues of lower density (muscles, ligaments), not to mention the synovial fluid or phenomena occurring at the microscale on moving surfaces, such as the grease layer, local Hertz stress, etc. Such methods have limited applications in TMJ diagnostics due to the fact that the measurement of geometry is performed in the static state (no motion). At the same time, the spatial resolution of medical imaging methods makes it impossible to measure structural changes of bones or joint surfaces whose size is smaller than the resolution of the equipment used.
An important feature of the method proposed in the paper is that the acquisition of the acoustical signals throughout the examination takes place during the functional activity of the joint. Such conditions allow us to measure and then analyse the influence of all the sources of joint dysfunction, such as the degeneration of joint surfaces and synovial fluid or excessive muscle tension. These primary sources of dysfunction appear jointly and influence the reciprocal movements and rearrangement of tissues in the proximity of the joint.
The analysis of acoustical signals of TMJ activity carried out in the time-frequency domain by use of the Short-Time Fourier Transform algorithm has allowed the generated sounds to be visualized and decomposed into time and frequency components. On the basis of the obtained STFT results, hypermobility of the TMJ can be easily distinguished from pathological clicking. It is worth emphasising that such an analysis can be performed immediately during the patient examination.
Therefore, methods of auscultation can be complemented by biomechanical observation of joint movement and analysis of the accompanying signals displayed in the form of spectrograms on linear time and frequency scales, with the magnitude of each component expressed on a logarithmic scale.
The acoustical disturbances of the recorded signals are removed by filtering out the frequencies below 80 Hz in order to purify the signal and emphasize the more significant higher harmonics corresponding to the acoustical effect of clicking. The examination process is comfortable for both patients and doctors. The signals collected with the electronic Littmann 3200 stethoscope are recorded and sent to the computer via the Bluetooth connection without the necessity of establishing any wired connection or manually prompting the signal processing, since the whole analysis is performed automatically and the results are immediately displayed.
Conclusion
Based on our research, we note that hypermobile joints are a rare case among patients: in our study group, only 4% had sounds like the thud. Due to the uncommon occurrence of such patients, research should be conducted to look for TMJH cases and to disseminate the findings, so that the results can expand the knowledge of dentists.
The comparative analysis of the disorders mentioned in this paper proved that hypermobility can be distinguished from other dysfunctions based on the characteristic time-frequency features present in the spectrogram representation of the TMJ sound.
We recorded and analyzed sounds from the temporomandibular joints of patients with TMJH. However, due to the small number of TMJH patients, it was not possible to create a reference database of these sounds. Unfortunately, this goal has not been achieved yet, but we plan to continue further research.
Additional analysis of the signals from the Littmann 3200 can easily be extended in the future, including more advanced diagnosis with electronic equipment. The kind of sounds and their characterization are helpful in diagnosis. Cooperation between medical and technical research can improve diagnosis and make the character of the sound easier to understand by using computer technology.
Figure caption: Comparison of pathological (on the left) and healthy but hypermobile (on the right) TMJ sounds in the time and frequency domains.
Figure caption: Comparison of spectrograms estimated for the patient with pathological clicks of the TMJ (left side) and the patient with TMJH (right side). | 2020-07-16T09:07:19.497Z | 2020-07-14T00:00:00.000 | {
"year": 2020,
"sha1": "8c04718e8c57f876f3dc12d739bfcecff1dff485",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-41127/v1.pdf?c=1631860028000",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "bcae8f4be8a4e2a3ba01ae080fdbb4e043de463e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237230644 | pes2o/s2orc | v3-fos-license | Use of the Immunodiffusion Test in the Serodiagnosis of Aspergillosis
The diagnostic value of an immunodiffusion (ID) test with standardized precipitinogens derived from five Aspergillus species was determined with sera from 60 proven and 12 suspected cases of aspergillosis. The data demonstrated that the greatest number of aspergillosis cases were detected by the concurrent use of A. fumigatus and A. niger precipitinogens. With these precipitinogens, the ID test permitted the serodiagnosis of aspergillosis in 82% of the 60 proven cases and in 83% of the 12 suspected cases. The presence of one or more precipitins was indicative of aspergilloma, of allergic bronchopulmonary aspergillosis, or of invasive aspergillosis. Precipitins were detected in 93% of the sera from patients with aspergilloma, in 50% of the sera from patients with allergic bronchopulmonary aspergillosis, and in 88% of the sera from patients with invasive aspergillosis. Although the presence of one or two precipitin bands could indicate any form of aspergillosis, the presence of three or four was strong evidence of either aspergilloma or invasive aspergillosis. The ID test was found to be 100% specific in an evaluation of its effectiveness with 65 sera from individuals with other systemic mycotic infections, bacterial or neoplastic diseases, and from apparently normal humans. In diagnosed cases of aspergillosis, the examination of serial serum specimens provided information about the clinical course of the disease. A reduction in the number of precipitin bands and significant titer changes were noted as the patients responded to therapy.
The nonspecific clinical and radiological pulmonary manifestations of aspergillosis create diagnostic problems. A combination of cultural and histological evidence provides the only basis for an unequivocal diagnosis. Such evidence, however, cannot always be obtained. In such situations, serological methods may be useful adjuncts in establishing a diagnosis of aspergillosis.
In response to an increasing number of requests for aspergillosis antibody tests, this study was undertaken to evaluate the diagnostic adequacy of the immunodiffusion test and its prognostic value.
MATERIALS AND METHODS
Serum specimens. Sera used in this study were obtained from patients with proven aspergillosis and other systemic mycotic infections, from patients with pulmonary disease of unconfirmed etiology, from patients with asthma and with bacterial and neoplastic diseases, and from apparently normal humans. In each case, the clinical diagnosis and the cultural data were obtained from the attending physician. In 17 of the aspergillosis cases, only the generic identification of the etiologic agent was provided. The aspergillosis cases were classified into three categories: (i) aspergilloma, with evidence of fungus ball(s); (ii) allergic bronchopulmonary aspergillosis, with no tissue invasion but with wheezing, mucous plugs with hyphae, eosinophilia, or transitory pulmonary infiltrates; and (iii) invasive aspergillosis, in which the fungus had invaded tissue. All sera were preserved with Merthiolate (0.01%).
ID tests. Immunodiffusion (ID) tests were performed in 1% Noble agar and 0.25% phenol in 25 ml of pH 8.6 Veronal buffer (LKB) in 75 ml of distilled water. Glass slides (25 by 75 mm) were cleaned with alcohol, placed in slide frames, and coated with a 0.1% solution of the buffered agar and 0.05% glycerine. After the slides had dried, 10 ml of the 1% agar was added (10 ml per three slides). The slides were incubated at 37 C for 1 hr in a moist chamber before wells were cut.
Antigens were 8X-concentrated, acetone-precipitated culture filtrates from 5-week-old Sabouraud dextrose broth cultures of A. fumigatus B1014, B1172, B1173, and B1181; A. flavus B15 and B771; A. nidulans B599; A. niger 107; and A. terreus B1178 grown at 31 C. Similarly treated, uninoculated Sabouraud dextrose broth served as an antigen specificity control. The carbohydrate content of the antigens was determined by the Anthrone test (13) and adjusted to 1,000 to 1,500 µg/ml. The antigens were tested for the presence of C-reactive protein (7) by using anti-CRP serum (Difco). C-reactive protein was not detected in any of the filtrate antigens. The antigenicity of the concentrated filtrates was checked with homologous rabbit antiserum prepared basically by the method of Pepys et al. (11).
For routine tests, 3-mm wells were cut with a distance of 6 mm between all wells. Titrations were performed by placing various dilutions of serum in the outer wells. Slides were incubated at room temperature in a moist chamber for 3 days. They were then washed in 1% saline overnight and in distilled water for 1 hr; wells were then refilled with 1% Noble agar, dried, and stained with Buffalo Black NBR (Allied Chemical). The reactions were then read and the results were recorded. Sera that produced a line or lines of identity with a reference serum from a proven human aspergillosis case or with an anti-A. fumigatus B1181 rabbit serum were considered positive.
RESULTS
Sera from 60 patients with culturally, histologically, or radiologically proven aspergillosis were studied (Table 1). A. fumigatus was reported to have been isolated from 23 of these patients, A. niger from four, A. flavus, A. terreus, and A. versicolor from one each, and unidentified aspergilli from 17. In 10 of the aspergilloma cases and three of the allergic bronchopulmonary cases, aspergilli were not isolated. In these cases, diagnosis was based solely upon clinical and histopathological data. The absence of positive cultures from patients with aspergilloma and allergic bronchopulmonary aspergillosis has been reported previously. Campbell and Clayton (3) noted that the mycelium in a fungus ball is of low viability, and Pepys (10) found that sputum cultures from patients with allergic bronchopulmonary aspergillosis are frequently negative during episodes of pulmonary infiltration and that sputum cultures from patients with aspergilloma may be negative.
The data in Table 1 show that 7 of the 60 patients appeared to have primary cases of aspergillosis. The other 53 patients had a variety of underlying diseases. Tuberculosis, the most frequently occurring, was found in 23 of the 53 patients. Sera from 12 patients with pulmonary disease of unknown etiology were also studied. In these cases, aspergillosis was strongly suspected, but a diagnosis could not be confirmed. A. fumigatus was isolated from two of these patients, A. niger from one, and Aspergillus sp. from four others. No aspergilli were isolated from the other five. These patients had a variety of underlying diseases; the most prominent was tuberculosis.
The precipitin reactivity of sera from patients with aspergillosis, patients with other pulmonary diseases, and from apparently normal subjects is shown in Table 2. The immunodiffusion test was positive in a total of 49 of the 60 (82%) aspergillosis cases studied: 28 of 30 (93%) of the aspergilloma cases, 7 of 14 (50%) of the allergic bronchopulmonary cases, and 14 of 16 (88%) of the invasive aspergillosis cases. Ten of the 12 (83%) sera from patients with pulmonary disease of unknown etiology were precipitin-positive as were 2 of the 17 (12%) from asthma patients. Fifty-five sera from proven cases of other systemic mycotic infection, bacterial diseases, or neoplastic diseases and 10 sera from apparently normal humans were all negative for precipitins to the five Aspergillus sp. tested.
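For readers who want to reproduce the arithmetic, the percentages quoted above follow directly from the raw counts in the text. The sketch below recomputes them; it is illustrative only, and the group labels are ours rather than the table's.

```python
# Recompute the diagnostic performance quoted above from the reported counts:
# 49/60 aspergillosis sera precipitin-positive; all 55 other-disease sera and
# 10 normal sera negative. Only the counts come from the paper.
groups = {
    "aspergilloma": (28, 30),
    "allergic bronchopulmonary": (7, 14),
    "invasive aspergillosis": (14, 16),
}

positive_cases, total_cases = 49, 60
negative_controls, total_controls = 65, 65  # 55 other diseases + 10 normals

print(f"overall positivity in cases: {positive_cases / total_cases:.0%}")        # 82%
print(f"negativity in non-cases:     {negative_controls / total_controls:.0%}")  # 100%
for name, (pos, n) in groups.items():
    print(f"{name}: {pos}/{n} = {pos / n:.0%}")
```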
Of the 95 sera from aspergillosis cases studied, 75 were reactive in the immunodiffusion test (Table 3). Seventy-four of the 75 reactive sera demonstrated precipitins to A. fumigatus antigens; 52 of the 75 sera contained only precipitins for A. fumigatus, whereas 22 contained, in addition, precipitins to other Aspergillus sp. The one serum which did not react with A. fumigatus reacted only with A. niger.
Seven of the 16 sera from patients with pulmonary diseases of unknown etiology showed precipitin activity only with A. fumigatus antigens. Two reacted only with the A. niger antigen. The other three precipitin-positive sera in this group reacted with precipitinogens to A. fumigatus and other Aspergillus sp. The two precipitin-positive sera from patients with asthma reacted only with A. fumigatus antigens. None of the sera in this study had precipitins to A. flavus, A. nidulans, or A. terreus in the absence of demonstrable precipitins to A. fumigatus.
The number of precipitins noted in aspergillosis case sera varied from one to four. Sera reacting with A. fumigatus or A. niger antigens produced as many as four precipitin bands. In contrast, sera reacting with antigens to the other species of Aspergillus showed only one precipitin. The data in Table 4 show the number of precipitin bands produced after reaction of sera from patients with different clinical forms of aspergillosis with A. fumigatus antigens. Sero-positive aspergillosis cases, regardless of clinical type, usually demonstrated one to two precipitins. Only 2 of the 43 positive sera from the aspergilloma cases produced three precipitin bands, and three sera produced four bands. None of the nine positive sera from the allergic bronchopulmonary cases produced more than two precipitin bands. Four of the 22 positive sera from invasive cases produced three precipitin bands, and two produced four bands. All 22 produced at least one band of identity with the reference sera. Figure 1 shows reactions obtained with sera from aspergilloma cases and sera from pulmonary invasive cases with the band produced by the human reference serum against A. fumigatus antigen B1172. Figure 2 illustrates reactions of A. fumigatus antigen B1172 and aspergilloma case serum, invasive case serum, and allergic bronchopulmonary case serum in reference to proven human case serum and rabbit reference serum. The three precipitin bands produced against A. niger precipitinogen by a serum from a patient with an A. niger aspergilloma are shown in Fig. 3. This serum also reacted with A. fumigatus precipitinogens, producing a band which shows partial identity with the reference A. fumigatus human serum. Examination of the sera from the patients with suspected aspergillosis revealed one or two precipitin bands in 9 of the 12 positive sera and three bands in only one serum. The two sera in this group that reacted only with the A. niger antigen produced only one precipitin band. Precipitin-positive sera from the two patients with asthma produced only one band with A. fumigatus precipitinogens.
Seventy-two positive sera were titrated with the A. fumigatus antigens (Table 5). Of 42 positive sera from cases of aspergilloma, 34 reacted in the range of undilute to 1:8, whereas eight reacted at a dilution of 1:16 or greater. None of eight positive sera in the allergic bronchopulmonary group had a titer greater than 1:8. Only 5 of the 22 positive sera from patients with invasive aspergillosis had titers of 1:16 or greater. All five produced at least two precipitin bands. Sera with titers of 1:32 or 1:64 produced either three or four precipitin bands.
With the exception of one serum, the titers of the positive sera from the suspected aspergillosis group of patients varied from undilute to 1:16. The exceptional serum, from a patient with chronic pulmonary disease, showed a titer of 1:512 and produced only two precipitin bands. The two precipitin-positive sera from asthma patients demonstrated titers of 1:1 and 1:2, respectively. Serial sera from four patients with proven aspergillosis were studied to determine the prognostic value of the ID test. The pertinent clinical, laboratory, and serological data from these patients are given in Table 6. It shows that the antibody response of patient ER reflected his clinical state: the number of precipitin lines dropped from two to zero, and the titer dropped from 1:4 to zero. In the case of BC, the number of precipitin lines escalated from two to three during treatment and then declined to one, whereas the corresponding titers changed from 1:8 to 1:16 to 1:4. When the third specimen was taken, the physician considered the patient clinically well. JE is the patient with an A. niger aspergilloma whose serum reactions were referred to in Fig. 3. The second serum from patient JE was the only positive one from the proven aspergillosis cases that did not react with A. fumigatus antigens. Case DR represents an inadequately ...
DISCUSSION
Our studies indicate that the ID test for aspergillosis is an excellent aid in the laboratory diagnosis of aspergillosis. The presence of precipitating antibodies, regardless of the number of bands or titer, indicates infection, colonization, or allergy due to an Aspergillus species. Use of the ID test permitted the diagnosis of 28 of 30 (93%) cases of aspergilloma, 14 of 16 (88%) cases of invasive aspergillosis, and 7 of 14 (50%) patients with allergic bronchopulmonary aspergillosis. These results are in agreement with the findings of other investigators. Longbottom and Pepys (7) found that 98% of the sera from 57 patients with aspergilloma had demonstrable precipitins. Similarly, Campbell and Clayton (3) found 91% of 23 aspergilloma patients to be precipitin-positive and 70% of 87 patients with allergic aspergillosis to be positive. These authors suggested that the presence of serum precipitins bears no direct relationship to allergic bronchopulmonary aspergillosis but probably signified active or recent infection. Our ID test data with sera from allergic bronchopulmonary cases support this contention.
Our experiences with the ID test indicate that it is specific. Aspergillus sp. precipitins were not detected in any of the 55 sera from patients with other systemic mycotic infections or bacterial or neoplastic diseases or in 10 sera from apparently normal humans. These results conflict with the view of Stallybrass (14) that precipitins may be formed from chance contact with Aspergillus sp. antigens and may persist in otherwise healthy individuals for many years. We did find precipitins in two patients with asthma and in 10 of 12 patients with pulmonary disease of unknown etiology; however, aspergillosis was strongly suspected in all of these patients. We do not regard the reactions noted in the sera from the asthma patients and the patients in the "strongly suspected" group as merely cross-reactions. We feel that a positive precipitin reaction is good evidence of aspergillosis or hypersensitivity to an Aspergillus sp. Longbottom and Pepys (7) reported positive precipitin reactions in 63% of 93 patients with asthma and pulmonary eosinophilia. The same report showed that 8% of 185 patients with different pulmonary conditions had precipitins in their sera and that 14% had positive skin test reactions. Campbell and Clayton (3) included in their study eight precipitin-positive sera from patients with a clinical diagnosis of asthma. They also found sera from patients from whom A. fumigatus was repeatedly isolated and who had a long history of bronchitis and productive cough to be precipitin-positive. In these cases, as in our strongly suspected group, a diagnosis of aspergillosis was not confirmed. In 1967, English and Henderson (4) reported on their study of 21 patients with various lung conditions. In determining the diagnostic significance of the ID test, these investigators stressed the importance of "reactivity," which they defined as the number of precipitin lines produced, and of the "range," or number of antigenic extracts with which the serum reacted. Our results do not support this approach. They do agree, however, with the view of Walter and Jones (15) that the presence of precipitating antibodies, regardless of the number of precipitin bands or titer, indicates infection with, or development of, an allergy to an Aspergillus sp.
Antigen preparation was based on the recommendations of Longbottom and Pepys (7) who found that 3-to 5-week-old surface cultures on Sabouraud medium yielded the most suitable antigens free of C-substance glycopeptide. We found that standardized and reproducible precipitinogens can be prepared by acetone precipitation of 5-week-old Sabouraud dextrose broth cultures. We did not investigate the use of other antigenic extracts. We used precipitinogens to four A. fumigatus strains; however, a serum specimen that does not react against all four of the A. fumigatus antigens is rare.
Our results indicate that precipitinogens to A. fumigatus and A. niger may be used for the maximal detection of aspergillus precipitins. Use of these two antigens permitted a presumptive diagnosis of aspergillosis in 49 of 60 (82%) patients whose sera were examined. We found three cases of proven or suspected aspergillosis in which a serum specimen failed to produce precipitin lines with A. fumigatus. All three sera reacted only with A. niger precipitinogens. Several investigators (3,7,8) have reported cases of aspergillosis due to species other than A. fumigatus in which serum specimens failed to react with A. fumigatus antigens. These infections were due to A. flavus, A. nidulans, A. niger, and A. terreus. On the basis of these studies, we used a battery of precipitinogens prepared from A. fumigatus, A. flavus, A. nidulans, A. niger, and A. terreus.
We observed an association between the number of precipitin bands and the clinical type of aspergillosis (Table 4). One or two precipitin bands occurred in sera from patients with each form of aspergillosis. None of the sera from the allergic bronchopulmonary cases produced more than two bands. Our data suggest that three or four precipitin bands may be indicative of an aspergilloma or of pulmonary or disseminating invasive aspergillosis.
Although a titer of 1:16 may be useful for differentiating aspergilloma and invasive aspergillosis from the allergic bronchopulmonary form of this disease, our data showed that sera with titers of 1:16 or greater from patients with proven aspergillosis also produced three or four precipitin bands. Consequently, the titration of positive sera with A. fumigatus precipitinogens does not appear to be useful in determining the clinical form of aspergillosis. Titration might prove useful in following the clinical course of the disease. However, our studies (Table 6) indicate that, with significant titer changes, a corresponding drop in the number of precipitin lines occurs. This observation is in accord with reports of other investigators (5,12,14,15) that precipitating antibodies diminish or disappear with treatment. Apparently, therefore, titration is not needed to determine the progress of an infection and the patient's response to therapy.
ACKNOWLEDGMENT
We thank Irving Abrahams, Nassau County Hospital for Pulmonary Diseases, Plainview, N.Y., for supplying a reference rabbit serum and antigen and for advising us in the initial stages of our study. | 2020-12-10T09:04:16.892Z | 1972-02-01T00:00:00.000 | {
"year": 1972,
"sha1": "4ab70fd0d542fb2f01abe101b1f52fe655117be9",
"oa_license": null,
"oa_url": "https://aem.asm.org/content/aem/23/2/301.full.pdf",
"oa_status": "GOLD",
"pdf_src": "ASMUSA",
"pdf_hash": "7bc8021dce469acdc41edee77a9a75fbad5ae195",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
20316296 | pes2o/s2orc | v3-fos-license | Interaction of Aldehyde dehydrogenase with acetaminophen as examined by spectroscopies and molecular docking
The interaction of acetaminophen, a non-substrate anionic ligand, with aldehyde dehydrogenase (ALDH) was studied by fluorescence, UV-Vis absorption, and circular dichroism spectroscopies under simulated physiological conditions. The fluorescence spectra and the data generated showed that acetaminophen quenches ALDH by a purely dynamic mechanism. The acetaminophen-ALDH interaction is kinetically rapid and reversible, with a binding constant, Ka, of 4.91×10^3 L mol^-1. A second binding site of ALDH for acetaminophen appeared at saturating acetaminophen concentrations; the binding sites were non-cooperative. The thermodynamic parameters obtained suggest that Van der Waals forces and hydrogen bonding played a major role in the binding of acetaminophen to ALDH. The interaction perturbed the ALDH structure, with an obvious reduction in α-helix content. A binding distance of 4.43 nm was obtained between acetaminophen and ALDH. Using Ficoll 400 as macro-viscosogen and glycerol as micro-viscosogen, a Stokes-Einstein empirical plot demonstrated that the acetaminophen-ALDH binding was diffusion controlled. Molecular docking showed the participation of some amino acids in the complex formation, with a binding energy of −5.3 kcal/mol. On this evidence, ALDH might not be an efficient detoxifier of acetaminophen but could be involved in its pegylation/encapsulation.
Introduction
Aldehyde dehydrogenases (ALDH; EC 1.2.1.3) are a short-chain dehydrogenases/reductases (SDR) superfamily of NAD(P)+-dependent enzymes that catalyse the irreversible dehydrogenation of a wide range of endogenous and exogenous aldehydes to their corresponding, less toxic carboxylic acids [1][2][3]. ALDHs are widely distributed in prokaryotic and eukaryotic cells and play important roles in the detoxification of toxic and reactive aliphatic and aromatic aldehydes formed during the metabolism of alcohols, amino acids, carbohydrates, lipids, biogenic amines, vitamins and steroids [4]. Currently, there are 19 known members of the ALDH superfamily [5,6]. The functional and physiological properties of ALDHs have been studied extensively; they are involved in the maintenance of cellular homeostasis and modulate cell proliferation, differentiation, survival and the cellular response to oxidative stress [1,7,8]. ALDHs play essential roles in metabolic pathways that are critical for cell development and the response to environmental changes [9].
ALDHs are homo-biopolymers composed of two or four polypeptides of 50-55 kDa, each made up of an N-terminal NAD+-binding domain, a catalytic domain and an oligomerisation domain [10,11]. The aldehyde dehydrogenase kinetic mechanism is generally an ordered sequential one, with NAD(P)+ binding first, followed by the aldehyde [12][13][14]. In some cases, it is a random kinetic mechanism with a preference for initial binding of NAD(P)+ [15]. The ternary complex forms a thio-hemiacetal intermediate, which is transformed to a thioester by donating its hydride ion to NAD(P)+. Eventually, the thioester is hydrolysed by a water molecule to the carboxylic acid. The sequential dissociation of the carboxylic acid and NADH, which is the rate-limiting step, ends the reaction [14,16].
ALDHs exhibit additional, non-enzymatic functions: non-catalytic binding of endobiotics, some hormones and other small molecules [1,17]. These are 'housekeeping' functions linked with detoxification, associated with the ubiquitous, abundant and constitutive expression of the enzyme. These ligand-binding properties might be connected to a protective function through the sequestration of metabolites. They conceivably serve to prevent the accumulation of, or to minimize, potentially toxic free endobiotics and xenobiotics, or are involved in the uptake and transport of hydrophobic non-substrates prior to their detoxification. Catalytic and ligand-complexing (ligandin) properties are both important for the detoxification mechanism [1], and the two are connected [17]. Although the ALDH catalytic mechanisms of detoxification have been investigated extensively, relatively little is known about its non-catalytic binding function.
Acetaminophen (N-acetyl-p-aminophenol, AAP) (Fig. 1) is a medically important, low-cost, readily available and commonly used over-the-counter analgesic and antipyretic drug [18,19]. Acetaminophen monotherapy is efficient and safer than aspirin and ibuprofen [20], although its efficacy and tolerability in individual conditions must be warranted [18]. The mechanism of the analgesic action of acetaminophen is complex and has not been completely understood [20]. At therapeutic doses, acetaminophen is a safe drug, but it is not devoid of side effects [18], suggesting the possibility that acetaminophen exerts other specific biological effects [21]. High dosages, in humans and experimental animals, lead to necrosis, nephrotoxicity, and extrahepatic lesions [22]. Nevertheless, it is grossly abused in Nigeria and has been blamed for the rising cases of heart attacks, strokes and early deaths [23,24]. The negative effect of acetaminophen on the antioxidant defense enzyme system has been documented [23]. The interaction of acetaminophen with human serum albumin (HSA) was previously investigated [25]; the authors detailed biochemical and biophysical data illustrating the relevance of HSA to acetaminophen pharmacokinetics. However, in a pathogenic state, a lower albumin concentration and weaker drug-protein interaction can result in an increased free drug concentration in the blood and lead to toxicity [26,27]. More worrisome is the use of acetaminophen with alcoholic beverages [21,28].
The link between aldehyde dehydrogenase and acetaminophen metabolism is becoming increasingly plausible [7,21]. ALDH has been identified as a major acetaminophen-binding protein [28] and was down-regulated in mouse liver exposed to a high dosage of acetaminophen [29]. However, the affinity and interaction mechanism of acetaminophen for ALDH remain uncharted, and the effect of the complexation on ALDH structure and conformation is yet to be elucidated.
Several spectroscopic techniques have been used as powerful tools to study the interactions between drugs and proteins; they allow non-intrusive measurements of substances at low concentration under physiological conditions [30]. Fluorescence is the simplest technique for studying the interaction of drugs/ligands with bio-macromolecules because of its high sensitivity, rapidity and ease of implementation [31,32]. It is an important method for sensing changes in the local microenvironment of a fluorescent chromophore [33], and it helps in understanding a biopolymer's binding mechanisms to drugs and provides clues to the nature of the binding phenomenon [34,35]. Information on the acetaminophen-ALDH binding mode, the binding constant and the effects of acetaminophen complexation on the protein structure has been lacking. In the present work, the binding of acetaminophen to ALDH was studied under physiological conditions by spectroscopic techniques. The quenching mechanism between acetaminophen and ALDH, with regard to the stoichiometry and thermodynamics of ligand binding, and consequently the effect on the protein conformation, were investigated at the molecular level. In addition, the effects of pH and viscosity on the acetaminophen-ALDH complex were also examined. All these were complemented by in silico analysis and molecular docking.
Materials
Aldehyde dehydrogenase (ALDH; molecular weight 200,000 Da) was obtained from Sigma-Aldrich Fine Chemicals, USA and was used without further purification. Acetaminophen concentrate (≥99% purity) was a generous gift from Deshalom Pharmaceuticals Nig. Ltd., Ilesha, Nigeria. All reagents were of analytical grade unless otherwise specified. All solutions were prepared with double-distilled water; the acetaminophen stock solution was prepared in analytical grade double-distilled ethanol. An all-glass Ostwald viscometer (VWR, USA) was used to measure the intrinsic and extrinsic relative viscosity. The ALDH protein concentration was measured using the Bradford method. Protein samples, ligand solutions and buffers were filtered through a Millipore membrane filter (0.45 µm) immediately before use. The pH was checked with a standardized Sartorius PP-50 pH meter (Germany).
Fluorescence spectra
All fluorescence spectra were measured with a Hitachi F-4500 fluorescence spectrophotometer (Hitachi Ltd., Tokyo, Japan) equipped with a refrigerated circulating water bath (Pharmacia Biotech) and interfaced with a Windows XP computer (HP). The equipment was furnished with a 150 W xenon lamp and a 1 cm quartz cell. The spectra were recorded in the wavelength range of 300-500 nm upon excitation at 280 nm as ALDH samples were titrated with acetaminophen. Both excitation and emission bandwidths were set to 5 nm, with a scan speed of 900 nm/min. The response time was set to 2 s with a high-sensitivity signal. Titrations were performed manually using trace syringes. A 2.0 mL solution containing an appropriate concentration of ALDH (0.120 µM) in 25 mM Tris-HCl pH 7.4 containing 0.1 M NaCl was titrated manually by successive additions of the ethanol stock solution of acetaminophen up to a saturating concentration of 125 µM. The final ethanol concentration never exceeded 1% (v/v), and all fluorescence readings were corrected for the dilution effect. The presence of this volume of ethanol in the assay mixtures had no effect on the fluorescence measurements. Respective buffer blanks were used for the correction of all fluorescence spectra.
Synchronous fluorescence spectroscopy (SFS) was used to study the environment of amino acid residues. It involves measuring any shift in the emission maximum on addition of ligand molecules, which reflects changes in the polarity around the chromophore. Synchronous fluorescence spectra of solutions prepared as above were measured on the same fluorescence spectrophotometer. The excitation wavelength (λex) was set at 280 nm, and the excitation and emission slit widths at 5.0 nm. The D-value (Δλ) between the excitation and emission wavelengths was set at 15 or 60 nm. The PMT voltage was 700 V.
UV-Visible absorption and circular dichroism spectroscopy
All absorbance spectra and equilibrium ligand-binding experiments were measured in 25 mM Tris-HCl buffer, pH 7.4, at 25°C using a Shimadzu double-beam UV-Visible spectrophotometer (UV-1800) equipped with a Pharmacia refrigerating circulator for temperature control (25 ± 0.1°C) unless otherwise stated. The scan speed was set to medium and the slit width to 1.0 nm. The spectra were recorded between 200 and 500 nm. A 1.0 mL solution of 0.120 μM ALDH was titrated with successive additions of acetaminophen. Circular dichroism (CD) measurements were made on a J-810 spectropolarimeter (Jasco, Tokyo, Japan) at room temperature under a constant nitrogen flush. The CD spectra were measured from 190 to 240 nm at a scan speed of 200 nm/min. Each result was the average of three scans.
Effect of viscosity
Efforts to probe the effects of solution viscosity upon the ALDH-acetaminophen association constant, K_a, were made using glycerol as micro-viscosogen and Ficoll 400 as macro-viscosogen. Viscosities were determined relative to a solution containing only buffer (25 mM potassium phosphate buffer, pH 7.4) using an all-glass Ostwald viscometer at 25°C. The resulting data were fitted to the Stokes-Einstein empirical relationship (K_a)°/(K_a) = (η_rel/η°)^exp, where the superscript ° indicates the absence of viscosogen. An exponent of 1 corresponds to maximally diffusion-limited binding.
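As a concrete illustration of the fitting procedure just described, the sketch below extracts the viscosity exponent as the slope of a log-log plot of (K_a)°/(K_a) against relative viscosity. The data points are hypothetical placeholders, not measurements from this work.

```python
# Fit the Stokes-Einstein exponent: log[(Ka)°/Ka] versus log(η_rel/η°).
# Hypothetical data, chosen only to show the mechanics of the fit.
import numpy as np

eta_rel = np.array([1.0, 1.5, 2.0, 3.0, 4.0])        # relative viscosity η_rel/η°
ka_ratio = np.array([1.00, 1.09, 1.16, 1.26, 1.34])  # (Ka)°/Ka, hypothetical

exponent, _ = np.polyfit(np.log(eta_rel), np.log(ka_ratio), 1)
print(f"viscosity exponent ≈ {exponent:.2f}")
# 0 would mean viscosity-independent binding; 1 would mean fully diffusion-limited.
```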
Acetaminophen-ALDH molecular modeling and docking
The docking analysis of the acetaminophen molecule with yeast aldehyde dehydrogenase was carried out using AutoDock Tools (ADT v1.4.2) and AutoDock Vina. The Baker's yeast aldehyde dehydrogenase FASTA file (accession ID AAA34419.1) was retrieved from www.pubmed.org and used to model the starting structure of the enzyme. Homology modeling was done using the Swiss-Model server (http://swissmodel.expasy.org). The coordinate file of a template from the Protein Data Bank (PDB ID: 1BXS.1.A) with 31.89% sequence identity was used to model the 3D structure of yeast aldehyde dehydrogenase. The quality of the protein model was assessed using ERRAT. The acetaminophen structure was retrieved from the PubChem database (CID 1983) in SDF format and then converted to Protein Data Bank (PDB) coordinates using Open Babel (http://openbabel.org). Ligand-binding-site calculation was performed on the BSP-SLIM server (http://zhanglab.ccmb.med.umich.edu/BSP-SLIM/). The modeled structure of the aldehyde dehydrogenase molecule and acetaminophen were loaded on the BSP-SLIM server to identify the binding pocket and pose of the ligand, and the best pose, with a docking score of 2.501, was selected. BSP-SLIM is a blind docking method, which primarily uses structural template matching to identify putative ligand-binding sites, followed by fine-tuning and ranking of ligand conformations in the binding sites through SLIM-based shape and chemical-feature comparisons. The consistency of the docking results was checked prior to docking of acetaminophen by comparing the best docking poses retrieved from the BSP-SLIM server: the ligand was removed from the binding site and re-docked into the binding pocket in the conformation found in the structure retrieved from the server. An RMSD of 0.819 Å was obtained, signifying that the docking procedure could be relied upon to predict the binding mode of our compounds.
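The re-docking consistency check described above reduces to a root-mean-square deviation between two poses of the same ligand. A minimal sketch is given below; the coordinate file names are hypothetical, and it assumes both files list the same atoms in the same order.

```python
# RMSD between the original BSP-SLIM pose and the re-docked pose.
import numpy as np

def rmsd(coords_a: np.ndarray, coords_b: np.ndarray) -> float:
    """RMSD between two (n_atoms, 3) coordinate sets with identical atom
    ordering; no additional superposition is performed."""
    diff = coords_a - coords_b
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

pose_original = np.loadtxt("pose_bspslim.xyz")  # hypothetical file names
pose_redocked = np.loadtxt("pose_redock.xyz")
print(f"re-docking RMSD = {rmsd(pose_original, pose_redocked):.3f} Å")  # paper: 0.819 Å
```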
Statistical analysis
All kinetic, statistical and graphical analysis for ALDH-Acetaminophen characterization was performed using KaleidaGraph 4.5 software (Synergy software, Reading, PA, USA) for Macintosh Computer.
Effect of acetaminophen on fluorescence spectrum of ALDH
Fluorescence spectra provide a sensitive and veritable means to characterize biopolymers and their conformations [33]. The intrinsic aromatic fluorophores of ALDH were used to obtain information about the conformational changes associated with the interaction between ALDH and acetaminophen. ALDH has a strong fluorescence emission at 346 nm upon excitation at 280 nm, consistent with solvent-exposed tryptophan fluorescence subject to solvent relaxation; the emission peak position, shape and intensity of ALDH likewise reflect the microenvironment of the intrinsic fluorophores [36]. The addition of acetaminophen, as a ligand, caused quenching of the ALDH fluorescence emission spectra (Fig. 2), and the quenching depended solely on the concentration of the ligand. The quenching was effective, with an average efficiency above 75%. This observation strongly indicates binding of acetaminophen to ALDH, which is not unusual: ALDH has been identified as a major acetaminophen-binding protein [28]. Acetaminophen was non-fluorescent at the 280 nm excitation wavelength and has weak UV absorption at 280 and 346 nm; the inner filter effects caused by the absorption of acetaminophen were corrected. Fluorescence quenching is a decrease of the quantum yield of fluorescence from a fluorophore due to a variety of molecular interactions: excited-state reactions, molecular rearrangements, energy transfer, ground-state complex formation, and collisional quenching [36,37].
The fluorescence quenching was analyzed using the Stern-Volmer equation:

F_0/F = 1 + k_q τ_0 [Q] = 1 + K_SV [Q]

where F and F_0 are the fluorescence intensities with and without quencher, respectively. The k_q, K_SV, τ_0 and [Q] are the quenching rate constant of the biomolecule, the quenching constant, the average lifetime of the biomolecule without quencher and the concentration of quencher, respectively. K_SV is the slope of the linear regression and was evaluated at different temperatures (288, 293, 298, 303, 308 K). The Stern-Volmer plots (F_0/F against [Q]) were initially linear and became exponential above 35 μM (Fig. 3a). An initial linear slope of the Stern-Volmer plot is generally indicative of a single class of fluorophores, all equally accessible to the quencher [38]. The structural vicinity of the acetaminophen -OH group (Fig. 1) might be responsible for the quenching [38]. In order to avoid inner filter effects [39], the quenching mechanism was analyzed within the linear part of the Stern-Volmer dependence (Fig. 3b). The Stern-Volmer constant is directly proportional to the temperature, indicating a dynamic quenching mechanism (Fig. 3c). Dynamic quenching refers to a process in which the fluorophore and the quencher come into contact during the transient existence of the excited state [36,37]; it depends upon diffusion. However, at higher concentrations (above 35 μM of acetaminophen), the results depart from the initial linearity and demonstrate both static and dynamic quenching. The quenching rate constants, k_q, were calculated using the above equation, and the values of K_SV and k_q are listed in Table 1. Generally, the maximum scatter collision quenching constant, k_q, of various kinds of quenchers with biopolymers is 2×10^10 L mol^-1 s^-1 [40]. The rate constants for the quenching of ALDH caused by acetaminophen are less than the k_q for the scatter mechanism, demonstrating that the fluorescence quenching is not the result of static collision quenching but rather a consequence of dynamic quenching [41]. The binding constant between acetaminophen and ALDH increases with temperature, resulting in a reduction of the stability of the acetaminophen-ALDH complex. It is concluded that acetaminophen is a good quencher of the ALDH intrinsic fluorophores.
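The Stern-Volmer analysis above amounts to a linear fit in the low-concentration region. The sketch below shows the mechanics; the fluorescence ratios are hypothetical, and the fluorophore lifetime τ0 = 1×10^-8 s is an assumed value, not one stated in the paper.

```python
# Stern-Volmer fit: K_SV is the slope of F0/F versus [Q] in the linear
# (< 35 μM) region, and k_q = K_SV / τ0. Data and τ0 are illustrative.
import numpy as np

q = np.array([0, 5, 10, 15, 20, 25, 30]) * 1e-6          # [Q] in mol/L
f0_over_f = np.array([1.00, 1.02, 1.05, 1.07, 1.10, 1.12, 1.15])

k_sv, intercept = np.polyfit(q, f0_over_f, 1)            # slope = K_SV (L/mol)
tau0 = 1e-8                                              # s, assumed lifetime
k_q = k_sv / tau0                                        # L mol^-1 s^-1

print(f"K_SV ≈ {k_sv:.2e} L/mol, k_q ≈ {k_q:.2e} L/(mol*s)")
# The text compares k_q against the maximum scatter collision benchmark of
# 2e10 L/(mol*s) to distinguish dynamic from static quenching.
```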
Equilibrium binding stoichiometry and parameters
Using the intrinsic fluorescence decrease, the association constant K_a of the acetaminophen-ALDH complex at different temperatures and the number of binding sites can be obtained from the regression of:

log[(F_0 − F)/F] = log K_a + n log[Q] (3)

K_a is the effective quenching constant for the accessible fluorophores [42], analogous to the associative binding constant for the quencher-acceptor system. In the linear range of the Stern-Volmer curve, the numbers of binding sites were obtained according to Eq. (3); the fluorescence quenching was mainly a dynamic quenching process. The parameters were obtained from the slope of the regression curve based on the above equation, as shown in Fig. 4. The linearity of the Scatchard plot for acetaminophen-ALDH was obvious, and the differences between the values calculated for both non-substrate ligands are within experimental error. The value of n is approximately 1.18, indicating that there is one type of binding site for acetaminophen in ALDH, and the value of K_a is 4.91×10^4 L mol^-1, reflecting a strong interaction between acetaminophen and ALDH. A Job's plot [43] gave corroborating evidence that the stoichiometric ratio of ALDH to acetaminophen at 25°C and pH 7.4 is 1:1 (figure not shown). However, the stoichiometric ratio increased to 2:1 when the concentration of acetaminophen was above 35 μM, with a slight increase of K_a. This glaringly showed the existence of a second binding site of ALDH for acetaminophen. The multiple binding sites underscore the exceptional capability of the enzyme as a regulator of intracellular and intercellular fluxes.
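Equation (3) is itself a straight-line fit in double-log coordinates: the slope is the number of binding sites n and the intercept is log K_a. The sketch below demonstrates this; the intensity values are hypothetical placeholders.

```python
# Double-log (Eq. 3) fit: slope = n, intercept = log Ka. Illustrative data.
import numpy as np

q = np.array([5, 10, 15, 20, 25, 30]) * 1e-6        # [Q] in mol/L
f0 = 100.0
f = np.array([81.0, 67.0, 57.0, 49.0, 43.0, 38.0])  # hypothetical intensities

n, log_ka = np.polyfit(np.log10(q), np.log10((f0 - f) / f), 1)
print(f"n ≈ {n:.2f}, Ka ≈ {10 ** log_ka:.2e} L/mol")
```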
The interaction of acetaminophen with human serum albumin has previously been studied [25]; the results indicated that the interaction of acetaminophen with HSA is stronger than the ALDH-acetaminophen complex, which might be connected to the structure of the protein. The distinct K_a values of HSA-acetaminophen and ALDH-acetaminophen show that acetaminophen is bound more strongly by HSA, which is crucial for transportation rather than detoxification. The numbers of binding sites in acetaminophen-ALDH and acetaminophen-HSA were similar [25], with a consistent 2:1 stoichiometry. Such binding parameters are helpful in the design of dosage forms and in balancing pharmacokinetics between therapeutic effect and toxicity. The stoichiometry of binding apparently varies with the size of the ligand: large ligands show a lower stoichiometry of ligand per bio-macromolecule [44]. At pH 7.4, the conformation of ALDH and ligand-ligand steric effects might explain the 1:1 binding stoichiometry of acetaminophen to ALDH. However, the question remains whether acetaminophen binds distantly from the active site and/or possibly at a site peripheral to the recognized substrate cavity. There is a possibility of coordination of the phenyl group of acetaminophen with hydrophobic residues of ALDH.
Thermodynamic parameters of binding modes
Considering the dependence of the binding constant on temperature, a thermodynamic process was considered responsible for the interaction. The temperature-dependent thermodynamic parameters were therefore analyzed in order to characterize the acting forces between ALDH and acetaminophen during the quenching process. They were calculated from van't Hoff plots over the temperature range 288 to 313 K. The plot of ln K_a versus 1/T (T, absolute temperature) allows the determination of ΔH and ΔS using Eq. (4):

ln K_a = −ΔH/RT + ΔS/R (4)

The free energy change (ΔG) was estimated from the relationship:

ΔG = ΔH − TΔS (5)

The enthalpy change (ΔH) was calculated from the slope of the van't Hoff plot (Fig. 5); there was a good linear relationship between ln K_a and the reciprocal absolute temperature, 1/T. R is the universal gas constant (8.314 J mol^-1 K^-1). The thermodynamic values (ΔG, ΔH and ΔS) obtained from the slopes and intercepts of the fitted lines are presented in Table 2. At pH 7.4, the formation of the complex was an exothermic reaction accompanied by a negative ΔS value: from Table 2, ΔH and ΔS have negative values of −22.32 kJ/mol and −119.24 J/mol K, respectively. The positive sign of ΔG means that the binding process was non-spontaneous. The ALDH-acetaminophen binding at pH 7.4 is enthalpically favourable and entropically unfavourable (negative TΔS). The interaction of drugs with proteins has been reported to be an entropically unfavourable process in aqueous conditions [27]. The net balance in the solvation free energies of acetaminophen and ALDH upon complex formation provides the binding free energy of acetaminophen with ALDH. There are essentially four types of non-covalent interaction between a quencher and a biological macromolecule: hydrogen bonding, Van der Waals, electrostatic and hydrophobic forces [45]. The signs and magnitudes of the thermodynamic parameters of protein reactions can account for the main forces contributing to protein stability; those associated with the various individual kinds of interaction in protein association processes have been characterized [46][47][48]. Here, the negative entropy and enthalpy are taken as evidence for Van der Waals forces and hydrogen bonding.
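As a consistency check on the numbers just quoted, Eq. (5) can be run forward with the reported parameters: with both ΔH and ΔS negative, ΔG turns positive at the experimental temperatures, matching the stated non-spontaneity. Only ΔH and ΔS below come from the paper; the loop temperatures span the reported experimental range.

```python
# ΔG = ΔH - TΔS with the reported parameters (Table 2).
dH = -22.32e3   # J/mol
dS = -119.24    # J/(mol*K)

for T in (288, 298, 308):  # K
    dG = dH - T * dS
    print(f"T = {T} K: dG = {dG / 1e3:+.2f} kJ/mol")
# At 298 K this gives dG ≈ +13.2 kJ/mol, i.e. positive (non-spontaneous).
```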
pH dependence of acetaminophen-ALDH binding
ALDH must acquire a unique conformation in order to be catalytically effective. A change in pH tends to alter the conformation of the enzyme and hence could affect the association constant of ligand binding [49], and is assumed consequently to affect the energetics of binding. The influence of acidic pH (5.0) and alkaline pH (9.0) on the interaction between acetaminophen and ALDH was therefore explored; the results are shown in Table 4. The stoichiometry of binding was altered. The binding was non-spontaneous at pH 9.0 and essentially hydrophobic. The bonding did not change at pH 5.0 compared with the physiological pH of 7.4; the significant change is that binding is more enthalpically driven at pH 5.0, compared with the entropic driving force at pH 9.0. The reason for this is not immediately clear, but pH 5.0 lies outside the ALDH enthalpy of ionization and its optimum pH. The lowering of the pH, which increases the rate of the agonist-induced conformational change, is consistent with the hypothesis of acidification, and thus presumably protonation, of one or more amino acids. This might lessen the responsiveness of ALDH to acetaminophen, perhaps reflecting a lower stability of the enzyme. The binding stoichiometry between ALDH and acetaminophen was not affected by the change in pH to either 5 or 9. The co-operativity of the binding, or otherwise, when n > 1, was assessed on the assumption that ALDH, with n equal and independent sites, has a characteristic association constant, K_a, for acetaminophen, L. The saturation fraction, Y, was then expressed as:

Y = ΔF_i/ΔF_max = K_a[L] / (1 + K_a[L]) (6)

where ΔF_i indicates the fluorescence-quenching change observed at non-saturating concentrations of acetaminophen, and ΔF_max is the maximum fluorescence-quenching change detected at saturating ligand concentration. [L] is the free concentration of acetaminophen, which can be derived from:

[L] = [L]_t − nY[ALDH] (7)

where [L]_t and [ALDH] are the total ligand and protein concentrations, respectively. The plot at pH 7.4 (not shown) gives the best fit to the data for a non-cooperative model.
Analysis of synchronous fluorescence
Synchronous fluorescence spectroscopy was used to investigate the acetaminophen-ALDH complex. The synchronous fluorescence spectra of ALDH provide characteristic information on the Tyr and Trp residues when the wavelength interval Δλ (Δλ = λem − λex) is fixed at 15 and 60 nm, respectively [50]. The synchronous spectra are shown in Fig. 6. The ALDH synchronous fluorescence intensity is affected by the acetaminophen concentration, further demonstrating the occurrence of fluorescence quenching in the binding. The maximum emission wavelength red-shifts (from 284 to 294 nm) over the investigated concentration range when Δλ = 60 nm, and red-shifts from 295 to 305 nm when Δλ = 15 nm. The red shifts imply that the interaction with acetaminophen affects the microenvironment around the Tyr and Trp residues of the ALDH conformation: the polarity around the tryptophan residues increased and the hydrophobicity decreased. This corroborates the results deduced from Fig. 2. The hydrophobicity of the ALDH tryptophan residues was obvious from the maximum emission wavelength (λ_max) and was more sensitive to change, while the microenvironment around the tyrosine residues showed a less discernible change during the binding process; the fluorescence of the tyrosine residues was apparently weak. Considering the fine linearity of the Stern-Volmer quenching and the single binding site derived from the Scatchard plot under the same conditions, we conclude that acetaminophen binds in a hydrophobic cavity within the vicinity of an ALDH tryptophan residue and consequently affects the conformation of ALDH (Fig. 7).
Dissociation constant K d
The ligand-ligandin binding interaction is a kinetically rapid, reversible one [27]. The reversibility is a function of the association constant (K_a), the dissociation constant (K_d) and the binding free energy (ΔG); the net balance among these dictates possible ligand/drug transportation, immobilization, metabolism or toxicity [27]. The dissociation constant K_d was calculated as described elsewhere [49].
The equation was linearized using the Hanes-Woolf form:

[L]/ΔF = K_d/ΔF_max + [L]/ΔF_max (9)

where ΔF_max is the maximum decrease in fluorescence observed when the enzyme is saturated by acetaminophen. The plot of ΔF against the ligand (acetaminophen) concentration obeyed a Michaelis-Menten-type equation at all temperatures and pH values examined (Eq. (6)) and was linear in the Hanes-Woolf plot (Eq. (9)) of [L]/ΔF against [L]. The ligand concentration ranged up to 125 μM, and acetaminophen exhibited a distinct K_d value within this concentration range. The thermodynamics of dissociation, obtained through Eqs. (4) and (5), were ultimately affected by the change in pH (Table 5). From this, association of the ALDH-acetaminophen complex is more favourable than its dissociation.

[Table 4. Association constants K_a, numbers of binding sites (n) and relative thermodynamic parameters of the aldehyde dehydrogenase-acetaminophen system from 15 to 35°C at pH 5.0 and 9.0.]
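The Hanes-Woolf linearization in Eq. (9) yields K_d from a single straight-line fit: the slope is 1/ΔF_max and the intercept is K_d/ΔF_max, so K_d is their ratio. A sketch follows; the ΔF values are hypothetical placeholders.

```python
# Hanes-Woolf (Eq. 9): plot [L]/ΔF against [L]; Kd = intercept / slope.
import numpy as np

L = np.array([10, 25, 50, 75, 100, 125]) * 1e-6      # mol/L, up to 125 μM
dF = np.array([14.3, 27.8, 40.0, 45.9, 50.0, 52.6])  # hypothetical ΔF values

slope, intercept = np.polyfit(L, L / dF, 1)
dF_max = 1 / slope
kd = intercept / slope
print(f"dF_max ≈ {dF_max:.1f}, Kd ≈ {kd:.2e} mol/L")
```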
UV-Visible absorption spectroscopy and circular dichroism
UV-Vis absorption spectroscopy and circular dichroism (CD) were used to further explore the protein structural changes. The UV-Vis absorption spectra of ALDH in the absence and presence of acetaminophen are shown in Fig. 8. Complex formation between acetaminophen and ALDH was evident from the UV-Vis absorption data. ALDH has two absorption peaks: the peak at 208 nm reflects the conformation of the peptide bonds, while the peak at 272 nm evidences the aromatic amino acids [51]. The shift in the maximum peak position of acetaminophen-ALDH was clearly visible. The red shift indicated that acetaminophen changed the peptide strands of ALDH: the skeleton of the protein became looser and the hydrophobicity decreased [52]. The absorption peak at about 278 nm provides information about the three buried aromatic amino acids, tryptophan, tyrosine, and phenylalanine, and varied with the concentration of acetaminophen.

[Table 5. The pH dependence of the relative thermodynamic parameters of the aldehyde dehydrogenase-acetaminophen system from 15 to 40°C at pH 5.0, 7.4, and 9.0.]

Changes in the ALDH secondary structure conformation upon acetaminophen complexation were explored using CD spectroscopy. The CD measurements were expressed in terms of mean residue ellipticity (MRE) in deg cm^2 dmol^-1, which can be estimated with Eq. (10). The conformational information on ALDH in the absence and presence of acetaminophen is shown in Fig. 9. The two negative bands at 208 and 222 nm, as well as a strong positive band at 200 nm, indicate a significant amount of both α-helix and β-sheet structures [53]. Obviously, acetaminophen had a marked effect on the ellipticity of the ALDH structure; this might not be unconnected to the reduction of ALDH activity. A slight alteration in the CD spectra together with a significant decrease in the fluorescence intensity was observed. Acetaminophen caused a notable increase in the intensity of the bands at 208 and 222 nm. The α-helical content was calculated from Eq. (11) [54,55] (Fig. 10).
MRE = Observed CD (mdeg) / (10 × C_p × n × l) (10)

where C_p is the molar concentration of the protein, n is the number of amino acid residues and l is the path length. The α-helix content follows from:

α-helix (%) = [(−MRE_208 − 4000) / (33,000 − 4000)] × 100 (11)

where MRE_208 is the observed MRE value at 208 nm, 4000 is the MRE of the β-form and random coil conformation cross at 208 nm, and 33,000 is the MRE value of a pure α-helix at 208 nm. From the above equations, the α-helix content of ALDH in the absence and presence of acetaminophen was calculated. The content of α-helix decreased from 41.4% to 36.6% when acetaminophen was added up to 150 μM. The decrease of α-helix content indicates that acetaminophen combines with the amino acid residues of the main polypeptide chain of the protein, alters the secondary-structure bonding network [56], and causes loss of native secondary structure as α-helix and β-sheet elements are converted to random coil and/or turns. The protein skeleton of ALDH became looser, the amino acid residues were exposed, and the hydrophobicity decreased. Whether the acetaminophen-ALDH interaction modifies the kinetic and thermodynamic stability of ALDH still remains a mystery.
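Equations (10) and (11) translate directly into two small functions, shown below. The observed-CD-to-MRE conversion and the α-helix formula follow the text; the MRE_208 inputs are back-calculated from the 41.4% and 36.6% figures quoted above, purely as a round-trip check.

```python
# Eq. (10): mean residue ellipticity; Eq. (11): % α-helix from MRE at 208 nm.
def mre(theta_obs_mdeg: float, c_p_molar: float, n_residues: int, path_cm: float) -> float:
    """Mean residue ellipticity in deg cm^2 dmol^-1 (Eq. 10)."""
    return theta_obs_mdeg / (10 * c_p_molar * n_residues * path_cm)

def alpha_helix_percent(mre_208: float) -> float:
    """α-helix content (%) from the MRE at 208 nm (Eq. 11)."""
    return (-mre_208 - 4000) / (33000 - 4000) * 100

# MRE_208 values implied by the reported 41.4% (free ALDH) and 36.6%
# (+150 μM acetaminophen) helix contents:
for mre_208 in (-16006, -14614):
    print(f"MRE_208 = {mre_208}: alpha-helix ≈ {alpha_helix_percent(mre_208):.1f}%")
```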
Energy transfer from ALDH to acetaminophen
Fluorescence resonance energy transfer (FRET) is a convenient 'spectroscopic ruler' for measuring molecular distances in biological and macromolecular systems, exploiting the absorption by an acceptor of the fluorescence emitted by a donor [57,58]. Energy transfer is likely to happen when (1) the donor can produce fluorescence light; (2) the fluorescence emission spectrum of the donor and the UV-Vis absorption spectrum of the acceptor overlap substantially; and (3) the distance between the donor and the acceptor is 2-8 nm [59]. The energy transfer effect is related not only to the distance between the acceptor and the donor, but also to the critical energy transfer distance. The spectral studies revealed that ALDH forms a complex with acetaminophen. The distance between the donor (ALDH) and the acceptor (acetaminophen) can be calculated according to Förster's non-radiative energy transfer theory using the equation [60,61]:

E = 1 − F/F_0 = R_0^6 / (R_0^6 + r^6) (12)

where R_0 is the Förster critical distance at which the transfer efficiency is 50%, and r is the distance between the donor and the acceptor. F and F_0 are the fluorescence intensities of ALDH in the presence and absence of acetaminophen, respectively. R_0 is given by:

R_0^6 = 8.8 × 10^-25 K^2 N^-4 ϕ J (13)

where K^2 is the spatial orientation factor of the dipole related to the random distribution of the donor and the receptor, N is the refractive index of the medium, ϕ is the fluorescence quantum yield of the donor, and J is the overlap integral between the fluorescence emission spectrum of the donor and the absorption spectrum of the receptor, which can be calculated by:

J = Σ F(λ) ε(λ) λ^4 Δλ / Σ F(λ) Δλ (14)

where F(λ) is the donor fluorescence intensity at wavelength λ, and ε(λ) is the molar absorption coefficient of the receptor at wavelength λ. In the above equations, K^2 = 2/3, N = 1.336, and ϕ = 0.15 [62]. From Eqs. (12)-(14), J, R_0, E and r were calculated and are shown in Table 6. The binding distance was < 7 nm, with 0.5 R_0 < r < 2.0 R_0. The results showed that non-radiative energy transfer occurred between acetaminophen and ALDH.
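Given R_0 and a measured quenching ratio, Eq. (12) can be inverted for the donor-acceptor distance r, as the sketch below shows. The R_0 and F/F_0 inputs are illustrative (the Table 6 values are not reproduced in the text); they are chosen so that r lands near the 4.43 nm quoted in the abstract.

```python
# Invert Eq. (12), E = R0^6 / (R0^6 + r^6) = 1 - F/F0, for the distance r.
def distance_from_efficiency(r0_nm: float, efficiency: float) -> float:
    return r0_nm * ((1 - efficiency) / efficiency) ** (1 / 6)

r0 = 3.2               # nm, illustrative Forster critical distance
e = 1 - 0.875          # transfer efficiency from an illustrative F/F0
print(f"r ≈ {distance_from_efficiency(r0, e):.2f} nm")  # ≈ 4.43 nm
```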
Viscosity effect
Studies of the binding of acetaminophen to ALDH as a function of solution viscosity were carried out to determine possible contributions from the viscous solution. The effects of Ficoll 400 and glycerol (viscosity-inducing macromolecules) on the binding of acetaminophen to ALDH were monitored, as shown in Fig. 11a and b, respectively, and the relative K_a values were plotted as a function of the relative viscosity. The sensitivity of the binding constant K_a to viscosity was calculated from the exponent of such plots; K_a decreased with increasing viscosity. The plot of the reciprocal of the relative binding constant, (K_a)°/(K_a), as a function of relative viscosity, (η_rel/η°)^exp, gave an exponent of 0.14 with glycerol as micro-viscosogen and 0.21 with Ficoll 400 as macro-viscosogen. These values show that the second-order association constant K_a is affected by the increase in solution viscosity by 14% and 21% for glycerol and Ficoll 400, respectively [62]. The viscosity dependence observed for K_a suggests that acetaminophen binding to ALDH could be partially rate-determining. A viscosity effect of 0 means that the rate of the reaction is completely independent of solvent viscosity, an effect of 1 indicates a completely diffusion-limited event, and a viscosity effect > 1 indicates a conformational change accompanying binding of the substrate. This also demonstrates that the acetaminophen-ALDH binding is dynamic quenching [36,37].
Molecular docking study of ALDH-Acetaminophen interaction
Molecular docking was employed to simulate the binding mode of acetaminophen to ALDH. The possible binding mode and pattern are presented in Fig. 12. This revealed that acetaminophen, as a ligand, docks well with ALDH. The binding region of acetaminophen on ALDH is located in the interior hydrophobic cavity of the enzyme. The ALDH-acetaminophen complex is stabilized by hydrogen and Van der Waals bonding between the drug and the Ile-365, Arg-136, Leu-217, Phe-219, Glu-220 and Gln-321 residues within the active site. As calculated by AutoDock Vina, acetaminophen showed good binding affinity, with a minimum binding energy of −5.3 kcal/mol.
Conclusions
Studies on ALDH fluorescence quenching by acetaminophen have been presented. The results show that acetaminophen is a strong quencher and binds to ALDH with high affinity. Acetaminophen quenches the intrinsic fluorescence of ALDH through a dynamic quenching mode, and the binding of acetaminophen to ALDH was sensitive to pH and concentration changes. The bonding is predominantly by Van der Waals forces and was not spontaneous. Synchronous fluorescence spectra indicate that the microenvironment of the tryptophan residues changed remarkably. Results from the UV-Visible and CD spectra suggested that ALDH underwent substantial conformational changes at both the secondary and tertiary structure levels. These changes could indicate that the biological activity of ALDH would be weakened in the presence of the drug. | 2018-04-03T05:07:02.330Z | 2017-04-06T00:00:00.000 | {
"year": 2017,
"sha1": "49f617e4a433f187b0425423687ebaea367ce295",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.bbrep.2017.03.010",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "49f617e4a433f187b0425423687ebaea367ce295",
"s2fieldsofstudy": [
"Chemistry",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
1346075 | pes2o/s2orc | v3-fos-license | Active hepatitis C infection and HCV genotypes prevalent among the IDUs of Khyber Pakhtunkhwa
Injection drug users (IDUs) are considered a high-risk group for developing hepatitis C due to needle sharing. In this study we examined 200 injection drug users from various regions of the Khyber Pakhtunkhwa province for the prevalence of active HCV infection and HCV genotypes by immunochromatographic assays, RT-PCR and type-specific PCR. Our results indicated that 24% of the IDUs were actively infected with HCV, while anti-HCV was detected in 31.5% of cases. The prevalent HCV genotypes were HCV 2a, 3a, 4 and 1a. The majority of the IDUs were married and had attained primary or middle school education, and 95% of the IDUs had a previous history of needle sharing. Our study indicates that the rate of active HCV infection among the IDUs is high, with comparatively greater prevalence of the HCV types rarely found in KPK. The predominant mode of HCV transmission turned out to be needle sharing among the IDUs.
Introduction
Hepatitis C is an infectious disease affecting the liver, caused by the hepatitis C virus [1]. Infection with HCV becomes persistent in > 70% of infected people and may be associated with chronic hepatitis, cirrhosis and hepatocellular carcinoma [2]. Approximately a quarter of a million deaths per annum occur due to chronic liver disease associated with HCV [3].
Hepatitis C continues to be a major disease burden on the world. According to the WHO estimates, 3% of the worldwide population is infected with the hepatitis C virus [4]. The prevalence of chronic hepatitis C in the Asia-pacific region is variable between 4% to 12% [5].
The overall observed modes of transmission in Pakistan are multiple use of needles/syringes (61.45%), major/minor surgery/dental procedures (10.62%), blood transfusion and blood products (4.26%), sharing razors during shaving or circumcision by barbers (3.90%), piercing instruments, nail clippers, tooth brushes, and in less than 1% due to needle stick, from infected mother to baby and sexual transmission [6,7].
A growing risk for the transmission of blood-borne diseases in Pakistan is related to injection drug use [8]. In Indonesia, China, Vietnam, Eastern Europe and Central Asia, outbreaks of HCV have been associated with injection drug use [8][9][10][11][12]. Pakistan is considered the main trafficking route for opiates from Afghanistan, which produces the largest bulk of opium [13]. A recent report by the United Nations estimated a country-wide annual prevalence of opiate use of 0.8% in Pakistan [14]. Earlier studies observed high HCV seroprevalence among injection drug users in two cities of Pakistan [15,16]. In the Khyber Pakhtunkhwa province of Pakistan, IDUs had never been investigated for active HCV infection or prevalent HCV genotypes. As seroprevalence of anti-HCV does not indicate whether subjects are actively infected, and as prevalence data on active HCV infection and HCV genotypes were lacking, we undertook this study to analyze the presence of HCV RNA and the HCV genotypes prevalent among IDUs belonging to various regions of the Khyber Pakhtunkhwa province. The study also investigated the most common risk factors for the transmission of HCV among the IDUs.
Sampling
The study included IDUs from various parts of Khyber Pakhtunkhwa, including District Peshawar, District Mardan and District Kohat. A proforma was filled in by each of the IDUs, containing information about previous history of needle sharing, major or dental surgery, blood transfusion, marital status, age, etc. In each case, 5 ml of blood was collected in a separate disposable syringe and transported to the Institute of Biotechnology and Genetic Engineering, Peshawar, where serum separation was carried out. The study was approved by the board of study of IBGE. All experiments were performed in accordance with the ethical standards mentioned in the Declaration of Helsinki.
Immunochromatographic Tests (ICT)
Screening for HCV-positive samples was carried out with immunochromatographic tests (Accurate, USA), followed by a second assay (Acon, USA). Samples positive by ICT were taken forward for the next step of evaluation.
RNA extraction and Qualitative PCR
HCV RNA was extracted from 100 μl serum by using Ana-gen RNA extraction kit (Ana-gen, USA) according to the manufacturer's instructions. Qualitative detection of serum HCV RNA was performed by Reverse transcription PCR as described previously [17].
HCV Genotyping
Genotyping of HCV was done according to the previously mentioned Type-specific PCR method [18].
All PCR products were analyzed on 2% agarose gels prepared in 0.5% TBE buffer and stained with ethidium bromide (10 μg/ml) as fluorescent dye. Gels were photographed using Alpha quant (Alpha Innotech). A 100-bp DNA ladder (Gibco BRL) was used as the DNA size marker.
Results
It is evident from previous studies conducted in Pakistan that injection drug use is a predominant mode of HCV transmission. We analyzed the blood samples of 200 IDUs belonging to various districts of the Khyber Pakhtunkhwa province (Table 1) for the prevalence of active HCV infection and HCV genotypes. Out of the total 200 IDUs, 48 (24%) had active HCV infection as detected by RT-PCR (Figure 1). A comparatively higher percentage (31.5%) of the IDUs had anti-HCV in their blood (Table 2). Active HCV was most prevalent in district Peshawar, followed by district Mardan and district Kohat (Table 2). The prevalent HCV genotypes were 2a (35.71%), 3a (28.57%), 4 (14.29%) and 1a (7.14%), while the genotypes in 4 (14.29%) IDUs could not be determined by the assay performed in this study (Table 3). The majority of the IDUs were married, economically very poor with an income of less than a dollar per day, and had primary or middle school level education (Table 1). Needle sharing was observed in 95% of cases.
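The prevalence figures above follow from simple counts over the 200 participants; the sketch below reproduces them. The anti-HCV count of 63 is implied by the stated 31.5%, not given explicitly in the text.

```python
# Prevalence arithmetic for the 200 IDUs reported above.
total_idus = 200
results = {
    "active HCV (RT-PCR positive)": 48,
    "anti-HCV (ICT positive)": 63,  # 31.5% of 200 implies 63 sera
}

for label, count in results.items():
    print(f"{label}: {count}/{total_idus} = {count / total_idus:.1%}")
```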
Discussion
The burden of hepatitis C is increasing in Pakistan, partly due to lack of public awareness and poor screening facilities in our health care units. HCV is a blood-borne pathogen, and the investigated risk factors in Pakistan include major surgery, dental surgery, barbers, occupational needle pricks and injection drug use [19][20][21]. Other studies from Pakistan have reported the prevalence of anti-HCV antibodies among IDUs from various parts of the country [15,16]. In this study, we investigated the prevalence of active HCV infection among the IDUs and found that 24% of them had HCV RNA in their blood as detected by RT-PCR. The study revealed that a considerably higher percentage (31.5%) of the IDUs had anti-HCV antibodies in their sera (Table 2). Detection of more anti-HCV cases in this study could partly be attributed to the self-limiting nature of the disease or to the limitations of the immunochromatographic tests [22,23]. The distribution of HCV genotypes 3 and 2 has been reported to be worldwide, including Pakistan [17]. All previous studies conducted in Pakistan have employed type-specific PCR for the detection of the various HCV genotypes. Earlier studies reported that HCV genotype 3a is the most abundant genotype among the general population of Pakistan [17,24]. Our analysis indicated that genotype 2a was the most prevalent, followed by 3a, among the IDUs in the Khyber Pakhtunkhwa province. HCV genotypes 4 and 1a are only rarely found in Pakistan and have earlier been reported to be prevalent in the Middle East, western countries, Australia and the Americas [25], but this study indicated that a considerable number of the IDUs were infected with genotype 4 (Table 3). The history of the IDUs infected with genotypes 4 and 1a revealed that a number of them had resided in the Gulf region or North America.
Injection drug use is uncommon among the female population of Khyber Pakhtunkhwa, where social constraints do not allow free mixing with males and social interactions between the sexes are limited. In our study, we analyzed 2 females from the entire province for HCV infection, and both tested negative for anti-HCV.
Demographics of the IDUs (Table 1) indicated that 60% of the IDUs were married. Married IDUs pose a greater risk of transmitting the disease to their spouses or siblings. The education status of 98% of the IDUs was basic (primary or middle school), and only 5% of the IDUs were economically well off. Due to their poor economic status, an increasing number of IDUs have resorted to beggary, causing serious social problems. We also noticed in this study that IDUs with a prolonged history of injection drug use were more frequently infected with HCV than novices. None of the IDUs had a history of major or dental surgery, and only 5% had a history of blood transfusion. Ninety-five percent of the IDUs had a previous history of needle sharing, although some of them had quit the practice. IDUs from Peshawar district were relatively more aware of the risks of needle sharing than IDUs from other districts, but none of them were informed about hepatitis C. Our study is in conformity with other studies conducted in Pakistan that have reported an increasing trend of injection drug use [26]. According to the results of the National Assessment Study on Drug Abuse Situation in Pakistan, conducted in the year 2000, it was estimated that about 60,000 drug addicts were using drugs through injections.
This study suggests that governmental and non-governmental organizations should launch projects to educate people about hepatitis C and the transmission of HCV in order to minimize the imminent threat of the spread of the disease, especially that of the rare genotypes, which are comparatively less responsive to interferon-based therapies. Policies regarding economic rehabilitation and psychological counseling for war-affected people should help minimize the practice of injection drug use.
Conclusion
The study concludes that 24% of IDUs in the Khyber Pakhtunkhwa (KPK) province of Pakistan are actively infected with HCV. The prevalent HCV genotypes are 2a, 3a, 4 and 1a. A lack of awareness among the IDUs about needle sharing and an increasing trend of injection drug use, due to the regional socio-economic and geopolitical situation, contribute a great deal to the spread of HCV.
"year": 2011,
"sha1": "fdcc5bbaab4f10c9ae154a0d0340d951c992de3c",
"oa_license": "CCBY",
"oa_url": "https://virologyj.biomedcentral.com/track/pdf/10.1186/1743-422X-8-327",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1a92851d4dfe170e6a6e42d9b5cd2b9bc02f2be4",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Interim fluorine-18 fluorodeoxyglucose PET-computed tomography and cell of origin by immunohistochemistry predicts progression-free and overall survival in diffuse large B-cell lymphoma patients in the rituximab era
Introduction
Diffuse large B-cell lymphoma (DLBCL) represents the most common subtype of non-Hodgkin's lymphoma (NHL) [1] and corresponds to 49.5% of all NHL in our institution [2]. The WHO classification recognizes various subtypes of DLBCL on the basis of morphology, immunohistochemistry (IHC), and molecular analysis [1]. Although DLBCL is considered a heterogeneous disease, patients have been treated uniformly with anti-CD20 monoclonal antibody (rituximab) and doxorubicin-based chemotherapy regimens [3,4]. Unfortunately, almost half of DLBCL patients remain incurable; thus, it is critical to recognize these patients and improve their prognosis. Before the rituximab era, one of the best ways to identify NHL high-risk groups was the International Prognostic Index (IPI), which is based on clinical features such as age, performance status, stage, number of extranodal sites, and lactate dehydrogenase (LDH) [5]. Although clinical prognostic factors are commonly used, they cannot identify a risk group with a less than 50% chance of cure in the rituximab era [5,6]. By gene expression profiling (GEP), Alizadeh et al. [7] showed that DLBCL could be stratified into different risk groups
independent of the IPI. In that study, patients whose malignant cells had a gene signature similar to germinal center (GC) cells presented with a better prognosis than patients with signatures similar to activated B cells [7]. Because microarray analysis is unavailable in daily lymphoma practice, IHC algorithms have been proposed that analyze different proteins such as BCL-6, MUM-1, CD10, and FOXP1, allowing DLBCL cases to be classified into GC-like or nongerminal center (NGC)-like subtypes [8][9][10][11]. However, the use of these prognostic indicators has been questioned in the rituximab era [6,11,12]. Currently, PET-computed tomography (CT) with fluorine-18 fluorodeoxyglucose (18F-FDG) is recommended at diagnosis and at the end of treatment of DLBCL to improve the accuracy of staging and response evaluation, respectively [13]. Recently, however, this technology has been tested as a prognostic marker for DLBCL, and some trials showed better survival for patients who were PET-CT negative after two or three cycles than for PET-CT-positive patients [14]. However, the impact of interim PET-CT (iPET-CT) as a prognostic tool for DLBCL remains controversial [15]. Furthermore, iPET-CT should not be used to guide therapy and is not recommended outside clinical trials [13].
In clinical practice, the best way to accurately discriminate different prognostic risk groups in DLBCL is not clear. The primary aim of this prospective cohort study was to investigate the association between the clinical prognostic index by IPI, the image-based response assessed by iPET-CT, and the DLBCL cell of origin (COO) determined using the Hans algorithm, as prognostic predictors in patients treated with R-CHOP-21. Our initial hypothesis was that these three variables could be useful to identify different risk groups in DLBCL.
Study design and end points
This was a single-center, prospective study with the primary end point of overall survival (OS). OS was defined as the time from the date of diagnosis until the date of death from any cause or last patient follow-up. The secondary end point was progression-free survival (PFS), defined as the time from the date of diagnosis to the date of disease progression, relapse, death from any cause, or last patient follow-up.
Patients
After receiving approval from the Ethics Committee of HC-FMUSP, we prospectively evaluated 147 consecutive de-novo adult DLBCL patients, all treated at the Clinical Hospital/Sao Paulo Cancer Institute of the Medical School of Sao Paulo University (FMUSP), from June 2008 to November 2011. Written informed consent was obtained from all patients. The tumor histology was reviewed by two experts in hematopathology from the Pathology Department at FMUSP. Baseline clinical and disease features, including age, sex, Ann Arbor stage, number of extranodal sites involved by lymphoma, LDH level, performance status, B symptoms, and bulky disease (tumor size ≥ 10 cm or cardiothoracic index over 1/3), were obtained from medical records by a designated researcher. In addition, hepatitis B virus, hepatitis C virus, and HIV serologies; kidney, liver, and biochemical exams; ECG; bone marrow biopsy; neck, chest, abdomen, and pelvis CT scans; and whole-body 18F-FDG-PET-CT were performed at diagnosis. The IPI was calculated for all patients as originally described [5]. Patients were treated with 6-8 cycles of R-CHOP-21 (rituximab 375 mg/m2 intravenously on day 1, cyclophosphamide 750 mg/m2 intravenously on day 1, vincristine 1.4 mg/m2 to a maximum of 2 mg intravenously on day 1, doxorubicin 50 mg/m2 intravenously on day 1, and prednisone 100 mg/day orally on days 1 to 5). Patients with stage I/II nonbulky disease were treated with four cycles of R-CHOP-21 plus radiotherapy. Patients with bulky disease or involvement of the sinuses, bones, testes, breast, or Waldeyer's ring underwent 3600 cGy of radiation at the end of treatment. Patients with involvement of the testes, ovaries, breast, sinuses, or paravertebral region, or with a high IPI, received four intrathecal injections of methotrexate (12 mg) and dexamethasone (2 mg) as prophylaxis against relapse in the central nervous system. Patients were re-evaluated after two cycles of chemotherapy with PET-CT (iPET-CT), after four cycles with a CT scan, and at the end of treatment with PET-CT and, in cases with bone marrow involvement at diagnosis, bone marrow biopsy. The response at the end of treatment was categorized as complete remission, partial remission, or progressive disease according to the Cheson criteria [13]. Patients in complete remission were followed every 2 months in the first year, every 3 months in the second year, every 6 months in the third and fourth years, and once a year for life after 5 years. Refractory and relapsed patients received a modified IVAC regimen [16] as salvage therapy, followed by autologous stem cell transplantation. Patients with HIV or severe congestive heart failure were not included in the study.
Immunohistochemistry
Patients underwent an incisional or excisional biopsy, and the tumor was classified using hematoxylin-eosin and IHC staining as originally described [1]. Immunohistochemical staining by immunoperoxidase was performed on 4-μm sections from formalin-fixed paraffin-embedded tissue using standard procedures [17]. For CD10 staining, we used clone 56C6 (Novocastra, Newcastle, UK) at a 1:1000 dilution; for MUM1, clone MUM-1p (Dako, Glostrup, Denmark) diluted 1:2000; and for BCL6, clone P1F6 (Novocastra) diluted 1:2000. The GC and non-GC phenotypes were defined using the decision tree established by Hans et al. [8] with the indicated cutoffs. All cases were centrally reviewed by two experts in hematopathology from the Pathology Department at FMUSP.
18F-FDG PET-CT scan protocol
Of the 147 patients, 139 (94.5%) underwent staging on a dedicated PET scanner or an integrated PET-CT scanner (baseline PET) before starting any therapy for lymphoma, including corticosteroids. iPET-CT was performed on day 20 after the second cycle of chemotherapy in 111/147 (75.5%) patients. Thirty-six patients did not undergo iPET-CT: 12 died before the procedure and 24 could not be scanned because of logistical issues. End-of-therapy PET was performed 4-8 weeks after chemotherapy (minimum of 12 weeks in cases of radiotherapy) in 122/147 (82.9%) patients.
Patients fasted for at least 6 h before the 18F-FDG injection, and their serum glucose level was measured before administration to ensure blood glucose levels lower than 180 mg/dl. Each patient rested for 60 min and was then injected intravenously with a standard dose of 5 MBq/kg of 18F-FDG; whole-body acquisition was performed 60 min after injection. The 18F-FDG-PET-CT scans were performed at two sites: (a) on an integrated PET/CT system with a 16-channel CT (Discovery PET/CT 690; GE Healthcare, Waukesha, Wisconsin, USA) or (b) on dedicated PET equipment (PET Advance NXi; GE Healthcare), interpreted with concurrent CT scans of the neck, chest, abdomen, and pelvis at the same institution. PET/CT scans were acquired from the base of the skull to the mid-thigh, and dedicated PET scans from the top of the skull to the mid-thigh. Analysis of the staging PET-CT was carried out visually by at least one certified nuclear medicine physician and a radiologist with experience in PET/CT interpretation. End-of-treatment and iPET-CT scans were reported according to the 5-point scale (5-PS) using the Deauville criteria. Scores of 1, 2, and 3 were considered to indicate a complete metabolic response (CMR or PET negative), and scores of 4 and 5 were considered to indicate a partial metabolic response (residual metabolic disease or PET positive) [18]. The results of all PET scans were then centrally reviewed by one board-certified nuclear medicine physician (A.M.C.), who was blinded to clinical details and patient outcomes.
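As a minimal illustration of the dichotomization described above, the 5-PS scoring rule can be expressed as a short function. This sketch is ours, not part of the study protocol: scan scoring itself was performed visually by physicians, and the function and variable names are hypothetical.

```python
def metabolic_response(deauville_score: int) -> str:
    """Dichotomize a Deauville 5-point-scale score as used in this study.

    Scores 1-3 indicate a complete metabolic response (PET negative);
    scores 4-5 indicate residual metabolic disease (PET positive).
    """
    if deauville_score not in (1, 2, 3, 4, 5):
        raise ValueError("Deauville score must be an integer from 1 to 5")
    return "CMR (PET negative)" if deauville_score <= 3 else "PMR (PET positive)"

# Example: a score of 4 is reported as residual metabolic disease.
print(metabolic_response(4))  # PMR (PET positive)
```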
Statistical analysis
Statistical analyses were carried out using IBM SPSS Statistics for Windows, version 23.0 (IBM, Armonk, New York, USA). OS was calculated from the date of diagnosis to death or last patient follow-up. PFS was calculated from the date of diagnosis until disease progression, relapse, death (from any cause), or last patient follow-up, as described previously [13]. Survival curves were estimated using the Kaplan-Meier method and compared using the log-rank test. Multivariate analysis was carried out using a Cox proportional-hazards regression model, and hazard ratios (HRs) were calculated.
Differences between the results were considered statistically significant at a P value of less than 0.05.
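The survival analyses above were performed in SPSS. As a hedged illustration of the same workflow (Kaplan-Meier estimation, a log-rank comparison and a Cox proportional-hazards model), the sketch below uses Python with the lifelines package; the data frame, its column names and the toy values are hypothetical and are not taken from the study data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical per-patient table: follow-up time in months, event flag
# (death for OS), and a binary interim PET result (1 = PET positive).
df = pd.DataFrame({
    "months":   [48, 36, 12, 48, 24, 48, 6, 48],
    "death":    [0, 0, 1, 0, 1, 0, 1, 0],
    "ipet_pos": [0, 0, 1, 0, 1, 1, 1, 0],
})

neg, pos = df[df.ipet_pos == 0], df[df.ipet_pos == 1]

# Kaplan-Meier curve for one group (repeat for the positive group)
km = KaplanMeierFitter()
km.fit(neg.months, neg.death, label="iPET-CT negative")

# Log-rank comparison of the two groups
lr = logrank_test(neg.months, pos.months,
                  event_observed_A=neg.death, event_observed_B=pos.death)
print(f"log-rank P = {lr.p_value:.3f}")

# Cox proportional-hazards model; remaining column acts as the covariate
cph = CoxPHFitter().fit(df, duration_col="months", event_col="death")
print(cph.hazard_ratios_)  # HR for ipet_pos
```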
Results
The clinical characteristics of the 111 patients available for iPET-CT are shown in Table 1.
DLBCL classification and iPET-CT prognostic value
The iPET-CT and IHC analyses were carried out in 78 patients and showed that OS at 48 months in GC-DLBCL patients was 100% for iPET-CT-negative patients and 61.2% for iPET-CT-positive patients (P = 0.002) (Fig. 1). PFS was 100% for iPET-CT-negative patients and 60.3% for iPET-CT-positive patients (P = 0.001) (Fig. 2). There were no statistically significant differences in OS or PFS in the non-GC-DLBCL subgroup according to interim 18F-FDG-PET.
Discussion
The aim of this study was to analyze the value of the IPI, the COO determined by IHC, and iPET-CT as prognostic tools in DLBCL patients treated homogeneously with R-CHOP at a single center in Brazil. We showed that the GC subgroup, determined by IHC using Hans' algorithm, together with a negative interim 18F-FDG-PET after two cycles of treatment, identified a group with a very good outcome. Among the GC-DLBCL patients, OS at 48 months was 100% when the iPET-CT was negative and 61.2% when it was positive (P = 0.002). Furthermore, PFS was better for the GC subgroup when the iPET-CT was negative than when it was positive (100 vs. 60.3%, P = 0.001).
Lanic and colleagues reported similar results in GC-DLBCL patients who were iPET-CT negative (OS and PFS of 100% at 2 years) and, conversely, a poor-prognosis group defined by iPET-CT-positive patients (33% OS and 0% PFS at 2 years). In addition, the subgroup of patients who had signatures similar to activated B cells and were iPET-CT negative showed an unfavorable outcome, with a 2-year OS of 57%, when iPET-CT was performed after three of four cycles of R-CHOP-like therapy [19]. It is noteworthy that the authors obtained similar outcomes using GEP or IHC to determine DLBCL origin. In our study, the DLBCL subgroups were determined only by IHC analysis and Hans' algorithm [8]. This algorithm uses the CD10, BCL-6, and MUM-1/IRF4 markers in a hierarchical model with a 30% cutoff for positivity [8]. Although this is the most common criterion used to discriminate GC from non-GC subgroups of DLBCL, its prognostic value has been questioned in the rituximab era [20]. Even though GEP is the standard used to determine the COO in DLBCL, it is not yet widely available, is time consuming, and is expensive [21]. Recently, the feasibility of quantifying GEP using formalin-fixed paraffin-embedded tissue was published by Scott et al. [22], who described a robust method with more than 95% concordance of COO assignment between two independent laboratories. Although the results using Hans' algorithm correlate highly with those from GEP (86%), our findings need to be validated with the new methodologies reported above [21,22].
Before the rituximab era, the IPI was the most important predictor of survival and the strongest index for recognizing low-risk and high-risk NHL groups. However, this potential has been lost in the rituximab era [6]. Since functional imaging with 18F-FDG-PET changed the paradigm of staging and response monitoring in DLBCL [13], iPET-CT after a few cycles of chemotherapy has been studied exhaustively as a prognostic marker to identify patients who could benefit from an earlier change in treatment. However, the role of iPET-CT as a real prognostic tool remains unclear. Some studies have reported that a negative iPET-CT is associated with better OS and event-free survival [23]. Yet other studies have not confirmed these outcomes, in part because they used different methods for analyzing 18F-FDG-PET images and determining response [15,24,25]. Lanic et al. [19], using GEP in 57 cases of DLBCL, applied a semiquantitative method to interpret the iPET-CT based on SUVmax reduction, with values less than 70% defining slow metabolic responders and values higher than 70% defining fast responders [19,26,27]. In our trial, a qualitative method was used based on the 5-PS (Deauville criteria), as recommended by the Consensus of the International Conference on Malignant Lymphomas Imaging Working Group [18,28]. The 5-PS is feasible, simple, and has high interobserver agreement, with an improvement in positive predictive value. Furthermore, it has been validated for use at interim and end of treatment in several trials [28]. Using the 5-PS, we found that 60/111 (54.1%) patients were iPET-CT negative. Nols et al. [29] carried out qualitative and quantitative iPET-CT analyses and reported that iPET was highly and independently predictive of PFS and OS in DLBCL and that its negative predictive value (NPV) was improved by combination with the IPI. Silvia and colleagues also observed similar outcomes in terms of OS and PFS using iPET-CT analyzed by qualitative or semiquantitative methods [24]. However, other data showed that a favorable iPET was not associated with improved PFS in DLBCL patients [30].
In our study, we showed that iPET and bulky disease were independently predictive of OS. Moreover, in the GC-DLBCL subgroup, iPET presented a high NPV. The IPI was predictive of OS and PFS only in univariate analysis. Similarly, using a qualitative method, Lanic et al. [19] detected 14/45 (31%) iPET-CT-negative cases, although 36/45 patients (80%) were characterized as slow responders on the basis of semiquantitative criteria. In that study, using qualitative and quantitative methods, the authors identified a favorable group of DLBCL patients with GC origin who were iPET-CT negative [19].
Our results are in agreement with previous studies showing that iPET-CT is associated with a high NPV but a low positive predictive value [15,31]. Thus, we believe that a negative iPET-CT result may be used as a prognostic predictor of survival, but a positive result should not. In the future, these 'favorable' patients could be selected to receive less chemotherapy, especially the most vulnerable and very elderly.
Conclusion
The iPET-CT results and the discrimination of DLBCL subgroups based on Hans' IHC algorithm identified the iPET-CT-negative GC subgroup as having a very good prognosis. Further studies are needed to confirm our results.
"year": 2016,
"sha1": "51451a16f1b428a9edfc10403fbcb4117304d5d1",
"oa_license": "CCBYNCND",
"oa_url": "https://europepmc.org/articles/pmc5004620?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "51451a16f1b428a9edfc10403fbcb4117304d5d1",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
PREVALENCE AND FACTORS ASSOCIATED WITH HOUSEHOLD FOOD INSECURITY DURING COVID-19 OUTBREAK
The COVID-19 outbreak added unprecedented threats to the food system worldwide. The enactment of social restriction regulations by several provinces in Indonesia may have affected household food security. Economic access to food might have been compromised by the application of the work-from-home policy, particularly for those who earn income from informal-sector jobs. This study aims to determine the proportion of household food insecurity during the COVID-19 pandemic, identify the associated factors, and identify the strongest predictor of household food insecurity. This cross-sectional study was carried out in Java and Sulawesi. A self-administered Google Form questionnaire was completed by 191 women of reproductive age acting as the household food gatekeepers. Household food insecurity was evaluated using the Food Insecurity Experience Scale (FIES) questionnaire by FAO, which consists of eight gradual questions. Data analysis was performed using statistical software for univariate, bivariate (chi-square), and multivariate (logistic regression) analyses. The proportion of food insecurity in the study was 29.8%, encompassing 19.9% mild, 7.3% moderate, and 2.6% severe food insecurity. Food insecurity was significantly associated with place of residence, family income, and education. Living in an urban area was among the robust predictors of household food insecurity (OR 5.59, 95% CI), meaning that urban residence was a risk factor for household food insecurity during the COVID-19 pandemic. Urban livelihoods depend heavily on routine salaries, since urban households may not have the alternative income sources available in rural areas, yet incomes fell during the crisis. Food insecurity might be a sting in the tail of the COVID-19 pandemic, and food policy addressing this matter is urgently required.
Introduction
World Health Organization declared COVID-19 a Public Health Emergency of International Concern in early 2020. 1 A number of countries responded to this formal statement by implementing lockdowns, travel restrictions and social distancing in order to control virus transmission. 2 These policies were also applied in Indonesia as a less restrictive lockdown, known as large-scale social restriction (Pembatasan Sosial Berskala Besar) or PSBB. 3 The implementation of PSBB involved the closure of workplaces, places of worship and schools, with these activities conducted at home instead. 3 The transition caused by adaptation to this new habit brought dramatic changes to various sectors, including the food system. 2 The United Nations notes that the outbreak affects the second goal of the SDGs, as the COVID-19 crisis adds threats to the food system by putting more than 130 million more people at risk of suffering hunger. 4 However, there is a condition that precedes hunger and may involve a larger number of people at risk: food insecurity, which best describes the iceberg phenomenon. 5 Food insecurity is a condition of limited or uncertain availability of nutritious and safe food; as a social determinant of health, it can affect vulnerable groups at the household level such as women and children. 6 Food security, as defined by Indonesian Law No. 18, is the condition of individual or household food fulfillment reflected by food availability that is sufficient both quantitatively and qualitatively, safe, diverse, nutritious, affordable and not contradictory to religion, belief and culture, enabling a healthy, active and productive life in a sustainable manner. 7 The Global Hunger Index (GHI) 2020 ranked Indonesia 70th out of 107 countries, with a moderate level of hunger and a GHI score of 19.1. 8 Statistics Indonesia noted a declining prevalence of individual food insecurity from 2017 to 2020 (8.66% and 5.12%, respectively). 9 However, a secondary analysis of the 2015 nationwide household socioeconomic and expenditure survey (SUSENAS) indicated 20.8% household food insecurity. 10 Moreover, the 2019 Indonesia Food Insecurity Atlas identified six provinces (Bangka Belitung, West Kalimantan, Maluku, East Nusa Tenggara, West Papua and Papua) with low food security index scores, indicating a high ratio of consumption to net production per capita, a high prevalence of stunted under-five children, and a high proportion of poor people. 11 During the COVID-19 pandemic, Indonesian household consumption was predicted to decline by 8.29%. 12 Food insecurity is a barrier to sustainable development, since it leads individuals to be less productive, more susceptible to disease, and to earn less. 13 Four pillars construct food security: availability, access, utilization and stability. Access refers to how households reach food, through both economic and physical access, while utilization relates to dietary quality. The fourth pillar, stability, requires that the other pillars hold over time; when one of these pillars is not fulfilled, food insecurity may result. 16 Economic and physical access to food might have been impaired during the pandemic because of the application of the work-from-home policy.
The economic disruptions during the COVID-19 pandemic and their negative impacts on household food security are far more likely to affect the poor and vulnerable groups, such as market-dependent rural households and low-income urban households. 17 Incomes decreased, especially for those who earn daily wages in informal-sector jobs. A survey stated that 55 percent of men and 57 percent of women reported no longer working due to the pandemic. Job loss has affected all sectors and made these groups more vulnerable to food insecurity. 18 This study is pivotal to undertake because food security positively impacts education, health, economics and social development. Additionally, food insecurity is an unprecedented tail of the COVID-19 outbreak, and it is better to identify its determinants, to enable fast recovery of the food system through evidence-based program development and policy advocacy. This study aimed to determine the proportion of household food insecurity during the COVID-19 pandemic, identify the associated factors, and identify the robust predictors of household food insecurity.
Method
This cross-sectional study was conducted in Java and Sulawesi (both of which applied the large-scale social restriction policy) from May to August 2020 using a self-administered Google Form questionnaire. The questionnaire was broadcast for two months through online platforms, such as WhatsApp, Instagram and Facebook, to recruit participants, who were married women aged 17-49 years. In order to confirm participant eligibility, several filter questions were applied at the beginning of the questionnaire, asking about age, marital status and whether the respondent was in charge of food provision at home. Those who were ineligible were discontinued. In total, 191 women who were aged 17-49 years, married, and responsible for serving food to the family (food gatekeepers) participated.
The questionnaire consisted of two sections: demographic characteristics (age, place of residence, type of family, monthly income, education, occupation, and presence of under-five children) and household food insecurity. All data were obtained online with a validated questionnaire. Demographic characteristics were considered independent variables, while food insecurity was considered the dependent variable. Participant age was represented by two categories: 30 years old or younger, and older than 30 years. Place of residence was classified as urban or rural, based on the Statistics Indonesia classification. 19 Type of family was categorized as nuclear or extended. Family monthly income was grouped as at least 5 million IDR or less than 5 million IDR. Educational attainment was classified as high school graduate or higher-education graduate. Working status was grouped as not working or working. Household food insecurity was assessed using the Food Insecurity Experience Scale (FIES) questionnaire by FAO, which consists of eight gradual questions. The questionnaire asked about the family's experience in the past month: whether they worried about not having enough food, were unable to eat healthy food, ate only a few kinds of food, skipped mealtimes, compromised food portions, ran out of food, were hungry with no food, or went without eating for a whole day, all due to lack of money. 20 The scoring system for the food insecurity variable was as follows: food secure (score 0), mild food insecurity (score 1), moderate food insecurity (score 2 to 5), and severe food insecurity (score 6 to 8). 20
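The raw-score classification described above can be summarized as a short scoring function. This is an illustrative sketch using the cut-points stated in this section; the function name and input format are our own and are not part of the FIES instrument.

```python
def fies_category(responses):
    """Classify household food insecurity from the eight FIES items.

    `responses` is a list of eight yes/no answers (True = experienced
    the condition in the past month); the raw score is the count of
    affirmative answers, mapped to the categories used in this study.
    """
    if len(responses) != 8:
        raise ValueError("FIES requires exactly eight item responses")
    score = sum(bool(r) for r in responses)
    if score == 0:
        return "food secure"
    elif score == 1:
        return "mild food insecurity"
    elif score <= 5:
        return "moderate food insecurity"
    return "severe food insecurity"

# Example: a household answering yes to three items is moderately insecure.
print(fies_category([True, True, True, False, False, False, False, False]))
```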
Data were analyzed using statistical software for univariate, bivariate, and multivariate analyses with 95% CIs. Univariate analysis described the frequencies and proportions of the sample characteristics. Bivariate analysis was conducted using chi-square tests to identify potential associations between variables and household food insecurity. Multivariate analysis was then performed using logistic regression to estimate crude and adjusted odds ratios. Variables with significance values less than 0.05 were treated as covariates in the analysis.
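As a hedged sketch of the multivariate step, the snippet below fits a logistic regression and converts the coefficients to adjusted odds ratios using Python's statsmodels package (the study itself used a different statistical package); the variable coding and toy data are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical coding of the study variables: 1 = urban residence,
# 1 = income < 5 million IDR, 1 = high-school education or below.
df = pd.DataFrame({
    "urban":      [1, 1, 0, 1, 0, 0, 1, 1, 0, 1],
    "low_income": [0, 1, 1, 0, 1, 0, 0, 1, 1, 0],
    "low_educ":   [0, 1, 0, 0, 1, 1, 0, 1, 0, 1],
    "insecure":   [1, 0, 1, 1, 0, 0, 1, 0, 0, 1],
})

X = sm.add_constant(df[["urban", "low_income", "low_educ"]])
model = sm.Logit(df["insecure"], X).fit(disp=0)

# Adjusted odds ratios with 95% confidence intervals
odds_ratios = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI 2.5%": np.exp(model.conf_int()[0]),
    "CI 97.5%": np.exp(model.conf_int()[1]),
})
print(odds_ratios)
```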
This research was ethically approved by the Health Research Ethics Committee of Universitas Pembangunan Nasional Veteran Jakarta (approval number 2617/VI/2020/KEPK). Informed consent was obtained from each participant prior to data collection.
Results
A total of 210 subjects were assessed; 19 were excluded because of irrelevant responses to the food insecurity questions. The final analytical sample comprised 191 respondents. In our analysis, 29.8% of the 191 women were food insecure (95% CI). This food insecurity can be further categorized as mild, moderate, and severe (19.9%, 7.3% and 2.6%, respectively). Table 1 shows the subjects' characteristics for several variables. Most respondents were aged less than 30 years, lived in an urban area with a nuclear family, had a monthly income higher than 5 million IDR, were higher-education graduates, worked as civil servants, and had under-five children.
Independent variables that were significant associated with food insecurity based on bivariate analysis, were treated as covariates. Additionally, the type of family was also assessed on multivariate analysis since the size of the family may influence the food distribution at home and affect the household food security.
In logistics regression ( Table 3), place of residence, family monthly income, and education were significantly associated with food insecurity. The odds for having food insecurity is about 5.59 times higher on those who lived in urban compared to rural. Those who earned income less than 5 million IDR have chance of 0.13 times lower to suffer food insecurity as compared to those who earned more. Households with women food gatekeeper who attained primary and high school education had 0.23 times lower chance to suffer food insecurity as compared to those who were higher education graduates. Lower family income and educational attainment were found to be protective factors toward food insecurity. No significant association was found on logistics regression of type of family with food insecurity.
Discussion
No evidence shows that COVID-19 is foodborne; however, the crisis is having a major impact on food security through several dynamics. 21 The High-Level Panel of Experts on Food Security and Nutrition described seven dynamics caused by the COVID-19 outbreak that affect food security. 15 The first is the disruption of food supply chains, in the form of declining demand for perishable foods and export-import border restrictions. This dynamic affects food availability, mainly of fruits, vegetables and animal protein, since people prefer to stock foods with a longer shelf life, often at the cost of nutritional value. 22 The second is the economic recession, compounded by income losses. The reduction in regular income and in the number of employees, some even laid off, affects family purchasing power and economic access to food. To cope with such a situation, households that were already vulnerable before the COVID-19 outbreak compromise their eating patterns quantitatively (reducing portions) and qualitatively (compromising diversity). 23 The third is social inequities: the 2020 national online survey undertaken by the National Commission on Violence Against Women revealed the effects of the outbreak on women's lives, among them increases in domestic burden, stress, and domestic violence. 24 The SMERU Research Institute reported that COVID-19 lowered women's labor force involvement, since it hit informal-sector jobs that mainly engage female workers. Consequently, family income decreased while women remained the household food gatekeepers. 25 The fourth is disruption of social protection programs: with the implementation of PSBB, the government expanded various social protection programs for vulnerable families as never before, such as increased allowances for conditional cash transfers, food assistance, the pre-employment card, social assistance, unconditional cash transfers, electricity subsidies and logistics. 26 However, at the end of 2020, the Indonesian Corruption Eradication Commission (KPK) revealed social assistance bribery involving several officials from the Ministry of Social Affairs, leaving a significant loss for Indonesia and a deepened disappointment. 27 The fifth is food environment alteration. During the PSBB period, the Indonesian government still allowed food retailers to operate; however, at the beginning of the policy announcement, panic buying occurred, especially in big cities like Jakarta. 28 Such circumstances affect the whole food environment and food prices. In contrast, some people restricted themselves from traveling far to shop and preferred nearby supermarkets, to avoid potential disease transmission. 15 The sixth is food price increases following government-regulated closures of restaurants and markets. During the crisis, households with limited economic access prefer staple foods that are more filling yet low in nutritional value, as food prices are inversely linked with nutritional value. 29 The seventh is changes in food production due to workplace closures.
The proportion of food-insecure households in our analysis was 29.8% (95% CI). This comprised 19.9% mild food insecurity, meaning families that experienced uncertainty regarding their ability to obtain food, and 7.3% moderate food insecurity, meaning families that compromised food quality and variety, and even reduced food quantity and skipped meals.
This can be caused by insufficient money to afford food of adequate quality. The last category was severe food insecurity (2.6%), meaning a family had no food for a day or more, which can result from running out of food. 30 A secondary analysis of the 2015 nationwide household socioeconomic and expenditure survey (Survei Sosial Ekonomi Nasional) indicated that 20.8% of households were in the food-insecure category, 10 less than what was found in this study. This can be attributed to the specific circumstances of the COVID-19 outbreak, during which the work-from-home policy was applied, affecting economic access to food. 25 The bivariate analysis showed that family size had no association with household food security. However, a Nigerian study noted that the larger the family size, the less food is available for each person within the household. 31 Regarding the association between place of residence and food security in the logistic regression, households in urban areas tended to be more food insecure than those in rural areas, which is contrary to the secondary analysis of the socioeconomic survey; that finding was attributed to the wide gap in household income between the two areas, urban areas being mostly more economically developed than rural areas. 10,25 However, our study found the inverse, which can be attributed to urban households not having any alternative source of income other than routine work. 32 No significant association was found between type of family and food security using logistic regression.
However, it showed a significant association in the bivariate analysis. Living with a bigger family makes it more likely that the dependency ratio is high. 10 A significant association was also found between education and food security. Higher education gives people more choices of work and better income. 10,32 Inverse results were also found for the association between family income, educational attainment and household food insecurity. A possible explanation for this can be attributed to income reductions during the outbreak. Moreover, those who earn more than 5 million IDR may have allocated part of their income to credit purchases, which may have compounded difficulties in food provisioning.
Food insecurity may affect individual nutritional status, as it results in micronutrient deficiency, starvation, morbidity, and even mortality. 33,34 Four pillars construct food security: food availability, food access, food utility, and food stability. Food availability means that food is available and adequate for consumption. Food access refers to household affordability in terms of both physical and economic access to food. Food utility means that households or individuals can utilize food as it should be, with no pathophysiological condition. Food stability means that households or individuals have food available, accessible, and utilizable at all times without restriction. 35 Finally, food policies should be advocated: for instance, implementing targeted social protection by providing adequate food aid and food assistance, ensuring special protection for vulnerable groups, maintaining food prices, discouraging food exports, and developing campaigns to educate people on healthy eating through consumption of localized and affordable foods.
Conclusion
A high prevalence of household food insecurity was revealed during the COVID-19 outbreak (29.8%). Factors associated with household food insecurity were place of residence, family income, and educational attainment. Family income and living in an urban area were found to be risk factors for household food insecurity. A possible explanation for this finding is that urban families may not have alternative sources of income, compared to rural families, with which to cope with the changes in the food system during the pandemic.
"year": 2021,
"sha1": "b1532904c9954db26dd86a33a7fadea92be6eaac",
"oa_license": "CCBYNCSA",
"oa_url": "https://ejournal.fkm.unsri.ac.id/index.php/jikm/article/download/743/359",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a05e47e9febd73405a8643980ec2f3914ad698ca",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Geography"
]
} |
Modification of a gas exchange system to measure active and passive chlorophyll fluorescence simultaneously under field conditions
Solar-induced fluorescence (SIF) is a promising tool for estimating photosynthesis across scales; however, limited research has been done at the leaf level to investigate the relationship between SIF and photosynthesis. To help bridge this gap, a LI-COR LI-6800 gas exchange instrument was modified with a visible-near-infrared (VIS-NIR) spectrometer to measure active and passive fluorescence simultaneously. The system was adapted by drilling a hole into the bottom plate of the leaf chamber and inserting a fibre-optic to measure passive steady-state fluorescence (Ft,λ, analogous to SIF) from the abaxial surface of a leaf. This new modification can concurrently measure gas exchange, passive fluorescence and active fluorescence over the same leaf area and will allow researchers to measure leaf-level Ft,λ in the field to validate tower-based and satellite measurements. To test the modified instrument, measurements were performed on leaves of well-watered and water-stressed walnut plants at three light levels and a constant air temperature. Measurements on these same plants were also conducted using a similarly modified Walz GFS-3000 gas exchange instrument for comparison. We found a positive linear correlation between Ft,λ measurements from the modified LI-6800 and GFS-3000 instruments. We also report a positive linear relationship between Ft,λ and normalized steady-state chlorophyll fluorescence (Ft/Fo) from the pulse-amplitude modulation (PAM) fluorometer of the LI-6800 system. Accordingly, this modification will inform the link between spectrally resolved Ft,λ and gas exchange, leading to improved interpretation of how remotely sensed SIF tracks changes in the light reactions of photosynthesis.
Introduction
Light absorbed by chlorophyll pigments in plants can be utilized in three pathways. When photons excite the electrons in a plant's photosystems, the energy can drive the light reactions of photosynthesis (photochemistry), dissipate as heat (non-photochemical quenching, NPQ) or re-emit as light at longer wavelengths (650-850 nm, chlorophyll fluorescence). With three alternative pathways, disentangling the distribution of light energy can be difficult unless one of the pathways is inhibited. Most commonly, NPQ is inhibited by pulse-amplitude modulation (PAM) fluorimetry at the leaf level through rapid saturation of the photosystems with a pulse of light at a time scale where NPQ is negligible (Maxwell and Johnson 2000). As such, leaf-level PAM fluorimetry has become a powerful tool for assessing changes in photosynthetic machinery and plant performance (Krause and Weis 1991; Bilger et al. 1995; Govindjee 1995). The availability and relative ease of operation of commercial handheld fluorometers has made PAM an attractive technique to probe the first steps of photosynthesis. Pulse-amplitude modulation fluorimetry measurements are used to assess the efficiency of photosystem II (PSII) photochemistry, which has been found to correlate strongly with CO2 fixation under normal conditions (Genty et al. 1989; Edwards and Baker 1993). Such comparisons have been enabled through the use of gas exchange instruments with infrared gas analysers (IRGAs), which measure carbon and water fluxes at the leaf scale (Long and Bernacchi 2003). Gas exchange measurements and PAM fluorescence can be used in a complementary fashion to understand the relationships between net CO2 assimilation (Anet), PSII yields and NPQ (e.g. Flexas et al. 1999; Magney et al. 2017).
Chlorophyll fluorescence can be classified as active (PAM) or passive, based on the source of excitation light. However, the direct link between active and passive fluorescence and its functional relationship to carbon uptake via photosynthesis remain unclear (Porcar-Castell et al. 2014). While PAM measurements are versatile and commonplace across the plant sciences, they are limited to the individual leaf scale and a specific point in time. In an effort to scale these measurements from the leaf to the canopy and from seconds to seasons, the solar-induced fluorescence (SIF) technique was derived using the same principle as PAM, by measuring the radiative loss of energy of photons absorbed by chlorophyll in the 650-850 nm range (Meroni et al. 2009). As a result, there has been recent interest in developing techniques to continuously measure chlorophyll fluorescence at increasing spatial scales with ground-based (Porcar-Castell 2011) and remote sensing platforms (Yang et al. 2015; Magney et al. 2019a). Solar-induced fluorescence measurements do not require a modulating light and are considered to be the most closely related remote sensing proxy of photosynthesis from towers and satellites (Mohammed et al. 2019). Several challenges exist with passive retrieval of chlorophyll fluorescence compared to the retrieval of PAM fluorescence. First, the SIF signal is relatively weak (~1-5% of reflected light) compared to the reflectivity of leaves across the 650-850 nm range (Meroni et al. 2009). Additionally, up to 90% of red fluorescence photons can be re-absorbed by the leaf because of the overlapping nature of the chlorophyll emission and absorption spectra (Gitelson et al. 1998). Lastly, the PAM technique measures a broad spectral region (>650 nm), while remote sensing measurements are made across individual wavelengths, where the spectral shape can be quite dynamic (Magney et al. 2019b). This makes it difficult to directly relate leaf-level PAM measurements to SIF retrieved from remote sensing systems at a range of scales. For these reasons, there has been interest in developing leaf-scale measurement techniques under controlled conditions to simultaneously measure SIF (Ft,λ), PAM fluorescence and gas exchange, to validate fluorescence signals from remote sensing platforms. Magney et al. (2017) developed a novel method for connecting passive and active fluorescence measurements at the leaf level by modifying a Walz GFS-3000 gas exchange system (described in more detail in the Materials and Methods section). This new method was effective but limited to lab measurements due to portability constraints, temperature/humidity sensitivities of the spectrometer and imperfect transmission of incident light through the shortpass filter at the top of the leaf chamber.
Here, we outline a method to address these limitations by modifying a LI-COR LI-6800 gas exchange system in combination with a visible-near-infrared (VIS-NIR) spectrometer that: (i) is suitable for field conditions; (ii) measures gas exchange, passive fluorescence and active fluorescence concurrently; and (iii) measures the same leaf area under a wide range of controlled conditions for in vivo experiments. To accomplish this, a hole was drilled into the bottom chamber plate of the LI-6800 instrument to measure Ft,λ from the bottom (abaxial) surface of leaves, because the fluorometer on the LI-6800 does not allow spectral measurements from the top. The system was tested by measuring gas exchange and fluorescence in the field with the LI-6800 and comparing to fluorescence measurements of the same plants made with the GFS-3000.
Theory
A schematic of the propagation of emitted red and far-red chlorophyll fluorescence is provided in Fig. 1. Because PAM fluorescence is retrieved from the top of the leaf in the LI-6800, there may be some differences between active fluorescence from the adaxial (top) side of the leaf and passive fluorescence measured on the abaxial (underside) of the leaf. However, we expect that changes in fluorescence yield (normalized for light intensity) will scale similarly. To understand this in detail, it is necessary to distinguish between passive fluorescence measured via spectrometer (Ft,λ) and steady-state active fluorescence from PAM (Ft). Ft,λ is the wavelength-specific radiant energy flux of chlorophyll fluorescence, which can be defined as:

$$F_{t,\lambda} = \mathrm{aPAR} \cdot \Phi F \cdot \varepsilon_{\lambda} \quad (1)$$

where aPAR is the photosynthetically active radiation absorbed by chlorophyll, ΦF is the probability that a photon absorbed by a leaf will be fluoresced and ελ is the probability of the photon escaping the leaf at wavelength λ. Ft derived from PAM fluorometry is defined as:

$$F_{t} = \mathrm{aPAR_{ML}} \cdot \Phi F \cdot \varepsilon_{p} \quad (2)$$

where aPAR_ML is the photosynthetically active radiation absorbed from the modulating light source and εp is the integral of ελ beyond a longpass filter (>650 nm) (Magney et al. 2017).
To compare passive and active fluorescence from the GFS-3000 and the LI-6800, two normalized fluorescence values were computed. The Ft,λ yield measured by the spectrometer was calculated by the following equation:

$$F_{t,\lambda}\ \mathrm{yield} = \frac{F_{t,740}}{R_{\mathrm{red\ peak}}} \quad (3)$$

where Ft,740 represents the peak magnitude of fluorescence at 740 nm (Figs 1 and 3B) and R_red peak is the maximum value of reflected red energy at 640 nm (Figs 1 and 3A). Passive fluorescence was normalized by the red peak to eliminate possible error from movement of the fibre-optic or from differences in leaf thickness or incoming irradiance. Fluorescence yield measured by PAM fluorometry was calculated by the following equation:

$$\mathrm{PAM\ yield} = \frac{F_{t}}{F_{o}} \quad (4)$$

where Fo is the baseline fluorescence of the dark-adapted leaf and Ft is the steady-state fluorescence in the light (Flexas et al. 2002).
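As an illustrative sketch of Eq. (3), the snippet below extracts Ft,740 and the red reflectance peak from a measured spectrum and returns their ratio. The search windows around 740 and 640 nm, the function name and the synthetic example spectrum are our own assumptions, not part of the published processing chain.

```python
import numpy as np

def ft_lambda_yield(wavelengths, radiance):
    """Compute the normalized passive fluorescence yield F_t,740 / R_red_peak.

    `wavelengths` (nm) and `radiance` are 1-D arrays from a calibrated
    spectrometer measurement of the abaxial leaf surface. The far-red
    fluorescence peak is taken as the maximum near 740 nm and the red
    reflectance peak as the maximum near 640 nm, per Eq. (3).
    """
    wavelengths = np.asarray(wavelengths)
    radiance = np.asarray(radiance)
    far_red = (wavelengths >= 730) & (wavelengths <= 750)  # window around 740 nm
    red = (wavelengths >= 630) & (wavelengths <= 650)      # window around 640 nm
    f_t_740 = radiance[far_red].max()
    r_red_peak = radiance[red].max()
    return f_t_740 / r_red_peak

# Example with a synthetic spectrum: a red LED peak plus a weak 740-nm emission.
wl = np.arange(400, 900, 1.0)
spec = 1.0 * np.exp(-((wl - 640) / 15) ** 2) + 0.02 * np.exp(-((wl - 740) / 20) ** 2)
print(f"F_t,lambda yield = {ft_lambda_yield(wl, spec):.4f}")
```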
Modified LI-6800 with Flame spectrometer
An LI-6800 Portable Photosynthesis System (LI-COR, Lincoln, NE, USA) was modified slightly to accommodate dual measurements of leaf gas exchange and Ft,λ by drilling a hole into the bottom chamber plate (part number 9968-315) of the Multiphase Flash™ Fluorometer measuring head using a precision drill and a 1.5-mm drill bit. The spectrometer fibre-optic was fed through the hole and held in place at a set distance from the sample with two O-rings (size 003; Fig. 2) to measure passive fluorescence from the abaxial surface of leaves. To ensure that the chamber was sealed, adhesive putty was applied around the fibre-optic outside of the chamber.
PAM fluorometer and light source. The LI-COR LI-6800 Multiphase Flash™ Fluorometer with a 6-cm² circular chamber was used to measure active chlorophyll fluorescence parameters. The PAM fluorometer has a red actinic and saturating flash peak wavelength of 625 nm (Fig. 3A). The rectangular saturation pulse emitted 15 000 μmol m⁻² s⁻¹ of light for 0.8 s. Three saturating pulses were applied at each light level, and the steady-state fluorescence yield (Ft) was recorded before each pulse. Ft values were obtained immediately before saturation pulses, once net CO2 assimilation and stomatal conductance had reached steady state (details below).
Gas exchange. The LI-6800 uses two IRGAs to calculate the change in water and carbon dioxide concentrations between a reference chamber and the leaf sample cell chamber. Gas exchange parameters, including net CO2 assimilation (Anet) and stomatal conductance (gs), were measured at steady state (no more than a 1% change in parameters over 1 min) at three PAR levels under a constant temperature. At 50 μmol m⁻² s⁻¹ PAR, steady state was achieved in around 10 min; at 500 and 1500 μmol m⁻² s⁻¹ PAR, in around 20 min.
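The steady-state criterion above (no more than a 1% change over 1 min) can be expressed as a simple check on the logged trace of a parameter. The sketch below is illustrative only; the sampling interval and function names are assumptions rather than part of the LI-6800 firmware.

```python
import numpy as np

def at_steady_state(values, seconds_per_sample=5, window_s=60, tol=0.01):
    """Check the steady-state criterion used here: no more than a 1 %
    relative change in a gas exchange parameter (e.g. A_net or g_s)
    over the most recent 1-min window of logged values.
    """
    n = max(2, window_s // seconds_per_sample)  # samples spanning the window
    recent = np.asarray(values[-n:], dtype=float)
    if len(recent) < n:
        return False                             # not enough data yet
    spread = recent.max() - recent.min()
    return spread <= tol * abs(recent.mean())

# Example: an A_net trace that has settled to within 1 % over the last minute.
trace = [14.2, 14.8, 15.25, 15.27, 15.28, 15.30, 15.31, 15.30,
         15.32, 15.31, 15.30, 15.31, 15.32, 15.31]
print(at_steady_state(trace))  # True
```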
Spectrometer. A Flame VIS-NIR spectrometer (Ocean Optics, Dunedin, FL, USA) was used to measure abaxial fluorescence from the bottom of the LI-6800 chamber. The Flame spectrometer was chosen for its portability and thermal stability (https://www.spectrecology.com/2015/04/thermal-stability-of-new-flame-bench/), making it more suitable for field measurements than the QE Pro (Ocean Optics, Dunedin, FL, USA) outlined in Magney et al. (2017). Additionally, the Flame spectrometer is powered by a 5-V USB connection, so there is no longer a need for AC power, as is required by the QE Pro. However, the Flame spectrometer has a lower signal-to-noise ratio and lower spectral resolution than the QE Pro. The Flame spectrometer covers 339-1009 nm with a 1.33-nm full-width half-maximum (FWHM) and has a linear silicon CCD array with an entrance slit of 25 μm, a #1 grating and 2048 pixels. Absolute calibration was performed using an integrating sphere and a reference Analytical Spectral Devices (ASD) spectrometer at the Jet Propulsion Laboratory in Pasadena, CA, USA. The spectra were measured with an integration time of 10 ms, and data were recorded every 0.2 s (co-adding 20 spectra). The integration time was chosen to avoid optical sensor saturation at the maximum light level of 1500 μmol m⁻² s⁻¹ PAR. The 10-ms integration time is the minimum for the Flame spectrometer, but an increased integration time at lower light levels would improve the signal-to-noise ratio and is recommended for sampling at light intensities <100 μmol m⁻² s⁻¹ PAR. A sample light spectrum measured with the Flame spectrometer is shown in Fig. 3, highlighting the incoming LED spectrum in Fig. 3A.
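As an illustrative sketch of the co-adding step described above, the snippet below averages blocks of 20 consecutive 10-ms scans into one recorded spectrum; for uncorrelated noise this improves the signal-to-noise ratio by roughly the square root of the number of co-added scans. Array shapes and names are hypothetical.

```python
import numpy as np

def coadd_spectra(scans, n_coadd=20):
    """Average blocks of consecutive scans (co-adding), as done here with
    twenty 10-ms integrations per recorded spectrum (one every 0.2 s).

    `scans` has shape (n_scans, n_pixels); for uncorrelated noise the
    signal-to-noise ratio improves by roughly sqrt(n_coadd).
    """
    scans = np.asarray(scans, dtype=float)
    n_blocks = scans.shape[0] // n_coadd          # drop any incomplete block
    blocks = scans[: n_blocks * n_coadd].reshape(n_blocks, n_coadd, -1)
    return blocks.mean(axis=1)                    # one spectrum per 0.2 s

# Example: 100 noisy scans of a 2048-pixel detector -> 5 co-added spectra.
rng = np.random.default_rng(0)
raw = 1000.0 + rng.normal(0, 30, size=(100, 2048))
out = coadd_spectra(raw)
print(out.shape, raw.std(), out.std())  # (5, 2048); noise shrinks ~sqrt(20)
```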
Modified Walz GFS-3000 with QE Pro spectrometer
Adaxial measurements of passive and active fluorescence were made using the instrument and spectral nomenclature described in Magney et al. (2017). A portable GFS-3000 gas exchange and fluorescence system (Heinz Walz GmbH, Effeltrich, Germany) was used to validate and compare fluorescence measurements from the modified LI-6800 and Flame spectrometer set-up. The GFS-3000 was slightly modified to use a bundled fibre-optic to connect the PAM fluorescence module and the QE Pro spectrometer; all details can be found in Magney et al. (2017).
Experimental design and analysis. Eight Juglans regia cv. saplings (grafted on Vlach rootstock) were planted in 5-gallon pots with UC soil mix, and 44.4 mL of Osmocote® Smart-Release® Plant Food Plus fertilizer was added to each pot. The plants were grown in a UC Davis lath house. Two water treatments (well-watered and water-stressed) were implemented based on soil water mass percentage. The plants (four replicates per treatment) were watered to maintain around 75% and 25% of completely saturated soil by weight. Every plant was weighed individually and watered to maintain its water treatment status between 4 and 6 PM every day, and plants were monitored using a Model 600 Leaf Pressure Chamber (PMS Instrument Company, Albany, OR, USA) to measure midday leaf water potentials.
The modified LI-6800 system was used for gas exchange and active and passive fluorescence measurements in the lath house. The youngest fully expanded intact leaf was chosen and dark-adapted for 15 min. While 15 min may not ensure complete dark adaptation, we were not conducting a full physiological evaluation of these parameters, but rather comparing two measurement techniques. Measurements were made at a constant temperature (30 °C), constant relative humidity (45%) and three different PAR levels (50, 500 and 1500 μmol m⁻² s⁻¹), chosen as low, medium and high light levels to span a range for comparison. The carbon dioxide concentration was held constant at 420 ppm. The plants were then taken to the lab for comparative measurements with the modified Walz GFS-3000 (Magney et al. 2017). The same temperature, light and environmental settings used for the LI-6800 measurements were used for the GFS-3000 measurements. Measurements were taken between 9 AM and 3 PM. All code for data analysis was developed in Python v 3.7. The scipy.stats package was used for simple linear regression.
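Since the analysis code was written in Python 3.7 with scipy.stats, the regression step can be illustrated as below; the paired yield values shown are hypothetical placeholders, not the measured data.

```python
from scipy import stats

# Hypothetical paired yields from the two instruments at one light level:
# abaxial F_t,lambda yield (modified LI-6800) vs. adaxial yield (GFS-3000).
abaxial = [0.010, 0.014, 0.018, 0.022, 0.027, 0.031]
adaxial = [0.024, 0.033, 0.041, 0.052, 0.063, 0.071]

fit = stats.linregress(abaxial, adaxial)
print(f"slope = {fit.slope:.2f}, r^2 = {fit.rvalue**2:.2f}, P = {fit.pvalue:.3g}")
```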
Results and Discussion
The adapted LI-6800 system effectively measured Ft,λ, PAM fluorescence and gas exchange simultaneously. We evaluated the performance of the new system by testing how effectively this array of observations responds to water-deficit treatments. The spectra in Fig. 4 show that both the Ft,λ magnitude and spectral shape change with water stress for both modified systems (LI-6800 and Flame spectrometer; GFS-3000 and QE Pro spectrometer). The corresponding steady-state Ft,λ peaks decrease in response to water stress, similar to the responses of other plant species (Ač et al. 2015; Magney et al. 2019b). This result is attributed to the apparent connection between stress-induced changes in the light reactions of photosynthesis and chlorophyll fluorescence (Flexas et al. 2002). Water stress leads to a decline in the rate of photosynthesis due to stomatal and non-stomatal limitations, which affect chloroplast activity (Ogren and Oquist 1985; Flexas et al. 1999, 2002). It is important to note that stomata and chlorophyll fluorescence respond on different time scales, but generally a decline in photosynthetic capacity results in a smaller number of excited electrons in the photosystems that could be fluoresced (Gu et al. 2019).
The new LI-6800 and Flame spectrometer system measures emitted chlorophyll-a fluorescence from the abaxial leaf surface, while the GFS-3000 and QE Pro spectrometer measures the fluorescence signal from the adaxial surface (Fig. 1). The abaxial Ft,λ signal measured with the modified LI-6800 system is smaller (60.1 ± 7.8% average decrease) than the top-of-leaf Ft,λ signal measured with the modified GFS-3000 system (Fig. 4). Additionally, the shapes of the fluorescence spectra from the top and bottom of the leaf are slightly different (Fig. 4). The spectra taken from the top of the leaf (modified GFS-3000 system) contain a second, smaller peak at 685 nm, typical of top-of-leaf fluorescence spectra. The absence of this second peak in the abaxial fluorescence measurements is explained by the re-absorption of red-emitted fluorescence along the optical path through the leaf (Fig. 1; Vogelmann and Evans 2002; Van Wittenberghe et al. 2015). While there are differences in spectral shape between the adaxial and abaxial fluorescence measurements, the magnitude of the Ft,λ sensitivity to water stress is comparable. While it is important to understand the spectral shape of abaxial fluorescence, this should not be an issue for researchers looking to compare the magnitude (not spectral shape changes) of Ft,λ with PAM fluorescence and gas exchange parameters. We assume the spectral shape of far-red fluorescence is quite stable (Magney et al. 2019b), so research conducted using the modified LI-6800 to examine spectral variability may not yield clear results regardless of the direction of fluorescence. Additionally, because we measure from the abaxial surface of the leaf, we do not require an optical filter, because much of the incoming red light is absorbed on its path through the leaf. According to the manufacturer's specifications of the LI-6800 instrument, the spectral response function of the red LED light source shows no light beyond 700 nm. Therefore, we recommend using far-red Ft,λ beyond 700 nm when interpreting measurements from this modified instrument, despite the seemingly nominal impact of red light at wavelengths >670 nm.
Positive linear relationships were observed between adaxial and abaxial F t,λ at both 500 and 1500 μmol m−2 s−1 PAR (Fig. 5). F t,λ measurements at 50 μmol m−2 s−1 PAR were excluded from these results due to very small signals at the low light level (within the noise of dark measurements). Increasing the integration time may increase the signal-to-noise ratio for lower light intensities, but it was held constant during measurements to maintain consistency. Positive linear relationships between F t,λ yield from both measurement techniques suggest that the modified instrument described here is a viable method to measure F t,λ in the field. Further investigation must be done to examine differences in slope and linear fit at different light levels, but one explanation could be related to Beer's law of light attenuation, whereby an exponential decrease in the amount of light transmitted from the top to the bottom of the leaf would also impact the amount of fluoresced photons (Monsi et al. 2005). Additionally, the difference in slope could be attributed to changes in chloroplast positioning at different light intensities (Tholen et al. 2008; Van Wittenberghe et al. 2019), or from gasket compression effects due to taking measurements with both instruments on the same leaf area.
The relationship between SIF (F t,740) measured with the LI-6800/Flame spectrometer and net CO2 assimilation is shown in Fig. 6. In general, F t,λ increases with photosynthesis. In theory and in practice, the reduction of F t,λ will be less than that of photosynthesis at the leaf scale, but previous studies have shown that a decrease in A net typically results in a decrease in SIF (Magney et al. 2017; Gu et al. 2019; Helm et al. 2020). Electron transport rate (ETR) measured with the LI-6800 fluorometer is plotted with F t,λ to show the modified instrument's capability to simultaneously measure passive fluorescence, active fluorescence and gas exchange.
To compare the passive and active fluorescence measurements taken with our modified LI-6800, F t,λ yield was plotted against the fluorescence yield (F t/F o) from the PAM fluorometer (Fig. 7). A positive linear relationship between F t,λ and PAM fluorescence was observed (r² = 0.68). Passive and active fluorescence measurements corresponding to water-stressed walnut trees are lower in magnitude (and more tightly correlated) than those of the well-watered trees. The stronger relationship between passive and active fluorescence measurements of the water-stressed plants could be explained by higher water stress leading to a more homogeneous intercellular environment as the stomata close, thereby reducing the variability in photochemical quenching compared to the well-watered counterpart. M. Momayyezi et al. (University of California, Davis, CA, USA, unpubl. res.) found that water stress decreases leaf thickness and cell packing in J. regia leaves, resulting in a proportional decrease of stomatal conductance. We speculate that the resulting increase in porosity creates a more homogeneous intercellular environment in the water-stressed leaves. Additionally, because the exact location of the fibre-optic field of view (FOV) was slightly different for the Flame and QE Pro spectrometer methods, differences could be the result of leaf heterogeneity in chloroplast positioning, pigment concentrations and leaf thickness. F t/F o decreases with increasing water stress due to a series of reactions in response to drought-induced stomatal closure. Under extreme drought conditions, NPQ might become the dominant energy pathway as photochemistry and chlorophyll fluorescence become less competitive processes (Flexas et al. 2002). Although the source of excitation light and the interpretation of the signals of passive and active fluorescence measurements are different, both methods measure steady-state chlorophyll fluorescence, so changes in one signal should result in a similar change in the other. Unlike PAM fluorescence, the quantum yield of photochemistry cannot be directly calculated from F t,λ measurements. However, we expect that tools aiming to understand the relationship between F t,λ and PAM will help to disentangle the mechanistic relationship between remotely sensed SIF and photosynthesis (Porcar-Castell et al. 2014). Since SIF is primarily sensitive to the light reactions of photosynthesis, while A net represents actual carbon assimilation and includes information on the carbon reactions of photosynthesis, further research using the proposed method could help to elucidate when the SIF:photosynthesis relationship converges or diverges, and what the drivers of this divergence might be.
Conclusions
The modified LI-6800 system was successful in concurrently measuring the F t,λ, PAM fluorescence and gas exchange of the same leaf area. Unlike previous instruments with these measuring capabilities, this instrument is portable and can withstand the changes in temperature and humidity likely to be experienced in the field. We report a F t,λ yield comparison between the abaxial F t,λ measured from the LI-6800/Flame spectrometer system and the adaxial F t,λ from the GFS-3000/QE Pro spectrometer system to validate the F t,λ signal and its response to changes in light and water stress. We also report the relationship between F t,λ and PAM fluorescence measured with our modified system to better understand how remotely sensed SIF may be related to photosynthesis. Due to the nature of this instrument modification (i.e. F t,λ measured abaxially while PAM fluorescence is measured adaxially), there cannot be a direct comparison between the fluorescence measurement types, but the system still facilitates a comparison to link the relationship between the two signals. This study was limited to a single species (J. regia) grown in a lath house. Future research with this device (or one similar) might look at multiple species, aiming to advance the understanding of the biological mechanisms of F t,λ. Additionally, using this device in the field with concurrent tower-based
Figure 1. Conceptual figure of an excited chlorophyll molecule from the adaxial side of a J. regia leaf. The J. regia cross-section was imaged using a confocal microscope at 10× (Nikon C2+, Nikon Instruments Inc., Melville, NY, USA). When a chlorophyll molecule near the top of the leaf is excited, red and far-red photons are emitted in all directions, with red photons being re-absorbed by other Chl molecules within the leaf. Meanwhile, both red and far-red photons are attenuated as they travel through the leaf, resulting in a lower magnitude of abaxially retrieved SIF (F t,λ) vs. adaxially retrieved SIF (F t,λ). This is consistent with the absorption of visible and near-infrared light, resulting in reduced transmittance on the abaxial leaf surface. Measurements made from the system described in this paper are made on the abaxial side of the leaf, where we expect a greater reduction in overall F t,λ and in the red:far-red F t,λ ratio.
Figure 2. (A) Side-view schematic of the modified LI-6800 leaf chamber plate with inserted spectrometer fibre-optic. (B) Top-view schematic of the modified plate, which highlights the orientation of the 1.5-mm drilled hole in reference to the temperature sensor hole. (C) Photo of the modified system with fibre-optic inserted into the leaf chamber.
Figure 3. (A) Light spectrum of a walnut leaf at 500 μmol m−2 s−1 PAR measured with the modified LI-6800 and Flame spectrometer. (B) Enlarged spectra showing F t,λ from all Juglans regia samples of the experiment.
Figure 4. F t,λ spectra at 500 μmol m−2 s−1 PAR from (A) a well-watered walnut tree and (B) a water-stressed walnut tree. The plots contain fluorescence from the modified LI-6800, modified GFS-3000 and the sum of the two spectra.
Figure 5. Relationship between F t,λ yield measured with the LI-6800 and F t,λ yield measured on the GFS-3000 at 740 nm at (A) 500 μmol m−2 s−1 PAR and (B) 1500 μmol m−2 s−1 PAR. Each point represents F t,λ yield at 740 nm measured on the same leaf area with the respective instrument.
Figure 6. Relationship between F t,λ at 740 nm, ETR and net CO2 assimilation (A net). Each point corresponds to the average F t,740, ETR and A net.
Figure 7. Relationship between LI-6800 F t,λ yield and fluorescence yield via PAM fluorometry. Points at both 500 and 1500 μmol m−2 s−1 PAR are included. | 2020-12-17T09:10:12.593Z | 2020-12-06T00:00:00.000 | {
"year": 2021,
"sha1": "32279425b05df2e7e955b3d30cabfe3ea8310402",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/aobpla/article-pdf/13/1/plaa066/40326956/plaa066.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "fa421088f52f36c9708b1cd1c663c1510dd07d55",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
12601463 | pes2o/s2orc | v3-fos-license | Quantum Perceptron Models
We demonstrate how quantum computation can provide non-trivial improvements in the computational and statistical complexity of the perceptron model. We develop two quantum algorithms for perceptron learning. The first algorithm exploits quantum information processing to determine a separating hyperplane using a number of steps sublinear in the number of data points $N$, namely $O(\sqrt{N})$. The second algorithm illustrates how the classical mistake bound of $O(\frac{1}{\gamma^2})$ can be further improved to $O(\frac{1}{\sqrt{\gamma}})$ through quantum means, where $\gamma$ denotes the margin. Such improvements are achieved through the application of quantum amplitude amplification to the version space interpretation of the perceptron model.
I. INTRODUCTION
Quantum computation is an emerging technology that utilizes quantum effects to achieve significant, and in some cases exponential, speed-ups of algorithms over their classical counterparts. The growing importance of machine learning has in recent years led to a host of studies that investigate the promise of quantum computers for machine learning [1, 2, 12, 13, 17, 21-23].
While a number of important quantum speedups have been found, the majority of these speedups are due to replacing a classical subroutine with an equivalent albeit faster quantum algorithm. The true potential of quantum algorithms may therefore remain underexploited, since quantum algorithms have been constrained to follow the same methodology behind traditional machine learning methods [2, 7, 22]. Here we consider an alternate approach: we devise a new machine learning algorithm that is tailored to the speedups that quantum computers can provide.
We illustrate our approach by focusing on perceptron training [18]. The perceptron is a fundamental building block for various machine learning models, including neural networks and support vector machines [20]. Unlike many other machine learning algorithms, tight bounds are known for the computational and statistical complexity of traditional perceptron training. Consequently, we are able to rigorously show different performance improvements that stem from either using quantum computers to improve traditional perceptron training or from devising a new form of perceptron training that aligns with the capabilities of quantum computers.
We provide two quantum approaches to perceptron training. The first approach focuses on the computational aspect of the problem, and the proposed method quadratically reduces the scaling of the complexity of training with respect to the number of training vectors. The second algorithm focuses on statistical efficiency. In particular, we use the mistake bounds for traditional perceptron training methods and ask if quantum computation lends any advantages. To this end, we propose an algorithm that quadratically improves the scaling of the training algorithm with respect to the margin between the classes in the training data. The latter algorithm combines quantum amplitude amplification with the version space interpretation of the perceptron learning problem. Our approaches showcase the trade-offs that one can consider in developing quantum algorithms, and the ultimate advantages of performing learning tasks on a quantum computer.
The rest of the paper is organized as follows: we first cover the background on perceptrons, version space and Grover's search. We then present our two quantum algorithms and provide analysis of their computational and statistical efficiency before concluding.
A. Perceptrons and Version Space
Given a set of N separable training examples {φ_1, . . ., φ_N} ⊂ R^D with corresponding labels {y_1, . . ., y_N}, y_i ∈ {+1, −1}, the goal of perceptron learning is to recover a hyperplane w that perfectly classifies the training set [18]. Formally, we want w such that y_i · w^T φ_i > 0 for all i. There are various simple online algorithms that start with a random initialization of the hyperplane and make updates as they encounter more and more data [8, 11, 18, 19]; however, the rule that we consider for online perceptron training is, upon misclassifying a vector (φ, y), w ← w + yφ.
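As a reference point for the quantum variants developed later, here is a minimal sketch of the classical online update rule just described. The data layout and function name are ours for illustration, not from the paper:

```python
import numpy as np

def train_perceptron(phis, ys, max_epochs=100):
    """Online perceptron: on a misclassified (phi, y), update w <- w + y*phi."""
    w = np.zeros(phis.shape[1])
    for _ in range(max_epochs):
        mistakes = 0
        for phi, y in zip(phis, ys):
            if y * (w @ phi) <= 0:   # misclassified (or on the boundary)
                w += y * phi
                mistakes += 1
        if mistakes == 0:            # separating hyperplane found
            return w
    return w
```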
A remarkable feature of the perceptron model is that upper bounds exist for the number of updates that need to be made during this training procedure. In particular, if the training data is composed of unit vectors, φ_j ∈ R^D, that are separated by a margin of γ, then there are perceptron training algorithms that make at most O(1/γ²) mistakes [16], independent of the dimension of the training vectors. Similar bounds also exist when the data is not separable [6] and also for other generalizations of perceptron training [8, 11, 19]. Note that in the worst case, the algorithm will need to look at all points in the training set at least once; consequently the computational complexity will be O(N).

FIG. 1: Version space and feature space views of classification. This figure is from [14].
Our goal is to explore whether quantum procedures can provide improvements both in terms of computational complexity (that is, better than O(N)) and statistical efficiency (improving upon O(1/γ²)). Instead of solely applying quantum constructs to the feature space, we also consider the version space interpretation of perceptrons, which leads to the improved scaling with γ.
Formally, version space is defined as the set of all possible hyperplanes that perfectly separate the data: VS := {w | y_i · w^T φ_i > 0 for all i}. Given a training set, the traditional representation is to depict data as points in the feature space and use hyperplanes to depict the classifiers. However, there exists a dual representation where the hyperplanes are depicted as points and the data points are represented as hyperplanes that induce constraints on the feasible set of classifiers. Figure 1, which is borrowed from [14], illustrates the version space interpretation of perceptrons. Given three labeled data points in a 2D space, the dual space illustrates the set of normalized hyperplanes as a yellow ball with unit radius. The third dimension corresponds to the weights that multiply the two dimensions of the input data and the bias term. The planes represent the constraints imposed by observing the labeled data, as every labeled datum renders one half of the space infeasible. The version space is then the intersection of all the half-spaces that are valid. Naturally, classifiers including SVMs [20] and Bayes point machines [10] lie in the version space.
We note that there are quantum constructs such as Grover's search and amplitude amplification which provide non-trivial speedups for the search task. This is the main reason why we resort to the version space interpretation. We can use this formalism to simply pose the problem of determining the separating hyperplane as a search problem in the dual space. For example, given a set of candidate hyperplanes, our problem reduces to searching amongst the sample set for a classifier that will successfully classify the entire set. Therefore training the perceptron is equivalent to finding any feasible point in the version space. We describe these quantum constructs in detail below.
B. Grover's Search
Both quantum approaches introduced in this work and their corresponding speed-ups stem from a quantum subroutine called Grover's search [4, 9], which is a special case of a more general method referred to as amplitude amplification [5]. Rather than sampling from a probability distribution until a given marked element is found, the Grover search algorithm draws only one sample and then uses quantum operations to modify the distribution from which it sampled. The probability distribution is rotated, or more accurately the quantum state that yields the distribution is rotated, into one whose probability is sharply concentrated on the marked element. Once a sharply peaked distribution is identified, the marked item can be found using just one sample. In general, if the probability of finding such an element is known to be a then amplitude amplification requires O(√(1/a)) operations to find the marked item with certainty. While Grover's search is a quantum subroutine, it can in fact be understood using only geometric arguments. The only notions from quantum mechanics used are those of the quantum state vector and that of Born's rule (measurement).

FIG. 2: A geometric description of the action of U_grover on an initial state vector ψ.

A quantum state vector is a complex unit vector whose components have magnitudes that are equal to the square-roots of the probabilities. In particular, if v is a quantum state vector and p is the corresponding probability distribution then p = v • v̄ (equivalently, p_j = |v_j|²), where the unit column vector v, called the quantum state vector, sits in the vector space C^n, • is the Hadamard (pointwise) product, and v̄ = (v†)^T with † the complex conjugate transpose. A quantum state can be measured such that, if we have a quantum state vector v and a basis vector w, then the probability of measuring w is |⟨v, w⟩|², where ⟨•, •⟩ denotes the inner product. One of the main differences between quantum and classical distributions is that the probability distribution resulting from measurement depends strongly on the basis in which the vector is measured. This basis dependence of measurement is the root of many of the differences between quantum and classical probability theory and also gives rise to many celebrated results in the foundations of quantum mechanics, such as Bell's theorem [3]. At first glance, introducing the quantum state vector v may not seem to provide any advantages over working with p for the purposes of sampling. More careful consideration reveals that the fact that v is complex valued allows transformations on v to be performed that cannot be performed on p. In particular, we can reflect the quantum state vector about any axis, whereas we cannot do the same to p without violating its positivity. Grover's search, in fact, is a cunning way to perform a series of reflections on v to bias p towards the marked state we wish to find. While such reflections may not make sense from a classical perspective, quantum computers can be used to realize them efficiently.
The key feature of a quantum computer is that it permits any unitary transformation to be performed on the unit vector v, within arbitrarily small approximation error. We define the initial quantum state vector to be ψ and define P to be a projection matrix onto a set of configurations that we want to find. In particular, if we define ν_good to be the set of all items that we want the quantum algorithm to find then P = ∑_{v ∈ ν_good} v v†. Here being able to apply P does not imply that ν_good is known. Instead, it implies that a subroutine exists that checks whether ψ ∈ ν_good. The fact that P is implemented by a linear transformation of the state vector also allows it to be simultaneously applied to exponentially many basis vectors: P ∑_{j=1}^N a_j v_j = ∑_{j=1}^N a_j P v_j. These two features allow a single application of 𝟙 − 2P to be efficiently applied, assuming membership in ν_good can be efficiently tested, even though ψ is a sum of exponentially many basis vectors.
In order to perform the search algorithm we need to implement two unitary operations: U_init = 2ψψ† − 𝟙 and U_targ = 𝟙 − 2P. The operators U_init and U_targ can be interpreted geometrically as reflections within a two-dimensional space spanned by the vectors ψ and Pψ. If we assume that Pψ ≠ 0 and Pψ ≠ ψ then these two reflection operations can be used to rotate ψ in the space span(ψ, Pψ). Specifically this rotation is U_grover = U_init U_targ. Its action is illustrated in Figure 2. The angle between the vector ψ and Pψ/‖Pψ‖ is π/2 − θ_a, where θ_a := sin⁻¹(|⟨ψ, Pψ/‖Pψ‖⟩|). It then follows from elementary geometry and the rule for computing the probability distribution from a quantum state (known as Born's rule) that after j iterations of Grover's algorithm the probability of measuring a desirable outcome is p(v ∈ ν_good | j) = sin²((2j + 1)θ_a). It is then easy to see that if θ_a ≪ 1 and a probability of success greater than 1/4 is desired then j ∈ O(1/θ_a) suffices to find a marked outcome. This is quadratically faster than is possible from statistical sampling, which requires O(1/θ_a²) samples on average.
As an example, if the initial success probability is 1/4 then θ_a = sin⁻¹(1/2) = π/6. Therefore if we take j = 1 then p(v ∈ ν_good | j) = 1. As a result a desirable outcome can be found after only 3 quantum operations, whereas 4 samples from the initial distribution would be needed on average to find a marked outcome if quantum methods were not used.
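The rotation formula above is easy to check numerically. The following sketch (ours, not from the paper; it will also be reused in a later sketch) evaluates p(v ∈ ν_good | j) = sin²((2j + 1)θ_a), reproduces the 1/4 example above, and illustrates the quadratic speed-up for a small success probability:

```python
import numpy as np

def grover_success_prob(a, j):
    """Probability of measuring a marked item after j Grover iterations,
    given initial success probability a."""
    theta = np.arcsin(np.sqrt(a))
    return np.sin((2 * j + 1) * theta) ** 2

print(grover_success_prob(1 / 4, j=1))            # -> 1.0, as in the text's example

a_small = 1e-4                                    # a rare marked item
theta_small = np.arcsin(np.sqrt(a_small))
j_opt = int(np.floor(np.pi / (4 * theta_small)))  # ~78 iterations vs ~10,000 samples
print(j_opt, grover_success_prob(a_small, j_opt)) # high success probability
```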
If, on the other hand, the success probability were 1/2 then θ_a = π/4 and sin²((2j + 1)π/4) = 1/2 for all j. This problem can be easily addressed by doing something that would not make any sense classically: purposefully lowering the success probability to 1/4 by introducing a new event w = (v, u), where we define w to be a good state if v is good and an independent variable u ∼ Bern(1/2) is 0. The independence assumption means that the probability that both conditions are satisfied is 1/4, and hence a good v can be found with certainty by applying amplitude amplification on w. More generally, if θ_a is known then this trick can be applied to make θ_a → π/(2[2j + 1]) (for positive integer j), which makes the search procedure deterministic.
On the other hand, if θ_a is not known then it isn't clear how j should be chosen to make the success probability greater than 1/4. Fortunately, methods are known to deal with such issues [4, 5]. The simplest one exploits the fact that the average of p over a range of j = 0, . . ., M − 1 can be easily computed: (1/M) ∑_{j=0}^{M−1} sin²((2j + 1)θ_a) = 1/2 − sin(4Mθ_a)/(4M sin(2θ_a)). If M ≥ M_0 := 1/sin(2θ_a) then it is straightforward to see that this average is at least 1/4. The average probability is therefore guaranteed to be at least 1/4 if j is drawn uniformly from {0, . . ., M − 1} with M ≥ M_0. If a lower bound on θ_a is known, then an appropriate value of M can be computed and a good sample can be drawn.
If no lower bound on θ_a is known then a marked element can nonetheless be found with high probability by exponential searching. Exponential searching involves, at step i, taking M = c^i for some c ∈ (1, 2). After a logarithmic number of applications of amplitude amplification, M ≥ M_0 is attained with high probability, after which the average success probability is known to be bounded below by 1/4 and the algorithm will succeed with high probability in a constant number of attempts. Thus the quadratic speedup holds even if the success probability is not known a priori.
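A classical simulation of this exponential search strategy, following the scheme of [4], might look as follows. This is a sketch under our own naming, and grover_success_prob is the function from the previous sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def exponential_grover_search(a, c=1.5, max_rounds=40):
    """Simulate amplitude amplification with unknown theta_a:
    at round i, draw j uniformly from {0, ..., ceil(c**i) - 1}."""
    queries = 0
    for i in range(max_rounds):
        M = int(np.ceil(c ** i))
        j = rng.integers(M)
        queries += j + 1        # j Grover iterations plus one check of the outcome
        if rng.random() < grover_success_prob(a, j):
            return queries      # marked element measured
    return None

print(exponential_grover_search(a=1e-4))  # typically ~O(1/sqrt(a)) total queries
```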
III. ONLINE QUANTUM PERCEPTRON
Now that we have discussed Grover's search, we turn our attention to applying it to speed up online perceptron training. In order to do so, we first need to define the quantum model that we wish to use as our quantum analogue of perceptron training. While there are many ways of defining such a model, the following approach is perhaps the most direct. Although the traditional feature space perceptron training algorithm is online [16], meaning that the training examples are provided to it one at a time in a streaming fashion, we deviate from this model slightly by instead requiring that the algorithm be fed training examples that are, in effect, sampled uniformly from the training set. This is a slightly weaker model, as it allows for the possibility that some training examples will be drawn multiple times. However, the ability to draw quantum states that are in a uniform superposition over all vectors in the training set enables quantum computing to provide advantages over classical methods that use either access model.
We assume without loss of generality that the training set consists of N unit vectors, φ_1, . . ., φ_N. We then define Φ_1, . . ., Φ_N to be the basis vectors whose indices each coincide with a (B + 1)-bit representation of the corresponding (φ_j, y_j), where y_j ∈ {−1, 1} is the class assigned to φ_j, and let Φ_0 be a fixed unit vector that is chosen to represent a blank memory register.
We introduce the vectors Φ_j to make it clear that the quantum state vectors used to represent training vectors do not live in the same vector space as the training vectors themselves. We choose the quantum state vectors here to occupy a larger space than the training vectors because the Heisenberg uncertainty principle makes it much more difficult for a quantum computer to compute the class that the perceptron assigns to a training vector in such cases.
For example, the training vector (φ_j, y_j) ≡ ([0, 0, 1, 0]^T, 1) can be encoded as an unsigned integer 00101 ≡ 5, which in turn can be represented by the unit vector Φ = [0, 0, 0, 0, 0, 1]^T. More generally, if φ_j ∈ R^D were a vector of floating point numbers then a similar vector could be constructed by concatenating the binary representations of the D floating point numbers that comprise it with (y_j + 1)/2 and expressing the bit string as an unsigned integer, Q. The integer can then be expressed as a unit vector Φ: [Φ]_q = δ_{q,Q}. While encoding the training data as an exponentially long vector is inefficient in a classical computer, it is not in a quantum computer because of the quantum computer's innate ability to store and manipulate exponentially large quantum state vectors.

[Algorithm 1 (online quantum perceptron), fragment: measure Ψ, assume the outcome is u_q; (φ, y) ← U_c(q); if f_w(φ, y) = 1 then return w ← w + yφ; otherwise repeat; return w.]
Any machine learning algorithm, be it quantum or classical, needs to have a mechanism to access the training data. We assume that the data is accessed via an oracle that not only accesses the training data but also determines whether the data is misclassified. To clarify, let {u_j : j = 1 : N} be an orthonormal basis of quantum state vectors that serve as addresses for the training vectors in the database. Given an input address for the training datum, the unitary operations U and U† allow the quantum computer to access the corresponding vector. Specifically, for all j, U[u_j ⊗ Φ_0] = u_j ⊗ Φ_j and U†[u_j ⊗ Φ_j] = u_j ⊗ Φ_0. Given an input address vector u_j, the former corresponds to a database access and the latter inverts the database access.
Note that because U and U† are linear operators we have that U ∑_{j=1}^N u_j ⊗ Φ_0 = ∑_j u_j ⊗ Φ_j. A quantum computer can therefore access each training vector simultaneously using a single operation, while only requiring enough memory to store one of the Φ_j. The resultant vector is often called in the physics literature a quantum superposition of states, and this feature of linear transformations is referred to as quantum parallelism within quantum computing.
The next ingredient that we need is a method to test if the perceptron correctly assigns a training vector addressed by a particular u_j. This process can be pictured as being performed by a unitary transformation that flips the sign of any basis vector that is misclassified. By linearity, a single application of this process flips the sign of any component of the quantum state vector that coincides with a misclassified training vector. It therefore is no more expensive than testing if a given training vector is misclassified in a classical setting. We denote the operator, which depends on the perceptron weights w, by F_w and require that F_w(u_j ⊗ Φ_j) = (−1)^{f_w(φ_j)}(u_j ⊗ Φ_j), where f_w(φ_j) is a Boolean function that is 1 if and only if the perceptron with weights w misclassifies training vector φ_j. Since the classification step involves computing the dot-products of finite size vectors, this process is efficient given that the Φ_j are efficiently computable. F_w is easy to implement in the quantum computer using a multiply controlled phase gate and a quantum implementation of the perceptron classification algorithm, f_w. Classifying the data based on the phases (the minus signs) output by F_w naturally leads to a very memory efficient training algorithm, because only one training vector is ever stored in memory during the implementation of F_w. We can then use F_w to perform Grover's search algorithm, by taking U_targ = F_w and U_init = 2ΨΨ† − 𝟙 with Ψ := (1/√N) ∑_{j=1}^N u_j, to seek out training vectors that the current perceptron model misclassifies. This leads to a quadratic reduction in the number of times that the training vectors need to be accessed by F_w or its classical analogue.
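To make the construction concrete, here is a small classical statevector simulation, our own illustration rather than code from the paper, of using F_w as the Grover oracle to find training vectors that the current weights misclassify:

```python
import numpy as np

def find_misclassified(phis, ys, w, iters):
    """Simulate Grover's search with U_targ = F_w and U_init = 2*Psi*Psi^T - I."""
    N = len(phis)
    f = np.array([1 if y * (w @ phi) <= 0 else 0 for phi, y in zip(phis, ys)])
    psi = np.full(N, 1 / np.sqrt(N))      # uniform superposition Psi over addresses
    state = psi.copy()
    for _ in range(iters):
        state = np.where(f == 1, -state, state)   # F_w: phase flip on misclassified
        state = 2 * psi * (psi @ state) - state   # reflection about Psi
    probs = state ** 2
    probs /= probs.sum()                          # guard against round-off
    return np.random.default_rng(0).choice(N, p=probs)  # measure an address

# Example usage: j = find_misclassified(phis, ys, w, iters=int(np.pi/4 * np.sqrt(N)))
```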
In the classical setting, the natural object to query is slightly different. The oracle that is usually assumed in online algorithms takes the form U_c : Z → C^D, where U_c(j) = φ_j. We will assume that a similar function exists in both the classical and the quantum settings for simplicity. In both cases, we will consider the cost of a query to U_c to be proportional to the cost of a query to F_w. We use these operations in Algorithm 1 to implement a quantum search for training vectors that the perceptron misclassifies. This leads to a quadratic speedup relative to classical methods, as shown in the following theorem.
Theorem 1. Given a training set that consists of unit vectors Φ_1, . . ., Φ_N that are separated by a margin of γ in feature space, the number of applications of F_w needed to infer a perceptron model, w, such that P(∃ j : f_w(φ_j) = 1) ≤ ε using a quantum computer is N_quant ∈ O((√N/γ²) log(1/(εγ²))), whereas the number of queries to f_w needed in the classical setting, N_class, where the training vectors are found by sampling uniformly from the training data, is bounded by N_class ∈ O((N/γ²) log(1/(εγ²))). We assume in Theorem 1 that the training data in the classical case is accessed in a manner that is analogous to the sampling procedure used in the quantum setting. If instead the training data is supplied by a stream (as in the standard online model) then the upper bound changes to N_class ∈ O(N/γ²), because all N training vectors can be deterministically checked to see if they are correctly classified by the perceptron. A quantum advantage is therefore obtained if N ≫ log²(1/(εγ²)). In order to prove Theorem 1 we need two technical lemmas (proven in the appendix). The first bounds the complexity of the classical analogue to our training method: Lemma 1. Given only the ability to sample uniformly from the training vectors, the number of queries to f_w needed to find a training vector that the current perceptron model fails to classify correctly, or conclude that no such example exists, with probability 1 − εγ² is at most O(N log(1/(εγ²))).
The second proves the correctness of Algorithm 1 and bounds the complexity of the algorithm: Lemma 2. Assuming that the training vectors {φ_1, . . ., φ_N} are unit vectors and that they are drawn from two classes separated by a margin of γ in feature space, Algorithm 1 will either update the perceptron weights, or conclude that the current model provides a separating hyperplane between the two classes, using a number of queries to F_w that is bounded above by O(√N log(1/(εγ²))), with probability of failure at most εγ².
After stating these results, we can now provide the proof of Theorem 1.
Proof of Theorem 1. The upper bounds follow as direct consequences of Lemma 2 and Lemma 1. Novikoff's theorem [6, 16] states that the algorithms described in both lemmas must be applied at most 1/γ² times before finding the result. However, either the classical or the quantum algorithm may fail to find a misclassified vector at each of the O(1/γ²) steps. The union bound states that the probability that this happens is at most the sum of the respective probabilities in each step. These probabilities are constrained to be εγ², which means that the total probability of failing to correctly find a mistake is at most ε if both algorithms are repeated 1/γ² times (which is the worst case number of times that they need to be repeated).
The lower bound on the quantum query complexity follows from contradiction. Assume that there exists an algorithm that can train an arbitrary perceptron using o(√N) query operations. We want to show that unstructured search with one marked element can be expressed as a perceptron training problem. Let w be a known set of perceptron weights and assume that the perceptron only misclassifies one vector φ_1. Thus if perceptron training succeeds then the value of φ_1 can be extracted from the updated weights. This training problem is therefore equivalent to searching for a misclassified vector. Now let φ_j = [1 ⊕ F(j), F(j)]^T ⊗ χ_j, where χ_j is a unit vector that represents the bit string j and F(j) is a Boolean function. Assume that F(0) = 1 and F(j) = 0 if j ≠ 0, which is without loss of generality equivalent to Grover's problem [4, 9]. Now assume that φ_j is assigned to class 2F(j) − 1, and take w such that the resulting perceptron misclassifies φ_0 and no other vector in the training set (the explicit form of w is omitted here). Updating the weights then yields the misclassified vector, which in turn yields the value of j such that F(j) = 1, and therefore Grover's search reduces to perceptron training. Since Grover's search reduces to perceptron training in the case of one marked item, the lower bound of Ω(√N) queries for Grover's search [4] applies to perceptron training. Since we assumed that perceptron training requires o(√N) queries, this is a contradiction. Thus the true lower bound must be Ω(√N). We have assumed in the classical setting that the user only has access to the training vectors through an oracle that is promised to draw a uniform sample from {(φ_1, y_1), . . ., (φ_N, y_N)}. Since we are counting the number of queries to f_w, it is clear that in the worst possible case the training vector that the perceptron makes a mistake on can be the last unique value sampled from this list. Thus if the query complexity were o(N) there would be a contradiction; hence the query complexity is Ω(N) classically.
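To get a feel for these bounds, the following back-of-the-envelope sketch (our own, with illustrative parameter values and constants suppressed) compares the leading-order query counts from Theorem 1:

```python
import numpy as np

def query_bounds(N, gamma, eps):
    """Leading-order upper bounds from Theorem 1 (constants suppressed)."""
    log_term = np.log(1 / (eps * gamma**2))
    n_quant = np.sqrt(N) / gamma**2 * log_term
    n_class = N / gamma**2 * log_term
    return n_quant, n_class

for N in (10**3, 10**6, 10**9):
    nq, nc = query_bounds(N, gamma=0.1, eps=0.01)
    print(f"N={N:>10}: quantum ~{nq:.2e}, classical ~{nc:.2e}")
```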
IV. QUANTUM VERSION SPACE PERCEPTRON
The strategy for our quantum version space training algorithm is to pose the problem of determining a separating hyperplane as a search problem. Specifically, the idea is to first generate K sample hyperplanes w_1, . . ., w_K from a spherical Gaussian distribution N(0, 𝟙). Given a large enough K, we are guaranteed to have at least one hyperplane amongst the samples that would lie in the version space and perfectly separate the data. As discussed earlier, Grover's algorithm can provide a quadratic speedup over classical search; consequently the efficiency of the algorithm is determined by K. Theorem 2 provides insight on how to determine the number of hyperplanes to be sampled.
Theorem 2. Given a training set that consists of D-dimensional unit vectors Φ_1, . . ., Φ_N with labels y_1, . . ., y_N that are separated by a margin of γ in feature space, a D-dimensional vector w sampled from N(0, 𝟙) perfectly separates the data with probability Θ(γ).
The proof of this theorem is provided in the supplementary material. The consequence of Theorem 2, used below, is that the expected number of samples K required such that a separating hyperplane exists in the set only needs to scale as O(1/γ). Thus if amplitude amplification is used to boost the probability of finding a vector in the version space then the resulting quantum algorithm will need only O(1/√γ) quantum steps on average. Next we show how to use Grover's algorithm to search for a hyperplane that lies in the version space. Let us take K = 2^m, for positive integer m. Then, given sampled hyperplanes w_1, . . ., w_K, we let W_1, . . ., W_K be vectors that encode a binary representation of these random perceptron vectors. In analogy to Φ_0, we also define W_0 to be a vector that represents an empty data register. We define the unitary operator V to generate these weights given an address vector u_j via V[u_j ⊗ W_0] = u_j ⊗ W_j. In this context we can also think of the address vector, u_j, as representing a seed for a pseudo-random number generator that yields perceptron weights W_j. Also let us define the classical analogue of V to be V_c, which obeys V_c(j) = w_j. Now using V (and applying the Hadamard transform [15]) we can prepare the quantum state Ψ := (1/√K) ∑_{j=1}^K u_j ⊗ W_0, which corresponds to a uniform distribution over the randomly chosen w. Now that we have defined the initial state, Ψ, for Grover's search, we need to define an oracle that marks the vectors inside the version space. Let us define the operator F̂_{φ,y} via F̂_{φ,y}[u_j ⊗ W_0] = (−1)^{f_{w_j}(φ,y)}[u_j ⊗ W_0]. This unitary operation looks at an address vector, u_j, computes the corresponding perceptron model W_j, flips the sign of any component of the quantum state vector that is in the half-space in version space specified by φ, and then uncomputes W_j. This process can be realized using a quantum subroutine that computes f_w, an application of V and V†, and the application of a conditional phase gate (which is a fundamental quantum operation usually denoted Z) [15].
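Theorem 2 can be sanity-checked numerically. The sketch below is entirely ours: it estimates, for synthetic data with a prescribed margin about an assumed separator w* = (1, 0), the fraction of Gaussian-sampled hyperplanes that land in the version space:

```python
import numpy as np

rng = np.random.default_rng(1)

def version_space_fraction(phis, ys, trials=20000):
    """Monte Carlo estimate of P(w ~ N(0, I) separates the data)."""
    hits = 0
    for _ in range(trials):
        w = rng.standard_normal(phis.shape[1])
        if np.all(ys * (phis @ w) > 0):
            hits += 1
    return hits / trials

# Toy 2-D unit-vector data with margin >= gamma about w* = (1, 0):
gamma = 0.1
phis = rng.standard_normal((50, 2))
phis /= np.linalg.norm(phis, axis=1, keepdims=True)
phis = phis[np.abs(phis[:, 0]) >= gamma]   # enforce the margin
ys = np.sign(phis[:, 0])
print(version_space_fraction(phis, ys))    # expected to scale as Theta(gamma)
```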
[Algorithm 2 (quantum version space perceptron), fragment: measure Ψ, assume the outcome is u_q; w ← V_c(q); if f_w(φ_ℓ, y_ℓ) = 0 for all ℓ ∈ {1, . . ., N} then return w; otherwise repeat; return w = 0.]

The oracle F̂_{φ,y} does not allow us to directly use Grover's search to rotate a quantum state vector that is outside the version space towards the version space boundary, because it effectively only checks one of the half-space inequalities that define the version space. It can, however, be used to build an operation, Ĝ, that reflects about the version space: Ĝ[u_j ⊗ W_0] = (−1)^{1 + (f_{w_j}(φ_1, y_1) ∨ ··· ∨ f_{w_j}(φ_N, y_N))}[u_j ⊗ W_0]. The operation Ĝ can be implemented using 2N applications of F̂_{φ,y} as well as a sequence of O(N) elementary quantum gates; hence we cost a query to Ĝ as O(N) queries to F̂_{φ,y}. We use these components in Algorithm 2 to, in effect, amplify the margin between the two classes from γ to √γ.
We give the asymptotic scaling of this algorithm in the following theorem (see appendix for proof).
Theorem 3. Given a training set that consists of unit vectors Φ_1, . . ., Φ_N that are separated by a margin of γ in feature space, the number of queries to F̂_{φ,y} needed to infer, with probability at least 1 − ε, a perceptron model w such that w is in the version space using a quantum computer is N_quant ∈ O((N/√γ) log^{3/2}(1/ε)). Proof. The proof of the theorem follows directly from bounds on K and the validity of Algorithm 2. It is clear from previous discussions that Algorithm 2 carries out Grover's search, but instead of searching for a φ that is misclassified it instead searches for a w in version space. Its validity therefore follows by the exact same steps followed in the proof of Lemma 2, but with N = K. However, since the algorithm need not be repeated 1/γ² times in this context, we can replace γ with 1 in the proof. Thus if we wish to have a probability of failure of at most ε' then the number of queries made to Ĝ is in O(√K log(1/ε')). This also guarantees that if any of the K vectors are in the version space then the probability of failing to find that vector is at most ε'. Next, since one query to Ĝ is costed at N queries to F̂_{φ,y}, the query complexity (in units of queries to F̂_{φ,y}) becomes O(N√K log(1/ε')). The only thing that then remains is to bound the value of K needed. The probability of finding a vector in the version space is Θ(γ) from Theorem 2. This means that there exists α > 0 such that the probability of failing to find a vector in the version space K times is at most (1 − αγ)^K ≤ e^{−αγK}. Thus this probability is at most δ for K ≥ log(1/δ)/(αγ). It then suffices to pick K ∈ Θ((1/γ) log(1/δ)) for the algorithm. The union bound implies that the probability that either none of the vectors lie in the version space or that Grover's search fails to find such an element is at most ε' + δ ≤ ε. Thus it suffices to pick ε' ∈ Θ(ε) and δ ∈ Θ(ε) to ensure that the total probability is at most ε. Therefore the total number of queries made to F̂_{φ,y} is in O((N/√γ) log^{3/2}(1/ε)) as claimed. | 2016-02-15T20:45:35.000Z | 2016-02-15T00:00:00.000 | {
"year": 2016,
"sha1": "ec6b4c5c9ccc9b5e9614e8cff8f2f53c05466f02",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "8c1d4e58cd738426541369f61f8812625527589d",
"s2fieldsofstudy": [
"Computer Science",
"Physics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science",
"Physics"
]
} |
56009984 | pes2o/s2orc | v3-fos-license | Sensitivity of modelled sulfate aerosol and its radiative effect on climate to ocean DMS concentration and air–sea flux
Dimethylsulfide (DMS) is a well-known marine trace gas that is emitted from the ocean and subsequently oxidizes to sulfate in the atmosphere. Sulfate aerosols in the atmosphere have direct and indirect effects on the amount of solar radiation reaching the Earth's surface. Thus, as a potential source of sulfate, ocean efflux of DMS needs to be accounted for in climate studies. Seawater concentration of DMS is highly variable in space and time, which in turn leads to high spatial and temporal variability in ocean DMS emissions. Because of sparse sampling (in both space and time), large uncertainties remain regarding ocean DMS concentration. In this study, we use an atmospheric general circulation model with explicit aerosol chemistry (CanAM4.1) and several climatologies of surface ocean DMS concentration to assess uncertainties about the climate impact of ocean DMS efflux. Despite substantial variation in the spatial pattern and seasonal evolution of simulated DMS fluxes, the global-mean radiative effect of sulfate is approximately linearly proportional to the global-mean surface flux of DMS; the spatial and temporal distribution of ocean DMS efflux has only a minor effect on the global radiation budget. The effect of the spatial structure, however, generates statistically significant changes in the global-mean concentrations of some aerosol species. The effect of seasonality on the net radiative effect is larger than that of spatial distribution and is significant at global scale.
Introduction
The global shortwave radiation budget is influenced by sulfate aerosols in two ways: directly via scattering and indirectly through changes to the radiative properties of clouds (as sulfate droplets act as cloud condensation nuclei, CCN) (Charlson et al., 1987; Andreae and Crutzen, 1997). An important natural source of atmospheric sulfate is the oxidation of biogenic dimethylsulfide (DMS) which has outgassed from the ocean surface (Andreae and Raemdonck, 1983; Bates et al., 1992). Particular interest in the role of DMS in the atmospheric sulfur cycle arose following the hypothesis by Charlson et al. (1987) of a negative feedback on ocean surface temperature changes mediated by cloud albedo and phytoplankton productivity: the so-called "CLAW hypothesis". However, subsequent studies have suggested that the influence of DMS on CCN formation may be weak (Quinn and Bates, 2011; Woodhouse et al., 2010, 2013) and that the associated albedo changes are uncertain (Stevens and Feingold, 2009). Furthermore, a comprehensive understanding of the physical and biogeochemical processes that control the production of DMS and its removal from the ocean has not yet been established. The production and consumption of DMS in the water column involve a range of biotic and abiotic processes (Stefels et al., 2007). While outgassing of DMS from the ocean surface is of interest because of its climatic influence, it is a relatively minor term in the ocean DMS budget. Potentially as little as 1-10 % of ocean DMS production reaches the atmosphere (Malin et al., 1992; Bates et al., 1994). While some model experiments have found evidence of enhanced DMS fluxes under
global warming (Cameron-Smith et al., 2011; Gabric et al., 2004, 2005), others have suggested that the changes are weak (Bopp et al., 2003; Vallina et al., 2007) or might actually be negative (Kloster et al., 2007; Six et al., 2013). While the strength and character of the influence of DMS on global climate are uncertain, little work has been done to quantify the contribution of individual components of this uncertainty. The present study uses a comprehensive global atmospheric circulation model to quantify the uncertainty associated with surface concentration fields of DMS and air-sea flux parameterizations. Kettle et al. (1999) (K99) compiled a global DMS database for the development of DMS climatologies and of parameterizations for use in modelling studies (Halloran et al., 2010). However, spatial and temporal variations in DMS concentration are not well constrained by this database, because the number of available observations is still relatively small. There are large temporal and spatial variations in the sea surface concentration of DMS (Asher et al., 2011; Tortell, 2005; Tortell et al., 2011), and the current observational dataset provides only sparse information from wide expanses of the ocean. In the absence of measurements uniformly distributed in space and time to fully characterize the spatial and temporal variability of DMS, interpolation and extrapolation schemes are required to construct continuous, observationally based global fields of DMS concentration (Kettle et al., 1999; Lana et al., 2011). While the estimates generally indicate continuously elevated concentrations in tropical latitudes, in contrast to low winter and high summer concentrations in middle and high latitudes, these fields remain highly uncertain due to inadequate sampling. For example, observationally based climatologies such as those of K99 and Lana et al. (2011) (L10, released in 2010) show "bulls-eye" maxima that likely do not reflect the real distribution of DMS. The range of possible surface DMS fields increases when climatologies based on diagnostic or prognostic models are considered (Tesdal et al., 2016).
The parameterization of air-sea fluxes is also uncertain. Several different parameterizations of the piston velocity in terms of wind speed have been used in modelling studies (e.g., Liss and Merlivat, 1986; Wanninkhof, 1992; Nightingale et al., 2000), leading to substantially different flux fields for a given concentration field (Tesdal et al., 2016). Furthermore, it has been found that neglect of air-side resistance in the flux formulation (as is often done) can change estimates of fluxes by about 10 % (McGillis et al., 2000; Tesdal et al., 2016). The large differences in DMS sea surface concentration fields between different climatologies and in flux parameterizations can cause substantial variation in estimated fluxes (Tesdal et al., 2016).
An important question is how the uncertainty in fluxes translates into an uncertainty in the climate response. Although DMS fields show large differences in spatial pattern and seasonality, the differences in global- and annual-mean fluxes are considerably smaller. As well, the climatic significance of relatively small-scale concentration features remains uncertain, given the large-scale structure of the winds which drive the fluxes and the subsequent transport and oxidation to sulfate aerosol. Hereafter, net changes in the energy budget at the top of the atmosphere (TOA) due to changes in concentrations of DMS-derived sulfate will be referred to as the radiative effect, which is sometimes also referred to as radiative forcing.
Comprehensive atmospheric general circulation models (AGCMs) are the natural tool for assessing the uncertainty in the climatic influence of oceanic DMS fluxes. Using different DMS concentration fields as boundary conditions, the resulting changes in the atmospheric burden of sulfur species and the radiative effect can be assessed. Previous modelling studies have focused on the effect of DMS on aerosol, CCN and the TOA radiation budget by scaling a single DMS field (e.g., Gunson et al., 2006; Thomas et al., 2011). Using a coupled atmosphere-ocean model, Grandey and Wang (2015) estimated a radiative perturbation of −1.3 to −1.5 W m−2 when increasing global total DMS flux from 18 to 46 TgS yr−1. There has not been much discussion of the climatic effect of differences in the spatial and temporal structure of DMS flux (Woodhouse et al., 2013).
A recent study demonstrated substantial impacts on cloud properties and the radiation budget by using the updated L10 climatology versus the previous K00 (Mahajan et al., 2015). Even though L10 represents a substantial extension of K00, the spatial and temporal patterns are still fairly similar compared to most other available DMS reconstructions (Tesdal et al., 2016). However, the substitution of L10 for K00 results in increases in the aerosol effect on net TOA radiation of about 20 %. The potentially large sensitivity of the climate response to DMS climatology demonstrated by Mahajan et al. (2015) motivates the question of how important spatial and temporal variations are to atmospheric properties and radiative effects relative to changes in the global total DMS flux. This question is addressed in the present study using the fourth-generation Canadian Atmospheric Global Climate Model (CanAM4.1).
Previous simulations with the Canadian atmosphere model used ocean emissions of DMS calculated from one specific climatology (K99) and one gas flux parameterization, that of Liss and Merlivat (1986) (LM86). In this study, we assess the uncertainty in the climatic influence of DMS with simulations with CanAM4.1 using different surface concentration climatologies and flux parameterizations. As our baseline reference, we use the recently developed observationally based climatology of L10. These simulation results are compared to those obtained using three different climatologies: K99, an updated version of K99 from Kettle and Andreae (2000) (K00), and the empirical model of Anderson et al. (2001) (AN01). AN01 was shown by Tesdal et al. (2016) to produce global-mean DMS fluxes similar to those associated with the observationally based climatologies. To further assess the importance of spatial and temporal structure in the DMS concentration fields, simulations were carried out with the L10 climatology replaced with its spatial mean (retaining month-to-month changes) and with its annual mean (retaining spatial variability). Two flux parameterizations are considered: LM86 and that of Nightingale et al. (2000) (N00).

[Figure 1 caption: Sulfur cycle as represented in CanAM4.1, involving sulfate (SO4 2−), SO2, and DMS. SO2 is emitted from volcanos, fires, and anthropogenic sources. DMS is mainly emitted from the oceans, but there are also some terrestrial sources. DMS is oxidized to SO2 by OH during the day and by NO3 during the night. SO2 is oxidized to sulfate both within clouds and under clear-sky conditions. In-cloud oxidation of sulfur and wet deposition is treated separately for layer (stratiform) and convective clouds. For both types of clouds, oxidation occurs via ozone (O3) and hydrogen peroxide (H2O2). Oxidation rates depend on the pH of the cloud water, which depends on the concentrations of nitric acid (HNO3), ammonia (NH3), and carbon dioxide (CO2).]
Model description
All model simulations presented in this study were made with CanAM4.1, the atmosphere component of the Canadian Earth System Model. CanAM4.1 is a slightly newer version of CanAM4 (von Salzen et al., 2013) with improved diagnostic capabilities. Model dynamics are computed spectrally with a horizontal resolution of T63, equivalent to a 128 × 64 linear grid. The model has 49 layers in the vertical, extending from the surface to 1 hPa, with a spacing of about 100 m at the surface that increases monotonically at higher altitudes.
Figure 1 presents a schematic of the sulfur cycle and the radiative effects of sulfate aerosols as represented in CanAM4.1. The ocean efflux of DMS is a source of aerosols via oxidation to sulfur dioxide (SO2), which in turn is oxidized to form sulfate (SO4 2−). The air-sea gas transfer of DMS is calculated with wind speed from the model, while ice cover and sea surface temperature (SST) are specified using a climatological dataset from the Atmospheric Model Intercomparison Project (AMIP) (Hurrell et al., 2008). In addition to the ocean source, the model also accounts for DMS fluxes from the terrestrial biosphere using specified monthly mean fields (Spiro et al., 1992). Besides DMS, the model also includes additional terrestrial sources of sulfur to the atmosphere: monthly mean emissions of gas-phase SO2 from fires (i.e., biomass burning) and anthropogenic sources, as well as volcanic emissions (Dentener et al., 2006). Anthropogenic aerosol and aerosol precursor emissions are based on Representative Concentration Pathway (RCP) 4.5 from the fifth phase of the Coupled Model Intercomparison Project (CMIP5; Lamarque et al., 2010; Moss et al., 2010).
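The DMS → SO2 → SO4 2− chain described here can be illustrated with a toy box model. The sketch below is our simplification with made-up rate constants; CanAM4.1 itself resolves these processes in three dimensions with pH- and cloud-dependent rates:

```python
def sulfur_box_model(flux_dms, k_ox_dms=1/1.5, k_ox_so2=1/2.0,
                     k_dep_so2=1/5.0, k_dep_so4=1/4.0, days=30, dt=0.01):
    """Toy 0-D sulfur cycle: DMS -> SO2 -> SO4, with deposition sinks.
    Rates are illustrative inverse lifetimes (per day); flux in arbitrary units/day."""
    dms = so2 = so4 = 0.0
    for _ in range(int(days / dt)):
        d_dms = flux_dms - k_ox_dms * dms
        d_so2 = k_ox_dms * dms - (k_ox_so2 + k_dep_so2) * so2
        d_so4 = k_ox_so2 * so2 - k_dep_so4 * so4
        dms += d_dms * dt
        so2 += d_so2 * dt
        so4 += d_so4 * dt
    return dms, so2, so4

print(sulfur_box_model(flux_dms=1.0))  # near-steady-state burdens scale with the flux
```

In this linear toy system the sulfate burden is proportional to the DMS flux, which is consistent with the approximately linear flux-to-radiative-effect relationship reported in the abstract.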
Transport, dry and wet deposition, and chemical transformations of sulfur species are all accounted for in CanAM4.1 (von Salzen et al., 2013). DMS is oxidized to SO2 by hydroxyl radicals (OH) during daylight hours and by nitrate radicals (NO3) at night. Sulfate aerosol (SO4 2−) production is modelled by in-cloud and gas-phase (clear-sky) oxidation of SO2. In-cloud production is treated differently in layer and convective clouds. The presence of ozone (O3) and hydrogen peroxide (H2O2) as oxidants is a requirement for both types of clouds, and oxidation rates are modelled as pH-dependent (von Salzen et al., 2000). The in-cloud oxidation rate in deep convective clouds is calculated in proportion to the cloud fraction, which is determined based on Slingo (1987). As CanAM4.1 does not have a fully interactive chemical transport module, it uses specified oxidant concentrations (OH, NO3, O3, H2O2) from the Model for Ozone and Related Chemical Tracers (MOZART; Brasseur et al., 1998). Ammonia (NH3) and ammonium (NH4+) concentration fields are also specified (Dentener and Crutzen, 1994).
The removal of sulfate aerosol takes place through wet and dry deposition. The dry deposition flux of sulfate simply depends on the concentration within the model layer adjacent to the surface along with a defined dry deposition velocity (Lohmann et al., 1999). Wet deposition, as with the in-cloud oxidation outlined above, is treated separately for layer and convective clouds. Within convective clouds, scavenging is modelled as a function of precipitation (von Salzen et al., 2000). Wet deposition fluxes from in-cloud scavenging of aerosols in layer clouds depend on local rates of conversion of cloud water to rainwater (Croft et al., 2005). Scavenging by falling rain droplets beneath convective clouds is parameterized using a mean collection efficiency (Berge, 1993).
CanAM4.1 accounts for sulfate aerosol, organic carbon aerosol, black carbon, sea salt, and dust as separate species using a bulk aerosol scheme (Lohmann et al., 1999; Croft et al., 2005). In the CanAM4.1 version used in this study, the cloud droplet number concentration (CDNC) depends only on the local concentration of sulfate. The empirical parameterization of Dufresne et al. (2005) is used. This parameterization relates CDNC to the concentration of sulfate through an empirical power law (the coefficients are given in Dufresne et al., 2005), where CDNC is in number cm−3 and [SO4 2−] is the sulfate concentration in µg m−3. For this relationship, a lower bound on CDNC of 1 cm−3 is used.
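As an illustration, a minimal implementation of such a sulfate-CDNC relationship might look as follows. The coefficients a = 2.21 and b = 0.41 are the widely cited Boucher and Lohmann (1995) fit and are our assumption here, since the exact constants of the Dufresne et al. (2005) parameterization are not reproduced in the text:

```python
import numpy as np

def cdnc_from_sulfate(so4, a=2.21, b=0.41, cdnc_min=1.0):
    """Cloud droplet number concentration (cm^-3) from sulfate mass (ug m^-3),
    via an empirical power law with the lower bound described in the text."""
    so4 = np.maximum(so4, 1e-12)              # guard against log of zero
    cdnc = 10.0 ** (a + b * np.log10(so4))
    return np.maximum(cdnc, cdnc_min)

print(cdnc_from_sulfate(np.array([0.01, 0.1, 1.0, 5.0])))
```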
CanAM4.1 calculates the direct radiative effect of scattering by aerosols and the first indirect radiative effect, in which cloud optical properties are influenced by aerosol concentrations. Effects of aerosols on the conversion of cloud water to precipitation (second indirect effect) are not considered in the current version of CanAM4.1. Direct effect calculations account for scattering and absorption using Mie theory. These processes depend on aerosol mass and relative humidity: sulfate aerosols scatter radiation more efficiently at higher relative humidity as they swell in size to establish thermodynamic equilibrium according to Raoult's law. The overall efficiency of the scattering effect also varies with wavelength and aerosol concentration. The first indirect effect is computed by determining the effective radius of cloud droplets based on the relationship between sulfate aerosol and CDNC described above. Smaller droplets are more efficient at scattering solar radiation than larger droplets. Given the much greater cloud fraction of layer (stratiform) clouds compared to convective clouds, the indirect effect is only applied in layer clouds. Within each model grid cell, the cloud forcing is determined as the difference between the net radiative fluxes for all-sky and clear-sky conditions.
The bulk scheme considered in this study is simpler than approaches that consider aerosol microphysics in detail (e.g., Bellouin et al., 2013). However, based on the few available studies comparing results from different models with bulk and microphysics schemes, we do not see evidence of considerable improvement in radiative forcing estimates based on simulations with microphysics schemes relative to bulk schemes (Schulz et al., 2006; Koch et al., 2009; Quaas et al., 2009).
Description of the model experiments
A series of model experiments was conducted to investigate the effects of different sea surface DMS climatologies and gas transfer formulations on net TOA radiation and the atmospheric burdens of DMS, SO₂, and sulfate aerosol. These experiments are listed in Table 1. The surface concentration fields considered are the observationally derived K99, K00, and L10 and the empirical algorithm AN01, which computes DMS concentration from chlorophyll, nutrient concentrations, and solar irradiance (Anderson et al., 2001). Of the various diagnostic and prognostic models of DMS used in global models, AN01 was found to produce global-mean DMS fluxes closest to L10 (although the spatial structures of the fluxes differ considerably; Tesdal et al., 2016). We also consider simulations with the L10 climatology replaced by its spatial mean (but retaining the seasonal cycle) and with L10 replaced by its annual mean (retaining the spatial structure).
The L10 climatology is an update to the K99 and K00 climatologies, incorporating a larger set of DMS observations. By comparing these three climatologies, we can assess the consequences of using an updated DMS concentration climatology for air-sea fluxes and climate response (Mahajan et al., 2015), helping quantify the importance of continued improvements to estimates of the seawater DMS field. Furthermore, K99 and K00 have been used in a number of previous studies (e.g., Thomas et al., 2010, 2011; Woodhouse et al., 2010, 2013). By including K99 and K00, we allow for the comparison of our results with those of previous studies.
Because the wind speed and DMS concentration are correlated, the fluxes associated with the temporally invariant or spatially uniform concentration fields do not equal the global-mean flux associated with the spatially and temporally varying concentration. Because we wish to distinguish the direct climatic influence of spatial and temporal structure in DMS fluxes from that of the global-mean flux, the temporally or spatially uniform DMS concentration fields were rescaled to produce the same total flux as the reference simulation. The scaling factors were determined with offline calculations using ERA-Interim reanalysis wind, sea ice, and SST (Dee et al., 2011). For the temporally invariant run, a single scaling factor was determined, while for the spatially uniform case scaling factors were determined for each monthly field. Two additional simulations were conducted with spatial and temporal patterns given by climatologies other than L10 (K99 and AN01) but scaled to have the same global-mean flux as L10 (Table 1).
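As a concrete illustration of the rescaling step, the sketch below computes monthly scaling factors as described for the spatially uniform case; the array names and shapes are hypothetical, and the flux is approximated as F = k·C with the air-side term neglected.

```python
import numpy as np

def rescale_dms(dms_ref, dms_mod, k_w, cell_area, ocean_frac):
    """Rescale a modified (e.g., spatially uniform) DMS field so that each
    monthly global flux matches the reference field's flux.

    dms_ref, dms_mod, k_w : (12, nlat, nlon) monthly concentration and
    transfer-velocity fields; cell_area, ocean_frac : (nlat, nlon)."""
    w = cell_area * ocean_frac                      # ocean-area weights
    scaled = np.empty_like(dms_mod)
    for mth in range(12):
        flux_ref = np.nansum(k_w[mth] * dms_ref[mth] * w)
        flux_mod = np.nansum(k_w[mth] * dms_mod[mth] * w)
        scaled[mth] = dms_mod[mth] * (flux_ref / flux_mod)
    return scaled
```

For the temporally invariant case, the same calculation would be collapsed to a single factor over all twelve months.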
The DMS flux formulations considered are LM86 and N00 (Liss and Merlivat, 1986; Nightingale et al., 2000). For N00, we conducted simulations with and without air-side resistance (γₐ) accounted for in the flux formulation. A detailed discussion of different DMS concentration climatologies and flux formulations is presented in Tesdal et al. (2016).
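The two transfer-velocity formulas are short enough to state directly; the sketch below gives their standard published forms at a Schmidt number of 600. The Schmidt-number correction for DMS, sea-ice masking, and the air-side resistance term γₐ used in the paper's runs are all omitted here, and the unit-conversion helper is an illustrative assumption.

```python
import numpy as np

def k600_lm86(u10):
    """Liss & Merlivat (1986) piecewise-linear transfer velocity (cm/h)."""
    u10 = np.asarray(u10, dtype=float)
    return np.where(u10 <= 3.6, 0.17 * u10,
           np.where(u10 <= 13.0, 2.85 * u10 - 9.65,
                    5.9 * u10 - 49.3))

def k600_n00(u10):
    """Nightingale et al. (2000) quadratic transfer velocity (cm/h)."""
    u10 = np.asarray(u10, dtype=float)
    return 0.333 * u10 + 0.222 * u10 ** 2

def dms_flux(k_cm_per_h, c_w_nmol_per_L):
    """Sea-to-air DMS flux in umol m-2 d-1, neglecting the (usually small)
    atmospheric DMS concentration: F = k * C_w."""
    k_m_per_d = k_cm_per_h * 0.24      # cm/h -> m/day
    c_w_umol_per_m3 = c_w_nmol_per_L   # 1 nmol/L == 1 umol/m3
    return k_m_per_d * c_w_umol_per_m3
```

Both parameterizations grow with wind speed, but N00's quadratic form makes the flux markedly more sensitive to high winds, which is relevant to the wind-bias discussion below.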
The control simulation (L10 & N00 & γₐ) was carried out using the L10 DMS concentration field with the N00 wind parameterization scheme and accounting for air-side resistance (Nightingale et al., 2000; Tesdal et al., 2016). The L10 climatology was used for the control simulation as it is in closest agreement with the observational database, including observations made since it was developed (Tesdal et al., 2016).
All DMS concentration fields were prepared offline before model simulations were carried out. The AN01 climatology was constructed using observed chlorophyll, light, and nutrient fields (as outlined in Tesdal et al., 2016). Differences between the model runs result from differences in DMS concentration fields, flux parameterizations, and internal variability in the model. Other aspects of the model, such as oxidation pathways and cloud microphysics, are the same for all model experiments. In order to assess internal variability, an ensemble of three 5-year-long runs was produced for each model configuration. Each ensemble member uses the same model configuration, but with a different seed for the random number generator in the radiation code. Ensemble averages are statistically more robust estimates of the climate influence of DMS than any individual member of the ensemble. The spread among ensemble members indicates the magnitude of the response to changes in DMS fluxes relative to internal variability. All simulations are carried out for the period from January 2003 to December 2008, with the first year discarded as a spin-up period.
Assessment of simulated sulfate aerosol
The assessment of simulations of natural sulfate aerosol in the marine troposphere is challenging given a lack of chemical observations in remote regions of the ocean, where the contribution of DMS oxidation to sulfate aerosol concentration is most significant. Even where chemical measurements exist, the relative contribution of ocean DMS emissions to net sulfate production cannot be directly observed. In an attempt to assess model results for sulfate, observed sulfate concentrations from ship-based measurements were compiled from available datasets obtained from the NOAA PMEL Atmospheric Chemistry Data Server (http://saga.pmel.noaa.gov/data). Figure 2 shows a map of the cruise transects from which observational datasets were drawn. The datasets contain multiple types of sulfate, but only total non-sea-salt sulfate (nss-SO₄²⁻) was considered (the sum of all size fractions present).
For comparisons with these datasets, simulated sulfate concentrations from the control run were compared to the measurements, matching simulated and observed near-surface concentrations according to nearest location and month of the year. This yields a correlation of 0.57 between simulated and observed concentrations (Fig. 3). The mean simulated concentration in Fig. 3 of 0.96 ± 0.86 µg m⁻³ for all available data is lower than the corresponding observed value of 1.84 ± 2.31 µg m⁻³ (i.e., a model underestimate of 48 %).
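The matching procedure reduces to a nearest-grid-point, same-calendar-month lookup; a simplified sketch (ignoring longitude wraparound, with hypothetical array names) is:

```python
import numpy as np
from scipy.stats import pearsonr

def match_model_to_obs(obs_lat, obs_lon, obs_month, obs_val,
                       model_clim, model_lat, model_lon):
    """Pair each ship-based observation with the nearest model grid point
    for the same month; model_clim has shape (12, nlat, nlon)."""
    sim = np.empty(len(obs_val))
    for i, (lat, lon, mth) in enumerate(zip(obs_lat, obs_lon, obs_month)):
        j = np.abs(model_lat - lat).argmin()
        k = np.abs(model_lon - lon).argmin()
        sim[i] = model_clim[mth - 1, j, k]
    r, _ = pearsonr(sim, obs_val)
    return sim, r
```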
In an attempt to characterize the impact of anthropogenic pollution on the results, the data were grouped by latitude (Fig. 3a). Simulated and observed concentrations between the Equator and 50° N are relatively high, which can be partly explained by contributions of anthropogenic emissions to sulfate concentrations. Without these data, the agreement between mean simulated and measured concentrations improves noticeably (0.49 ± 0.13 µg m⁻³ in the model vs. 0.72 ± 0.58 µg m⁻³ in observations, an underestimate of 32 %; Fig. 3a).
To further characterize the impact of DMS emissions on atmospheric sulfate concentration, the fraction of sulfate from ocean DMS emissions was diagnosed based on model simulations with and without ocean DMS emissions. The diagnosed fraction from these simulations is in good overall agreement with results from Gondwe et al. (2003), which show a large influence of DMS emissions on near-surface sulfate concentration in the Southern Hemisphere and at high latitudes in the Northern Hemisphere. As shown in Fig. 3b, low sulfate concentrations tend to be associated with a high fraction of sulfur originating from DMS emissions. There is good agreement between mean simulated and observed concentrations where this fraction exceeds 40 % (0.48 ± 0.26 µg m⁻³ in the model versus 0.72 ± 0.55 µg m⁻³ in observations, Fig. 3a), with a Pearson correlation coefficient of 0.62.
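Operationally, the DMS-derived fraction follows from differencing the paired runs; a minimal sketch, assuming the two simulations differ only in whether ocean DMS emissions are switched on:

```python
import numpy as np

def dms_fraction(so4_with_dms, so4_no_dms):
    """Fraction of sulfate attributable to ocean DMS emissions."""
    total = np.maximum(so4_with_dms, 1e-12)   # avoid division by zero
    frac = (so4_with_dms - so4_no_dms) / total
    return np.clip(frac, 0.0, 1.0)            # clip sampling noise
```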
Comparisons in Fig. 3 are influenced by the use of climatological instead of actual emissions in the model. In addition, differences between climatological and actual meteorological situations and the relatively low spatial resolution of the model need to be considered when interpreting simulated sulfate concentrations. These factors may largely explain the lower variability in simulated concentrations compared to observations. However, even if local differences in spatial or temporal variability in sulfate concentrations exist, they are unlikely to greatly influence the global climate, based on an analysis of results with variable DMS emissions in Sect. 3.4.
In summary, the analysis of sulfate concentrations over the ocean confirms that a substantial fraction of the sulfate concentrations in observations and the model are related to emissions of sulfur from the ocean, with good overall agreement in regions that are most strongly affected by DMS emissions. This provides evidence of realistic simulations of atmospheric DMS sources and aerosol removal processes in the marine atmosphere. In addition, simulated relationships between sulfate aerosol concentrations and simulated cloud microphysical properties in the model agree well with relationships that are based on observed cloud properties over the ocean (Ma et al., 2010; von Salzen et al., 2013), which provides evidence for realistic responses of simulated radiative effects of sulfate aerosol to DMS emissions in the model.
Comparison between model and reanalysis flux estimates
Before analyzing the climatic influence of differences in DMS fluxes, we will compare the global- and annual-mean DMS fluxes in the different simulations to the fluxes calculated with the ERA-Interim reanalysis SST, sea ice, and wind speed fields over the same time period as the model simulations (Table 2). The global- and annual-mean flux is generally higher in the CanAM4.1 simulation than when reanalysis fields are used: it is 22-24 % larger with N00 (with or without air-side resistance) and 14 % larger with LM86. These differences must result primarily from differences in the wind fields, because SST and sea ice cover are specified in all simulations with AMIP boundary conditions and are very similar to the ERA-Interim fields. The winds are overall somewhat stronger in the model than in the reanalysis product: the annual-mean surface wind speed is 17 % higher on average in CanAM4.1. The frequency distribution and seasonality of the winds also differ slightly between the model and observations (not shown). Fluxes are particularly sensitive to high wind speeds, and slight changes in the wind distribution can be magnified in the DMS flux. Consistent with the results of Tesdal et al. (2016), the DMS flux calculated with the L10 DMS concentration field is higher than that calculated with K99 or K00, independent of which gas transfer formulation is used.
Because the DMS fluxes computed with the model wind fields differ substantially from those computed with reanalysis winds, we expect the simulated climatic influence of DMS to be biased. However, as our focus is on the sensitivity of the climatic influence of DMS to changes in DMS fluxes rather than the absolute strength of the effect, this model bias is not expected to affect our results.
Fluxes and atmospheric sulfur burdens
Changes in the DMS concentration climatology and the flux formulation result in substantial changes in the global-mean flux (including both ocean and terrestrial sources; Table 3). The change relative to the control simulation ranges from a 37 % reduction using K99 and LM86 to an 8 % increase when neglecting air-side resistance. The largest ensemble spread in DMS emissions among the simulations is less than 0.06 µmol m⁻² d⁻¹, which is negligible compared to the overall range of DMS emissions of the different model runs (3.15 µmol m⁻² d⁻¹). By construction, the difference from the reference simulation is negligible in the temporally invariant and spatially uniform simulations and in the simulations with the rescaled concentration fields K99* and AN01* (Table 1).
The magnitudes of the simulated sulfur sources, sinks, and atmospheric burdens are also presented in Table 3. The budgets of sulfur species are very close to equilibrium in all simulations (sources approximately equal sinks). The reduction in DMS emission for simulations using K99 relative to those using L10 results in a reduction in daytime oxidation by OH, while nighttime oxidation by NO₃ does not change much. In contrast, both daytime and nighttime oxidation rates are affected equally when L10 is replaced with K00. The responses of oxidation rates to changes in DMS concentration patterns likely result from the distribution of the oxidants OH and NO₃, which are specified in CanAM4.1.
The relationship between changes in the simulated atmospheric burdens of sulfur species and changes in DMS flux is approximately linear (Table 3). The largest changes occur in the DMS burden: the difference of ∼0.1 TgS (61 %) between L10 & N00 and K99 & LM86 is close to the difference in DMS flux (68 %) between these two simulations. The relative changes of SO₂ and sulfate burdens are smaller than those of DMS because of the large background values for SO₂ and sulfate from other sources (anthropogenic and volcanic).
The relationships of DMS, SO₂, and SO₄²⁻ burdens with DMS flux are illustrated in Fig. 4. There are two distinct groups of simulations, depending on which DMS field is used. Regression lines computed for simulations with L10 (blue) and with K99 (purple) are almost parallel, indicating an approximately constant offset in burden between the K99 and L10 simulations. The sensitivity of atmospheric burdens of sulfur species to the spatial and temporal structure of DMS concentration is much smaller than to the global-mean flux.
Relationship between radiative effects, sulfate, and DMS
To a first approximation, the relationship between TOA net radiation and the global-mean flux of DMS is linear (Fig. 5).
Deviations from that linear relationship can be attributed to differences in spatial and temporal distribution among the DMS fields or to internal variability. As with the atmospheric SO₂ and SO₄²⁻ burdens, the relationship between the radiation fields and DMS flux can be divided into two classes of simulations, using K99 or L10. The response of the radiative effect to differences in flux is smaller for K99-based simulations than for those based on L10. K99 generally has a larger radiative effect relative to the better-constrained L10, and this difference increases with increasing flux (i.e., with increasing wind speed and/or gas exchange coefficient).
Figure 5 shows that there is considerable variation in TOA net radiation depending on the strength of the ocean DMS source. Across the experiments, the range is 0.67 W m⁻² (among ensemble means). The sensitivity to the air-sea flux parameterization is particularly strong: the difference between LM86 and N00 in average flux (and thus in radiative effect) is greater than the difference among the DMS concentration fields considered. The difference in net radiation between K00 and L10 is 0.33 W m⁻², very similar to the value of 0.3 W m⁻² estimated by Mahajan et al. (2015).
The spread of the individual ensemble members in Fig. 5 indicates the uncertainty in the radiation budget resulting from model internal variability over the 5-year period of the simulations, independent of the boundary conditions. This spread is on average 0.12 W m⁻² (ranging from 0.04 to 0.19 W m⁻²), compared to a range of 0.67 W m⁻² across experiments.
The DMS concentration fields considered in this analysis are a relatively narrow subset of the observationally based or modelled climatologies considered in Tesdal et al. (2016). Use of some of these very different concentration fields would be expected to result in substantially different effects on the atmospheric radiation budget. A linear regression model constructed from the subset of simulations using N00 & γₐ was used to obtain an estimate of the possible range of radiative perturbations corresponding to the entire range of DMS climatologies (Fig. 6). Offline reanalysis-based DMS fluxes were used in the estimation of DMS radiative effects for those climatologies for which DMS fluxes from CanAM4.1 were not available. The range of perturbations to net TOA radiation across the different DMS climatologies with the same flux formulation is 0.75 W m⁻², with L10 at the lower end since it produces the largest flux.
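The extrapolation behind Fig. 6 is a one-dimensional least-squares fit; a minimal sketch with hypothetical inputs:

```python
import numpy as np

def estimate_dtoa(flux_n00, dtoa_n00, flux_other):
    """Fit the change in net TOA radiation against global-mean DMS flux
    using the N00 & gamma_a simulations only, then evaluate the fit at
    offline (ERA-Interim) fluxes of climatologies not simulated directly."""
    slope, intercept = np.polyfit(np.asarray(flux_n00, float),
                                  np.asarray(dtoa_n00, float), deg=1)
    return slope * np.asarray(flux_other, float) + intercept
```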
A similar estimate can be made for variation among the available piston velocity schemes, constructing the linear regression with model runs that have the same DMS field but different air-sea flux parameterizations (not shown). Using L10 as the DMS field and considering flux estimates obtained using N00, LM86, and a third parameterization of Wanninkhof (1992), the corresponding range in net TOA radiation is 1.04 W m⁻².

Irrespective of differences in the spatial and temporal patterns of the DMS concentration field, the relationship between net TOA radiation and atmospheric sulfate burden is close to linear (Fig. 7). There is no evidence of distinct relationships depending on use of the L10 or K99 climatologies, as was seen in the relationship between DMS flux and TOA radiation. Evidently, those differences are associated with spatial and temporal differences in the oxidation of DMS to sulfate (Fig. 4).
Global means of individual radiation fields (shortwave cloud forcing, TOA clear-sky reflected flux, and TOA total reflected flux) are plotted against global-mean DMS flux and global-mean sulfate burden in Fig. 8. TOA clear-sky reflected flux represents the direct aerosol radiative effect, while shortwave cloud forcing represents the first indirect effect. In these simulations, the direct and first indirect effects are approximately equally sensitive to changes in DMS flux (or sulfate burden). The response of all-sky TOA total reflected flux to changes in global-mean DMS flux and atmospheric sulfate burden (Fig. 8) is similar to the total radiative effect (Figs. 5 and 7), and the range in total reflected flux is as large as that of total forcing among the different simulations. As the total radiative effect includes variation in longwave radiation while the reflected solar flux accounts only for shortwave radiation, our results confirm that the radiative effects associated with DMS are primarily in the shortwave.
The internal variability in either cloud forcing or clear-sky reflected flux is generally larger than in the total reflected flux (which is approximately the sum of the first two) (Fig. 8). While the overall radiative effect of DMS fluxes in the model is estimated with reasonable precision with these experiments, larger ensembles or longer integrations may be required to achieve the same level of precision for the different components.
The effect of DMS spatial and temporal structure on aerosols and radiative effects
Suppressing either the spatial or temporal variability in ocean DMS concentration changes the concentration of sulfate aerosol and its effect on the TOA radiation budget (Figs. 5 and 8). While these changes are small, the ensemble spreads indicate that in some cases they are robust. The changes in global-mean DMS flux, oxidation rates, sulfur species burdens, and components of the TOA radiation budget between the control run and model runs with temporally invariant and spatially uniform DMS fields are shown in Fig. 9. For comparison, the changes from a simulation using L10 and N00 but neglecting the air resistance term are also shown.

The global-mean burden of a species in the atmosphere over a given time period is determined by the efficiency of internal sources and sinks, and indirectly by the transport. The time-mean state is effectively in equilibrium (Table 3), so global sulfur budgets are a simple sum over all internal sources, sinks, and fluxes between sulfur species. However, a balanced budget can be achieved with different values of the individual source and sink terms. The global rates of individual flux or sink processes are determined by the spatial and temporal relationships among the chemical species involved. By construction, the spatially uniform and temporally invariant DMS concentration fields yield global-mean DMS fluxes that differ only slightly from the control simulation. However, there are substantial and statistically robust changes in the sink strengths (Fig. 9). The absence of spatial or temporal structure in the DMS concentration fields has different effects during day and night: daytime oxidation of DMS by OH is decreased in these simulations, balanced by an increase in nighttime oxidation by NO₃. The simulation without air resistance shows an increase in global-mean DMS flux compared to the control of about 0.40 µmol m⁻² d⁻¹, which is balanced by an increase in both oxidation rates.
The atmospheric burdens of all sulfur species increase significantly in the simulation without air resistance. Of the simulations with spatially or temporally averaged DMS concentrations, only the spatially uniform case shows a change in oxidation patterns that produces statistically robust increases in the burdens of both DMS and SO₂. Interestingly, the increase in SO₂ in these simulations is associated with a decrease in SO₄²⁻. A similar decrease in SO₄²⁻ burdens is also seen in the simulations with temporally invariant DMS concentration fields, although neither the DMS nor SO₂ burdens show statistically robust changes.
For all three of these sets of simulations, there is a much stronger response in the clear-sky reflected flux than in the shortwave cloud forcing. The changes in total reflected solar flux are statistically robust both for the simulations with temporally invariant surface concentration of DMS and for those without air-side resistance. In all of these simulations, the effect on TOA cloud forcing is not significantly different from zero.
Taken together, these results indicate that the spatial and temporal distribution of DMS flux affects the aerosol direct radiative effect primarily by influencing the efficiency of oxidation of DMS to SO₂ and SO₄²⁻. The effect on reflected solar fluxes of changes in SO₄²⁻ is larger for simulations with temporally invariant DMS concentration than for spatially uniform concentration, despite the change in SO₄²⁻ being larger in the latter case. This will be addressed in more detail in Sect. 4.
Note that the magnitudes (but not the signs) of the changes in SO₄²⁻ resulting from suppressing spatial or temporal structure in the DMS concentration fields are the same as those from neglecting the air-side resistance term in the DMS flux formulation. Air-side resistance is often ignored in calculations of air-sea DMS fluxes. Our results indicate that the effect of neglecting this term is comparable in magnitude to the seemingly more dramatic change of entirely eliminating temporal or spatial structure in the DMS concentration fields.
Discussion
The results presented in Sect. 3 demonstrate that, while the magnitude of the spatial and temporal mean DMS flux is linearly related to the mean DMS burden to a good first approximation, there are deviations from this linear relationship. A simple expression for the global spatial- and temporal-mean DMS budget is

d⟨DMS⟩/dt = ⟨E⟩ − ⟨O × DMS⟩,

where the angle brackets denote global space- and time-averages, E is the emission field, and O is the oxidation rate field (per unit of DMS concentration). At equilibrium, the rate of change vanishes, and

⟨E⟩ = ⟨O × DMS⟩.

The upper three panels of Fig. 9 present simulated values of ⟨E⟩ and ⟨O × DMS⟩ for three sets of simulations (spatially uniform, temporally invariant, and neglecting air-side resistance).
If O and DMS did not depend on space or time, then we could decompose ⟨O × DMS⟩ as ⟨O⟩ × ⟨DMS⟩, and an exactly linear relation between global-mean flux and global-mean atmospheric burden would exist. The deviations from this relationship evident in Fig. 4 result from spatial and temporal correlations between the distribution of DMS and its sinks. Similarly, deviations from a purely linear relationship between the spatial- and temporal-mean atmospheric burdens of SO₂ and SO₄²⁻ result from correlations between SO₂ and its oxidation rate. Atmospheric transport contributes to spatial and temporal correlations between atmospheric distributions of sulfur species and their sources and sinks. For example, some DMS emitted in the tropics will be transported by convective processes to the upper troposphere, where sinks are weaker. Similarly, the lifetime of sulfate transported to the upper troposphere is extended, as its primary sink is in low- to mid-tropospheric clouds. A detailed analysis of the spatial relationships among these processes is outside the scope of the present study.
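For illustration, the correlation argument can be written out exactly using the standard decomposition of the mean of a product (an identity, not an additional assumption from the paper; primes denote deviations from the space-time mean):

```latex
\langle O \cdot \mathrm{DMS} \rangle
  = \langle O \rangle \, \langle \mathrm{DMS} \rangle
  + \langle O' \, \mathrm{DMS}' \rangle ,
\qquad
O' = O - \langle O \rangle , \quad
\mathrm{DMS}' = \mathrm{DMS} - \langle \mathrm{DMS} \rangle .
```

The covariance term ⟨O′ DMS′⟩ is exactly the deviation from the purely linear flux-burden relationship discussed above; the analogous term for SO₂ and its oxidation rate accounts for the deviations in the SO₂ and sulfate burdens.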
As with the atmospheric burdens of sulfur species, the response of net TOA radiation to changes in mean DMS flux is linear to a first approximation (Fig. 5), with some scatter around this relationship resulting from model internal variability and differences in the spatial and temporal structure of the DMS fluxes. Our model simulations allow us to assess the relative sizes of three sources of uncertainty in the radiative effect of DMS emissions: (1) uncertainty in total emissions, (2) uncertainty in the spatial/temporal pattern of fluxes, and (3) internal variability. Figure 5 indicates that, for the range of DMS climatologies and flux formulations considered, the size of the first of these uncertainties is about 0.7 W m⁻², while the second and third are smaller (about 0.2 W m⁻²). While internal variability and uncertainty in spatial and temporal structure in DMS flux contribute to the overall uncertainty in the net radiation budget, our study shows that uncertainty in the global-mean flux is the dominant contributor. Uncertainties associated with model representations of atmospheric chemistry, cloud microphysics, and radiative transfer cannot be assessed using a single AGCM. Comparison of the magnitudes of these uncertainties to those we have considered is an interesting direction for future study.
The reduction in the radiative effect of DMS emissions resulting from suppressing the seasonal cycle in L10 is larger than that resulting from suppressing spatial variability (Fig. 5). This is consistent with the fact that DMS concentrations in L10 tend to be higher in summer (when changes to shortwave fluxes are particularly important) at mid- to high latitudes. As atmospheric residence times of sulfur species are on the order of a few to several days and their transport is primarily zonal, DMS emitted in the mid- or high latitudes will have its strongest effect in these latitude bands, and there will be a spatial correlation between DMS-derived sulfate aerosol concentration and aerosol radiative effects. These results suggest that, for global-mean responses, resolving the correct seasonal distribution of DMS fluxes is more important than resolving the spatial distribution, although neither is as important as the global-mean flux. However, we also note that the ensembles of the spatially uniform and temporally invariant simulations slightly overlap, and it is possible that the difference between the two is a result of internal variability.
The fact that the deviations of TOA net radiation and reflected solar flux are similar in absolute value (Figs. 7 and 8) demonstrates that the climate response to DMS is dominated by shortwave fluxes. A weak response in the longwave may exist, but comparison of Figs. 7 and 8 suggests that it is smaller than internal variability. Furthermore, the strongly linear relationship between the atmospheric burden of SO₄²⁻ and the total radiative effect (Fig. 7) demonstrates that simulated reductions in net TOA radiation are a direct response to increases in the atmospheric sulfate burden. Further statements about the causal relationship between changes in DMS flux and the global radiative effect are difficult because of the broad range of processes and feedbacks in the model.
Rough estimates of the range in net TOA radiation given the possible range in DMS flux are 0.75 W m⁻² (among the range of available DMS fields) and 1.04 W m⁻² (among all different flux parameterizations considered). Contrasting these uncertainties with the well-constrained radiative forcing of +1.82 ± 0.19 W m⁻² due to the increase in atmospheric CO₂ from 1750 to 2011 (Myhre et al., 2013) emphasizes the degree of uncertainty in DMS-derived aerosol forcing and the need to better constrain this quantity. Previous studies have found a relatively weak link between DMS fluxes and climate (e.g., Woodhouse et al., 2010; Kloster et al., 2007; Vallina et al., 2007). However, these studies may have a "weak effect" bias because of a low bias in DMS flux (Fig. 6), which would translate into a low bias in the radiative effect of DMS. The results of the current study show that there is a systematic deviation from the control run of up to 0.75 W m⁻² for some DMS models and algorithms.
The uncertainty in DMS concentration estimates contributes substantially to uncertainties in present-day aerosol radiative forcing (Dentener et al., 2006; Carslaw et al., 2013), defined as the difference between present-day and preindustrial radiative fluxes due to anthropogenic changes in the atmospheric aerosol burden (Myhre et al., 2013). While observationally based estimates can be made for the present day, these are not available for preindustrial conditions. Current understanding of the natural sulfur cycle indicates that most preindustrial sulfate aerosol originated from DMS and volcanic emissions (Carslaw et al., 2013). Uncertainty in estimates of these fluxes, which must be based on models in the absence of direct observations, will impact forcing estimates. The large uncertainty in DMS flux translates into uncertainty in preindustrial aerosol concentration, regardless of whether one assumes that DMS sources have remained the same as or similar to preindustrial conditions. As DMS emissions may have changed from the preindustrial state, using fluxes estimated from present-day conditions increases this uncertainty.
Our estimates of the climatic effects of DMS obtained using CanAM4.1 could be biased due to idealized assumptions about aerosol processes and the absence of a process-based representation of the indirect aerosol effect. These biases would be expected to be especially pronounced in the parts of the atmosphere least affected by anthropogenic emissions, such as the Southern Hemisphere. It is possible that sensitivity to the spatial and temporal distribution of DMS would change with an improved representation of cloud microphysics. Furthermore, instead of using specified atmospheric concentrations of the oxidants, a comprehensive tropospheric chemistry scheme could be used to achieve more realistic modelling of atmospheric DMS oxidation.
This study did not investigate climate sensitivity to DMS flux in a coupled model; all model simulations were atmosphere-only. These experiments could be repeated in a coupled model setting, which would allow for the feedbacks central to the CLAW hypothesis. Furthermore, a coupled model setup would allow for the evaluation of prognostic DMS modules, as opposed to using specified (climatological) fields. Such an analysis would allow exploration of the sensitivity of radiation and climate to specific parameters or mechanisms within the prognostic DMS formulations and would distinguish this from sensitivity to other aspects of the model. Two caveats regarding such an analysis are that DMS concentration fields resulting from existing prognostic models differ substantially from observations (Tesdal et al., 2016) and that internal variability would increase due to the longer timescales of oceanic variability.
The focus of our analysis has been the influence of DMS emissions on sulfate aerosol and its radiative effects, which can be used to estimate changes in global energy budgets. These measures provide a simple basis for quantifying aspects of the climate response to imposed forcing agents, especially global-mean temperature, and hence are widely used in the scientific community (Myhre et al., 2013). We did not attempt to analyze regional and sub-annual variations in the radiative effects of aerosols, which are more difficult to analyze in a statistically robust way because internal variability is much larger relative to the forced response on regional scales. In general, regional relationships between aerosols, radiation, and temperature response can be complex and nonlinear. While these relationships are beyond the scope of the present study, we consider our estimates of global-scale effects to be robust and relatively insensitive to regional-scale processes.
Conclusions
Despite more than 30 years of concerted research on the issue, fundamental uncertainties remain regarding the spatial and temporal structure of surface ocean DMS concentrations and how best to model DMS fluxes (Tesdal et al., 2016). In this study, we have used the atmospheric component of a state-of-the-art global climate model (CanAM4.1) to assess the uncertainty in atmospheric sulfur burdens and their effect on the planetary radiation budget associated with uncertainties in DMS concentration fields and air-sea flux formulations. Our results indicate that, to a first approximation, the global spatial and temporal mean effect of DMS on net TOA radiation scales linearly with the spatial and temporal mean flux. Spatial and temporal correlations between model sulfur species (DMS, SO₂, and SO₄²⁻) and their sinks result in deviations from this linear relationship that exceed internal variability, but these deviations are relatively small. This result suggests that on a global scale, it is most important to have an accurate estimate of the global DMS flux, while resolving the exact spatial and temporal distribution is of less importance. Neglect of air-side resistance in the flux parameterization was shown to have a comparable (or even larger) effect on net TOA radiation than suppressing spatial or temporal structure in the DMS concentration field. From the perspective of global climate, accurate formulation of surface fluxes is as or more important than accurate representation of sea surface DMS concentrations.
A comprehensive view of the global-scale uncertainties is important for understanding the role of DMS in the climate system. Uncertainty about the global DMS concentration translates into uncertainty about global estimates of DMS flux, aerosol burdens, and their radiative effects. These uncertainties limit the confidence with which we can make statements about the importance of DMS in the climate system, and leave open the possibility that changes in DMS fluxes could alter future climate in as-yet-unexpected ways.
Figure 1 .
Figure 1. Schematic representation of the sulfur cycle and radiative effects of sulfate aerosols in CanAM4.1. In each grid cell, the model accounts for sources and sinks of sulfate aerosol (SO₄²⁻), SO₂, and DMS. SO₂ is emitted from volcanoes, fires, and anthropogenic sources. DMS is mainly emitted from the oceans, but there are also some terrestrial sources. DMS is oxidized to SO₂ by OH during the day and by NO₃ during the night. SO₂ is oxidized to sulfate both within clouds and under clear-sky conditions. In-cloud oxidation of sulfur and wet deposition are treated separately for layer (stratiform) and convective clouds. For both types of clouds, oxidation occurs via ozone (O₃) and hydrogen peroxide (H₂O₂). Oxidation rates depend on the pH of the cloud water, which depends on the concentrations of nitric acid (HNO₃), ammonia (NH₃), and carbon dioxide (CO₂).
Figure 3 .
Figure 3. Correlation between observed and modelled nss-SO₄²⁻ using the control run (L10 & N00 & γₐ). Modelled nss-SO₄²⁻ values are derived from mean results of the simulation during the time period from 2004 to 2008. Observed concentrations are from 1995 to 2012 and matched to the nearest grid point and month of the model results. The comparison is done for (a) different latitude bands and (b) different fractions of sulfate produced by DMS emissions.
Figure 4 .
Figure 4. Scatter plots of atmospheric burdens of sulfur species vs. other species and ocean DMS emissions. Each dot represents the global and annual mean of an individual ensemble member from the model experiments summarized in Table 1. Crosses indicate ensemble means for simulations with the original, unscaled flux fields. The regression lines were computed from the individual ensemble members corresponding to these unmodified DMS flux fields. Open circles denote ensemble means of simulations with seasonality (red) or spatial pattern (yellow) removed. Open diamonds denote ensemble averages of simulations with DMS fields different from L10 but scaled to give the same global-mean flux. The first column shows atmospheric burdens of sulfur species (SO₄²⁻, SO₂, DMS) against ocean emission of DMS, the second column shows atmospheric burdens of SO₄²⁻ and SO₂ against the DMS burden, and the third column shows the atmospheric burden of SO₄²⁻ against the SO₂ burden.
Figure 5 .
Figure 5. Change in global- and annual-mean net radiation at the top of the atmosphere between model experiments and the control experiment relative to the global- and annual-mean flux of ocean DMS. Crosses represent the ensemble means of simulations with unmodified DMS fields. Open circles denote ensemble means of simulations with seasonality (red) or spatial pattern (yellow) removed. Open diamonds denote simulations with DMS fields different from L10 but scaled to give the same global-mean flux. Individual ensemble members for each experiment are shown as dots of the same colour. Only data from individual runs with unmodified K99 (purple) or L10 (blue) DMS emissions are used for the corresponding regression lines.
Figure 6 .
Figure 6. Estimated difference in global- and annual-mean net radiation at the top of the atmosphere (TOA) between the different climatologies considered in Tesdal et al. (2016) and the control simulation, plotted against the global ocean efflux of DMS. DMS fluxes were computed offline using fields from the ERA-Interim reanalysis with N00 & γₐ as the air-sea transfer scheme (large filled circles). A linear regression for these runs only (grey dashed line) is used to derive estimates for the other experiments (small red dots on the regression line). The different climatologies considered are listed in Table 4.
Figure 7 .
Figure 7. Deviation in global- and annual-mean net radiation at TOA from the control plotted against the global- and annual-mean atmospheric burden of SO₄²⁻. Symbols are as in Fig. 5. All data points are used for the linear regression (grey dashed line).
Figure 8 .
Figure 8. Deviation in global means of cloud forcing (upper panels), clear-sky reflected irradiance (middle panels), and total reflected irradiance (lower panels) at TOA from the control plotted against global- and annual-mean ocean DMS flux (left) and global- and annual-mean atmospheric burden of SO₄²⁻ (right). Symbols are as in Fig. 5.
Figure 9 .
Figure 9. Absolute differences in global-mean flux, oxidation rates, sulfur burdens, and radiation between the control run and model runs with seasonally invariant (red) or spatially uniform (yellow) DMS concentration, as well as the L10 & N00 model experiment (blue). Fluxes and oxidation rates of DMS are shown in the upper panels. The global-mean DMS flux includes terrestrial sources to ensure mass balance. The only sink for DMS is oxidation to SO₂, which is shown for both oxidation pathways (oxidation by OH and NO₃ radicals). Absolute changes in the atmospheric sulfur burdens of DMS, SO₂, and SO₄²⁻ are shown in the middle panels. Bottom panels show absolute changes in cloud forcing, clear-sky reflected, and total reflected shortwave flux. Total reflected flux is the sum of cloud and clear-sky reflected flux. To derive the error estimates, all treatments (control, temporally invariant, spatially uniform, and no air-side resistance) were pooled after their separate means were removed; error bars are ±2 standard deviations of the pooled data (n = 12). Statistical significance is determined by comparing the mean differences among the model runs with the corresponding error bars.
Table 1 .
List of model sensitivity experiments.
Table 2 .
Ocean emissions of DMS from CanAM4.1 and offline calculations with reanalysis fields. DMS flux is derived for the time period of the model simulations (January 2004 to December 2008). Quantities in parentheses are percentage changes relative to the reference run (L10 & N00 & γₐ). | 2018-12-07T16:07:34.343Z | 2016-09-01T00:00:00.000 | {
"year": 2016,
"sha1": "aa85a06ca1ff49840c1c8f136c50551102a95e2f",
"oa_license": "CCBY",
"oa_url": "https://www.atmos-chem-phys.net/16/10847/2016/acp-16-10847-2016.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "0f899f4286cf3345342a141285d34f55a9f6edbf",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
44130147 | pes2o/s2orc | v3-fos-license | Re-assessing ZNF331 as a DNA methylation biomarker for colorectal cancer
We have previously shown that aberrant promoter methylation of ZNF331 is a potential biomarker for colorectal cancer detection with high sensitivity (71%) and specificity (98%). This finding was recently confirmed by others, and it was additionally suggested that promoter methylation of ZNF331 was an independent prognostic biomarker for colorectal cancer (n = 146). In the current study, our initial colorectal cancer sample series was extended to include a total of 423 cancer tissue samples. Aberrant promoter methylation was found in 71% of the samples, thus repeatedly suggesting the biomarker potential of ZNF331 for detection of colorectal cancer. Furthermore, multivariate Cox’s analysis indicated a trend towards inferior overall survival for colorectal cancer patients with aberrant methylation of ZNF331.
Introduction
In cancer, increased promoter DNA methylation is a frequent event commonly occurring early in tumor development. Methylated DNA sequences may serve as tumor biomarkers in liquid biopsies for detecting cancer and for predicting patient prognosis [1].
In 2011, we filed a patent application covering methylation of ZNF331 (Zinc finger protein 331) as a biomarker for gastrointestinal cancers [2]. ZNF331 was shown by Yu et al. to be inactivated by promoter methylation in gastric cancer, providing the cancer cells with increased growth potential and invasiveness [3]. We also found a high methylation frequency in patients with gastric cancer (80%) and to a lesser extent in patients with pancreatic cancer (40%) and cholangiocarcinomas (26%) [4]. Most importantly, we reported high sensitivity (71%) and specificity (98%) for ZNF331 methylation in colorectal cancer early 2015, strengthening the potential of ZNF331 as a biomarker for colorectal cancer detection [4]. Interestingly, these findings were recently confirmed, further supporting the biomarker potential of ZNF331 in colorectal cancer [5]. The same study also suggested aberrant promoter methylation of ZNF331 as an independent prognostic marker for colorectal cancer, analyzing 146 samples [5]. In the present study, we analyzed the effect of ZNF331 methylation on overall survival, including altogether 423 colorectal tissue samples.
Results and discussion
Methylation of the ZNF331 promoter was found in 71% (301/423) of the patients with colorectal cancer and was associated with localization in the right colon, microsatellite instability (MSI), and the BRAF V600E mutation. Furthermore, ZNF331 methylation was strongly associated with CpG island methylator phenotype (CIMP) and MLH1 methylation. Wang et al. [5] further reported that patients with ZNF331 promoter methylation had a worse prognosis than patients with unmethylated promoters. Our results were in accordance with their study, although statistical significance was not reached in the multivariate Cox regression model adjusting for age and stage (HR = 1.44 (0.97-2.14), P = 0.069; Table 2). The univariate model is presented in Fig. 1 (P = 0.143).
In conclusion, in an extended series of colorectal cancer samples, we have shown the potential of promoter methylation of ZNF331 as a biomarker for colorectal cancer detection. We have further provided data indicating a trend towards poorer prognosis for patients with ZNF331 methylation.
Colorectal cancer tissue samples
This study included 423 colorectal cancer tissue samples. Fifty-nine of the samples were obtained from several different hospitals in the southeast region of Norway in the period 1987-1989 (Oslo 3 series; described in [6]), and 364 of the samples were obtained from patients undergoing surgical resection at the Oslo University Hospital-Aker from 2005 to 2011 (Oslo 2 series; described in [7,8]). Survival data were available for 419 patients (Oslo 3, n = 59; Oslo 2, n = 360). DNA from cancer tissue samples was bisulfite treated using the EpiTect Bisulfite Kit (Qiagen), and the samples were purified using the QIAcube (Qiagen). Quantitative methylation-specific PCR (qMSP) was used to analyze the methylation of the ZNF331 promoter (NM_018555), with primer and probe sequences as reported earlier [4]. The method was performed as previously described [4,9], with the ALU-C4 element as a normalization control [10]. As described in ref. [4], samples with percent methylated reference (PMR) values ≥ 1 were considered methylated. Information about MSI, CIMP, MLH1 methylation, and BRAF mutation status was available from previous studies [11,12].
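As a concrete illustration of the scoring rule, the snippet below computes PMR values in the standard qMSP way — the target-to-ALU signal ratio in the sample, normalized to the same ratio in a fully methylated reference — and applies the PMR ≥ 1 cutoff. The use of the standard PMR formula and the function names are assumptions here, since the paper cites its methods elsewhere.

```python
def pmr(znf331_sample, alu_sample, znf331_ref, alu_ref):
    """Percent of methylated reference: (target/ALU) in the sample divided
    by (target/ALU) in a fully methylated reference, times 100."""
    return 100.0 * (znf331_sample / alu_sample) / (znf331_ref / alu_ref)

def is_methylated(pmr_value, cutoff=1.0):
    """Samples with PMR >= 1 were scored as methylated in this study."""
    return pmr_value >= cutoff

# Example with arbitrary qMSP quantities: PMR ~ 26.7 -> methylated.
print(is_methylated(pmr(12.0, 900.0, 50.0, 1000.0)))
```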
Statistical analyses
Associations between ZNF331 methylation and clinicopathological data were analyzed by Pearson chi-square or Fisher's exact tests. For all analyses, patients were divided into three age groups (< 60 years, 60-74 years, and ≥ 75 years). Breakpoints were chosen as previously described [11]. Overall survival was used as endpoint in the survival analyses and was calculated from time of surgery until death of any cause. Cases were censored at last follow-up. The univariate effect of ZNF331 on survival was modeled by the Kaplan-Meier method and compared using the log-rank test. A multivariate Cox's proportional hazard model was generated by a stepwise selection procedure (backward likelihood model) in order to identify a subset of relevant predictor variables from the set of available clinicopathological data (series, age, stage, gender, CIMP-, MSI-, BRAF-, and ZNF331 methylation status). Hazard ratios (HRs) and 95% confidence intervals (CIs) were derived from the model, and significance of the parameters was assessed using Wald's test. To evaluate the assumption of proportionality, a chi-square test was performed. A P value < 0.05 was considered statistically significant. The analyses were performed using IBM SPSS Statistics 21 and R version 3.4.1.
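The survival modelling described above can be reproduced in outline with open-source tools; the sketch below uses Python's lifelines package rather than the SPSS/R workflow reported in the study, and the file name, column names, and numeric coding of age group and stage are hypothetical.

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical cohort table: one row per patient with follow-up time,
# event indicator (1 = death from any cause), and covariates.
df = pd.read_csv("znf331_cohort.csv")
meth = df[df["znf331_methylated"] == 1]
unmeth = df[df["znf331_methylated"] == 0]

# Univariate analysis: Kaplan-Meier estimate and log-rank comparison.
km = KaplanMeierFitter()
km.fit(meth["time"], event_observed=meth["event"], label="ZNF331 methylated")
result = logrank_test(meth["time"], unmeth["time"],
                      event_observed_A=meth["event"],
                      event_observed_B=unmeth["event"])
print(f"log-rank P = {result.p_value:.3f}")

# Multivariate Cox model adjusting for age group and stage (ordinal codes).
cph = CoxPHFitter()
cph.fit(df[["time", "event", "znf331_methylated", "age_group", "stage"]],
        duration_col="time", event_col="event")
cph.print_summary()  # hazard ratios with 95% CIs and Wald P values
```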
CIMP: CpG island methylator phenotype
Funding This work was supported by grants from the South-Eastern Norway Regional Health Authority (project number 2016071 to G.E. Lind, funding HM Vedeld as a postdoc).
Availability of data and materials
The datasets generated and analyzed during the current study are not publicly available but are available from the corresponding author on reasonable request.
Authors' contributions GEL contributed to the conception and design. HMV, AN, and RAL contributed to the acquisition of data. HMV, AN, RAL, and GEL contributed to the analyses and interpretation of the data. HMV contributed to the drafting of the manuscript. All authors were involved in the revision of the manuscript and have approved the final version.
Fig. 1 Effect of ZNF331 promoter methylation on overall survival modeled by the Kaplan-Meier method and compared using the log-rank test | 2018-05-30T01:58:42.383Z | 2018-05-29T00:00:00.000 | {
"year": 2018,
"sha1": "1f2493a5661a8973b0e3b97488a81bd5ada540fa",
"oa_license": "CCBY",
"oa_url": "https://clinicalepigeneticsjournal.biomedcentral.com/track/pdf/10.1186/s13148-018-0503-2",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "805eac746fdd109268dc1684124acd21a24d812a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235458350 | pes2o/s2orc | v3-fos-license | Multi-Modal Prototype Learning for Interpretable Multivariable Time Series Classification
Multivariable time series classification problems are increasing in prevalence and complexity in a variety of domains, such as biology and finance. While deep learning methods are an effective tool for these problems, they often lack interpretability. In this work, we propose a novel modular prototype learning framework for multivariable time series classification. In the first stage of our framework, encoders extract features from each variable independently. Prototype layers identify single-variable prototypes in the resulting feature spaces. The next stage of our framework represents the multivariable time series sample points in terms of their similarity to these single-variable prototypes. This results in an inherently interpretable representation of multivariable patterns, on which prototype learning is applied to extract representative examples i.e. multivariable prototypes. Our framework is thus able to explicitly identify both informative patterns in the individual variables, as well as the relationships between the variables. We validate our framework on a simulated dataset with embedded patterns, as well as a real human activity recognition problem. Our framework attains comparable or superior classification performance to existing time series classification methods on these tasks. On the simulated dataset, we find that our model returns interpretations consistent with the embedded patterns. Moreover, the interpretations learned on the activity recognition dataset align with domain knowledge.
Introduction
Multivariable time series classification aims to leverage the measurement of multiple variables over a period of time to assign data points to classes. As such, these problems arise naturally across various domains, including biology [1], finance [2], and activity recognition [1]. Along with their increased prevalence, multivariable time series classification datasets have increased in size and complexity, making their analysis challenging [3]. Internet-of-Things technologies, for example, allow for the connection of multiple sensing devices [4]. These device networks generate datasets and classification tasks involving a large number of sensors, many with markedly different characteristics [5]. Similarly, developments in biosensing technologies have given rise to the multi-modal biosensing problem, where time series readings from a diverse set of biosensors are used to characterize the mental and physical state of a subject. Biosensing has been applied in sleep staging [6], emotion recognition [7,8], stress detection [9], attention studies [10,11], brain-computer interfaces [12,13], and wellness monitoring [14,15]. These problems predominantly arise in research settings, with the goal of deriving scientific knowledge from the data. As such, model interpretability is of central importance.
Due to its wide applicability, multivariable time series classification is an area of active research and a multiplicity of approaches exist. These approaches can be divided into deep-learning based methods and feature-based methods. Many feature-based time series classification methods utilize hand-crafted feature extraction techniques. For example, in [1], discrete features learned from the multivariable time series are used to construct a bag-of-patterns model for classification. Deep learning based methods do not rely on hand-crafted features, instead learning an end-to-end model. In [16] for example, a combination of recurrent and convolutional neural networks are proposed for the problem of EEG classification in a brain-computer interface setting. Deep learning techniques are an increasingly powerful tool for dealing with the rising complexity of multivariable time series classification problems. Although deep learning methods are able to learn complex relationships, it is difficult to reveal these relationships in a human-understandable way. This is primarily due to the high number of parameters of deep learning models, and the large space of functions that they could represent [17,18].
There are two major challenges in interpretable multivariable time series classification: heterogeneity in the variables and cross-variable patterns. Many multivariable time series classification datasets are comprised of variables with vastly different noise levels, time scales, and feature domains. For example, in the multi-modal bio-sensing problem, signals representing vastly different biological processes are combined into a single dataset. Moreover, in multivariable settings, it is generally necessary to combine information from multiple variables to arrive at a final classification. As a result, a successful interpretation framework must reveal both the relevant patterns at the single-variable level and the patterns present across variables. This paper introduces a modular, multi-layer prototype learning framework for interpretable classification on multivariable time series data. Our framework provides model-based interpretability, as it is designed to produce classifications transparently [19]. In the framework, single-variable encoders are trained by contrastive learning to extract meaningful features from each variable. Next, single-variable prototype layers represent sample points in terms of their similarity to a set of learned single-variable prototypes. The single-variable prototype similarities are concatenated over all variables, yielding an inherently interpretable representation of the interactions between variable level patterns and a second prototype layer identifies meaningful rules in the form of multivariable prototypes. Taken together, the single-variable and multivariable prototypes show how patterns at the variable level combine together to characterize classes.
The contributions of this work are three fold. First, we present a novel interpretable method for multivariable time series classification which explicitly models both individual variable patterns as well as cross variable patterns. Second, we provide a step-wise training procedure for effectively training the multi-level prototype model. Finally, we design a synthetic dataset with heterogeneous variables and cross variable patterns to verify the validity of the interpretations, as well as demonstrating the applicability of our method on a real-world human activity recognition problem.
Related Literature
Multivariable time series classification has been studied extensively and various classification techniques have been proposed in different domains. In this section, we introduce some of these related works with a specific emphasis on interpretability. We divide these prior works into feature based approaches and neural network based approaches.
Feature Based Approaches Feature based methods generally rely on manual feature extraction methods and discretization to extract features from time series streams. An important class of these methods are Bag-of-Pattern (BOP) methods. BOP methods generate symbolic features from time series sub sequences and represent signals using a bag-of-words like symbol histogram. In [1], Symbolic Fourier Approximation (SFA) is applied to each dimension of a multivariable time series classification problem. The univariate symbolic representations are then combined to generate a multivariable Bag-of-Patterns representation. In [20], a random-forest learner is used to generate symbols for a multivariable bag-of-patterns method. Another important class of feature based approaches are Shapelet methods. These approaches involve extracting time series subsequences that are highly relevant for classification, called shapelets. [21] presents a more efficient method for extracting shapelets from large multivariable datasets by pruning similar shapelet candidates. In [22], multivariate shapelets are used for early classification of signals. Notably, shapelet based methods often have favourable interpretability properties, as the shapelets themselves are directly interpretable as characteristic patterns. However, both shapelet and discretization approaches can be slow and generate potentially very large feature spaces.
Neural Network Approaches Neural network based approaches harness the ability of neural networks to learn complex functions as a feature extraction technique. Some of these methods use neural networks in conjunction with discretization and feature extraction approaches. In [23], SFA features are fed through a neural network to project them into an encoded space, and prototype learning on the encoded space is used for the final classification. In [24], a recurrent neural network encoder produces feature vectors for a prototype learning classifier; the learned prototypes are used for interpretability purposes. [25] uses random dimension permutation with an attentional prototype network for multivariable time series classification but does not address interpretability. Finally, [26] learns interpretable multivariate shapelets in an embedded space over all dimensions of the multivariable classification problem. Our method builds upon this line of work through two levels of prototype learning, which allows more complex cross-variable interactions to be represented while explicitly modelling the informative variation in each variable.
Model
We propose a two-level encoder-prototype learning method for multivariable time series classification.
Problem Formulation Consider a multivariable time series classification task containing d dimensions and n time samples per dimension, and a dataset D = {(X^(i), y_i)}, where y_i ∈ Z denotes the label for training example i and X^(i) ∈ R^(n×d) is the matrix for time series sample point i. Each row of X^(i) is a time point and each column represents a variable. In particular, we can write X^(i) = [x^(i)_1, ..., x^(i)_d], where x^(i)_k ∈ R^n represents the time series corresponding to the k-th variable. The multivariable time series classification problem aims to learn a function f : R^(n×d) → Z, which maps instances of the multivariable time series to their correct class.

Figure 1: Schematic of the proposed framework. The time series corresponding to each variable are separated and input to their respective single-variable encoding layers. These encoded representations are compared to a set of learned prototypes in the single-variable prototype layer, generating a prototype similarity vector for each variable. A multivariable representation of the time series is created by concatenating the prototype similarity vectors from all the variables. The multivariable representation is then compared to a set of multivariable prototypes, yielding a multivariable prototype similarity vector. This multivariable prototype similarity vector is input to a fully connected softmax network to generate the final classification.
Single-Variable Encoder First, our method performs feature extraction independently for each variable. Feature extraction is accomplished by a set of single-variable encoder networks, each of which maps a particular variable's time series signal to an encoded vector representation. In this work, we utilize an LSTM encoder network for time series feature extraction; however, the choice of encoding network is entirely flexible and can be tuned using domain knowledge. Concretely, our model learns a set of encoder functions {E_1, ..., E_d}, E_k : R^n → R^(f_k), mapping raw time series data for variable k into a feature space of dimension f_k. We denote the encoding of variable k for some sample as e_k = E_k(x_k).

Single-Variable Prototype Matching Layer Following the single-variable encoding step, our method learns informative patterns in each variable and represents data points in terms of their similarity to these informative patterns. This is achieved by way of a single-variable prototype matching layer. We define a prototype matching layer for the k-th variable as a function P_k : R^(f_k) → R^(n_k), parametrized by a set of n_k prototype vectors {p^(k)_1, ..., p^(k)_(n_k)} in the feature space for variable k. P_k maps vectors from variable k's feature space to a vector of similarity scores with each of the prototype vectors. Explicitly, for the encoded vector e_k, we have P_k(e_k) = [sim(e_k, p^(k)_1), ..., sim(e_k, p^(k)_(n_k))], for a similarity function sim.

Multivariable Prototype Matching Layer Using the single-variable similarity vectors, our model constructs an inherently interpretable representation of a multivariable time series instance in terms of the learned single-variable prototypes. This is achieved by concatenating the prototype similarity vectors across all the variables, thus showing the combination of variable-level patterns. Formally, a multivariable time series sample point X ∈ R^(n×d) is represented as a multivariable representation vector m = P_1(e_1) ⊕ ... ⊕ P_d(e_d), where ⊕ denotes the concatenation operation. As a result of this construction, the multivariable representation can be viewed as a series of blocks corresponding to each variable. We introduce an additional prototype learning layer on the multivariable representation space in order to learn prototypical rules detailing the interactions between variable-level prototypes. The multivariable prototype matching layer, denoted P_M : R^N → R^(n_M) with N = Σ_k n_k, is parametrized by a set of n_M multivariable prototype vectors {m_1, ..., m_(n_M)} and outputs the similarity of the multivariable representation to each of them, P_M(m) = [sim(m_1, m), ..., sim(m_(n_M), m)]. We produce predictions by passing the multivariable prototype similarity vector through a fully connected neural network with softmax activation.
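A minimal PyTorch sketch of this architecture follows. The excerpt does not specify the similarity function sim or the exact encoder configuration, so the sketch assumes a Gaussian similarity exp(−‖e − p‖²), a common choice in prototype networks, and takes the LSTM's final hidden state as the encoding e_k; all class names and hyperparameter values are illustrative rather than the authors' code.

```python
import torch
import torch.nn as nn

class PrototypeLayer(nn.Module):
    """Similarity of an encoding to a set of learned prototype vectors."""
    def __init__(self, n_prototypes, dim):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, dim))

    def forward(self, e):                         # e: (batch, dim)
        d2 = torch.cdist(e, self.prototypes) ** 2  # (batch, n_prototypes)
        return torch.exp(-d2)                      # Gaussian similarity (assumed form)

class MultiLevelPrototypeNet(nn.Module):
    """Per-variable LSTM encoders -> single-variable prototype layers ->
    concatenated block representation -> multivariable prototypes -> head."""
    def __init__(self, d_vars=4, f_k=32, n_k=4, n_m=8, n_classes=4):
        super().__init__()
        self.encoders = nn.ModuleList(
            [nn.LSTM(input_size=1, hidden_size=f_k, batch_first=True)
             for _ in range(d_vars)])
        self.sv_protos = nn.ModuleList(
            [PrototypeLayer(n_k, f_k) for _ in range(d_vars)])
        self.mv_protos = PrototypeLayer(n_m, d_vars * n_k)
        self.head = nn.Linear(n_m, n_classes)     # softmax folded into CE loss

    def forward(self, X):                          # X: (batch, n_steps, d_vars)
        sims = []
        for k in range(len(self.encoders)):
            _, (h, _) = self.encoders[k](X[:, :, k:k + 1])
            sims.append(self.sv_protos[k](h[-1]))  # e_k = final hidden state
        m = torch.cat(sims, dim=1)                 # block-structured representation
        return self.head(self.mv_protos(m))        # class logits

logits = MultiLevelPrototypeNet()(torch.randn(8, 256, 4))  # (8, 4) logits
```

The block structure of m falls directly out of the concatenation, so each slice of the multivariable representation can be read back as similarities to one variable's prototypes.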
Objective and Regularization
Our model utilizes cross entropy loss as the objective for the classification task. However, additional regularization must be added in order to ensure that the learned prototype vectors at both the single and multivariable levels are meaningful and easy to interpret. We make use of the following regularization terms, which were introduced in [24]: Prototype Diversity We utilize the prototype diversity loss shown in Equation 1 in order to ensure that the prototypes learned by the prototype matching layers represent unique points in the encoded space. This is important for interpretability, as duplicate prototypes do nothing to enhance the quality of interpretation while also damaging the stability of the interpretations. The use of the logarithm in this loss function ensures that the penalty does not quickly vanish: it yields a high penalty for nearby prototypes and a low penalty for those that are farther apart [24].
Prototype Similarity As in related work on prototype learning for time series classification, such as [26] and [23], prototype vectors are interpreted in this work by projection onto the training set.
In order to ensure that this projection step results in a meaningful representation of the prototype, it is important that each prototype is similar to one of the encoded training examples. We include the regularization term in Equation 2 in order to penalize the distance between each prototype and its closest training example. This encourages every prototype to be close to at least one training example, such that that training example can be considered a meaningful representation of the prototype during projection [24].
L_{\mathrm{similarity}} = \sum_{j=1}^{m} \min_{i \in [1,n]} \lVert p_j - f(x_i) \rVert \qquad (2)

Encoded Space Coverage The interpretability and classification performance of prototype learning hinge on all data points being well represented by the learned prototype set. Thus we include the coverage regularization term in Equation 3 in order to ensure that the learned prototypes adequately cover the entirety of the encoded training points. This term penalizes the distance between every encoded training example and its closest prototype, and therefore penalizes learned prototype sets which neglect certain regions of the encoded space [24]:

L_{\mathrm{coverage}} = \sum_{i=1}^{n} \min_{j \in [1,m]} \lVert f(x_i) - p_j \rVert \qquad (3)
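A sketch of the three regularizers in PyTorch. The similarity and coverage terms follow Equations 2 and 3 directly; since Equation 1 itself is not reproduced in this excerpt, the diversity term below is an assumed log-based form that matches the description (large penalty for nearly coincident prototypes, slowly vanishing as they separate), not necessarily the authors' exact equation.

```python
import torch

def similarity_loss(prototypes, encodings):
    """Equation 2: pull each prototype toward its nearest encoded example."""
    d = torch.cdist(prototypes, encodings)        # (m, n) pairwise distances
    return d.min(dim=1).values.sum()

def coverage_loss(prototypes, encodings):
    """Equation 3: pull every encoded example toward its nearest prototype."""
    d = torch.cdist(encodings, prototypes)        # (n, m)
    return d.min(dim=1).values.sum()

def diversity_loss(prototypes, eps=1e-6):
    """Assumed form of Equation 1 (not reproduced in the text): large penalty
    for nearly coincident prototypes, slowly decaying as they separate."""
    d2 = torch.pdist(prototypes) ** 2             # condensed pairwise distances
    return torch.log(1.0 + 1.0 / (d2 + eps)).sum()
```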
Training Procedure
Encoder Pretraining In order for meaningful prototypes to be selected for each variable, the encoders must be able to extract information from the samples that is meaningful for the classification task. To this end, we propose a pre-training step for the encoders based on contrastive learning [27].
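Reference [27] is not identified in this excerpt, so the sketch below assumes a supervised contrastive objective in which encodings of same-class samples for a given variable act as positives. It is one plausible instantiation of the pretraining step, not necessarily the authors' exact loss.

```python
import torch
import torch.nn.functional as F

def contrastive_pretrain_loss(z, labels, temperature=0.1):
    """Supervised contrastive loss for one variable's encodings z (batch, f_k):
    same-class samples are positives, all others negatives (assumed setup)."""
    z = F.normalize(z, dim=1)
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    logits = (z @ z.t() / temperature).masked_fill(eye, float('-inf'))
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # Mean log-probability of each anchor's positives (anchors without
    # positives contribute zero).
    per_anchor = log_prob.masked_fill(~pos, 0.0).sum(1) / pos.sum(1).clamp(min=1)
    return -per_anchor.mean()
```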
Single-Variable Module Training In the next part of the training process, the single-variable prototype matching layers are trained, resulting in a set of representative prototypes being learned for each variable. Although it seems appropriate to train both the single-variable prototype matching layers and the multivariable prototype matching layer together, this end-to-end approach yields poor results. Thus, the two units are trained separately. In order to train the single-variable prototype matching layers, we construct a new network where the outputs of all the prototype matching layers are concatenated and fed through a fully connected layer to determine a final classification output; a sketch of this auxiliary network is given below. This new network is then trained with the cross entropy loss objective, along with the appropriate regularization terms applied to the single-variable prototype matching layers. It allows the single-variable prototype matching layers to be trained while taking into account the interactions between variables, yet avoids the challenges posed by end-to-end training.
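A sketch of the auxiliary network, reusing the MultiLevelPrototypeNet sketch above; the wrapper class and its name are hypothetical, and the temporary head is discarded before the final training stage.

```python
import torch
import torch.nn as nn

class SingleVariableStage(nn.Module):
    """Hypothetical stage-2 wrapper: trains the encoders and single-variable
    prototype layers through a temporary head that is discarded afterwards."""
    def __init__(self, model, n_classes):
        super().__init__()
        self.model = model                        # MultiLevelPrototypeNet above
        total = sum(p.prototypes.size(0) for p in model.sv_protos)
        self.tmp_head = nn.Linear(total, n_classes)

    def forward(self, X):
        sims = []
        for k in range(len(self.model.encoders)):
            _, (h, _) = self.model.encoders[k](X[:, :, k:k + 1])
            sims.append(self.model.sv_protos[k](h[-1]))
        return self.tmp_head(torch.cat(sims, dim=1))
```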
Multivariable Module Training
In the final stage of training, the multivariable prototype matching and classification layers are trained on the cross entropy loss, along with the interpretability regularizers applied to the multivariable prototype matching layer. As noted previously, only the parameters of the multivariable prototype matching layer and the classification layer are updated in this stage; all other parameters are frozen.
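A minimal sketch of the parameter freezing for this stage, continuing the model sketch above; the optimizer choice and learning rate are illustrative.

```python
# Stage 3 (sketch): freeze everything except the multivariable prototype
# layer and classification head, then optimize only the trainable subset.
for p in model.parameters():
    p.requires_grad_(False)
for module in (model.mv_protos, model.head):
    for p in module.parameters():
        p.requires_grad_(True)

optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3)
```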
Evaluation
We evaluate our framework on the following datasets to verify its ability to both classify with high performance and reveal meaningful interpretations. For these experiments, we utilize the PyTorch deep learning library [28]. All experiments were conducted on an Apple 2015 MacBook Pro with an Intel i7 processor.
Generation of the Dataset Our simulated dataset contains four variables. Three of these variables are designed to contain meaningful information for the classification task, whereas one variable is designed to be irrelevant noise. In order to test the flexibility of our framework in processing time series signals, each of the three relevant variables contains a different type of time series pattern. The first variable contains patterns that are localized to a subsection of the total time series but are shift-invariant: the location of the pattern subsequence relative to the entire time series should not affect classification. The second variable contains patterns that are also restricted to a subsequence of the time series but are shift-variant: the location of the pattern subsequence with respect to the entire time series is meaningful. Finally, the last relevant variable contains a frequency domain pattern. Each of the three meaningful variables has four pattern states that it can exhibit, as shown in Figure 2. We refer to these states as variable-level patterns.
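A minimal NumPy sketch of the three pattern generators (class assembly from pattern combinations is described next). The waveforms, series length, and noise levels are illustrative assumptions, as the excerpt does not give the exact generating functions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256  # time steps per variable (illustrative; the paper's value is not given)

def shift_invariant(state):
    """Variable 1: a state-specific bump at a random location."""
    x = np.zeros(N)
    start = rng.integers(0, N - 32)
    x[start:start + 32] = np.hanning(32) * (state + 1)
    return x

def shift_variant(state):
    """Variable 2: an identical bump whose location encodes the state."""
    x = np.zeros(N)
    start = state * (N // 4)
    x[start:start + 32] = np.hanning(32)
    return x

def frequency_pattern(state):
    """Variable 3: a sinusoid whose frequency encodes the state."""
    t = np.arange(N)
    return np.sin(2 * np.pi * (state + 1) * 4 * t / N)

def make_sample(c):
    """Class c in [0, 64) decodes to one pattern state per relevant variable;
    the fourth variable is class-independent noise (simplified here)."""
    s1, s2, s3 = c % 4, (c // 4) % 4, (c // 16) % 4
    X = np.stack([shift_invariant(s1), shift_variant(s2),
                  frequency_pattern(s3), rng.standard_normal(N)], axis=1)
    return X + 0.1 * rng.standard_normal(X.shape), c
```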
A particular class in the simulated dataset is determined by the combination of the patterns found in the three variables. We create a class corresponding to each combination of variable-level patterns in the relevant variables, resulting in 4^3 = 64 classes. As a result of this design, the contributions of all three relevant variables are necessary to correctly classify a point. During sample generation, 100 points are generated per class and appropriate types of noise are added to each variable. In addition, the irrelevant variable is randomly sampled from a set of patterns, independently of the class.

Single-Variable Encoded Spaces and Prototypes On the simulated dataset, our framework is able to consistently achieve 1.00 hold-out test set classification accuracy, as is expected from the prescribed structure present in the dataset. In order to investigate interpretability, we begin by visualizing the encoded space corresponding to each variable using UMAP [29] in Figure 3.A. The encoded spaces for the three relevant variables each contain 4 clusters, and each cluster corresponds to one of the variable-level patterns implanted in the dataset. The irrelevant variable, on the other hand, does not exhibit the same structure because it does not contain any relevant information for classification.
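A minimal sketch of this visualization step, assuming the umap-learn package; plotting details are illustrative.

```python
import umap                       # umap-learn package
import matplotlib.pyplot as plt

def show_encoded_space(encodings, labels, title):
    """2-D UMAP projection of one variable's encoded space, coloured by class."""
    xy = umap.UMAP(random_state=0).fit_transform(encodings)   # (n, 2)
    plt.scatter(xy[:, 0], xy[:, 1], c=labels, s=5, cmap="tab20")
    plt.title(title)
    plt.show()
```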
In the simulated dataset, the time series signal in any given variable alone is insufficient to classify the data point. As such, it is notable that the contrastive pretraining step, which only takes into account the classes of the data points and treats each variable independently, is able to learn the underlying patterns present in each variable. This demonstrates the effectiveness of the contrastive loss training even when classes can only be determined by integrating information across several variables.
For the simulated dataset, the number of single-variable prototypes was set to four by validation. The result of the single-variable prototype learning is shown in Figure 3.B. As is seen by comparing Figure 3.B and Figure 2, in each relevant variable there is one prototype corresponding to each of the implanted variable-level patterns. The learned multivariable prototypes are shown in Figure 4.A, where each row of the heat map shows one prototype. Our framework learns one prototype corresponding to each class. In the blocks corresponding to the three relevant variables, the prototype vector for a given class gives a one-hot encoded representation of the single-variable patterns used in constructing that class. On the other hand, the block corresponding to the irrelevant variable shows low, uniform values, indicating that there is no preference for a particular pattern in this variable given the class. Thus, we observe that the implanted patterns in this dataset are explicitly recovered in an easily understandable way by our framework. Figure 4.B shows examples of using the multivariable prototypes in conjunction with the single-variable prototypes for interpretation.
Epilepsy Dataset
In order to validate the method on a real-world classification problem, we selected the Epilepsy dataset in the UEA Time Series repository [30]. The dataset consists of triaxial accelerometer data collected from the dominant wrist of subjects completing four activities: running, walking, sawing, and seizure mimicking [31]. Each activity lasts for 30 seconds and the accelerometer sampling rate is 16 Hz [31]. The classification objective is to predict the activity from the accelerometer readings.
Encoded Space Visualization and Single-Variable Prototypes Our framework is able to achieve 0.94 accuracy with a standard deviation of 0.04 on the hold-out test set, comparable to existing time series classification methods [30]. To examine the single-variable encoded spaces and prototypes, we use the UMAP technique, as shown in Figure 5. As seen in the figure, each class has a cluster in variables 2 and 3, indicating that the classes are easily distinguishable in these variables. In variable 1, the running and walking class data points lie in their own clusters, while the epilepsy and sawing classes overlap. Moreover, the epilepsy class is split into two clusters, which indicates that there are multiple patterns in the first variable associated with epilepsy.
The number of single-variable prototypes for this task was set to 6 by validation and their locations are depicted in Figure 5 by magenta stars. We find that each cluster has at least one prototype assigned to it and that clusters that are more spread out have more prototypes assigned to them. These observations validate the ability of our framework to identify a diverse set of single-variable prototypes that represent the total variation present in each variable.
Multivariable Prototypes
We set the number of multivariable prototypes in this task to 4, based on the number of classes. Visualizations of the learned prototypes are presented in Figure 6. Overall, our interpretations clearly reveal that the frequency and amplitude of the signals distinguish the different classes. For example, the sawing class exhibits signals with high frequency but low and uniform amplitudes, consistent with what is expected for a repetitive and controlled movement. The running class, on the other hand, shows signals with similarly high frequencies but larger and varying amplitudes. The slowest activity, walking, is seen to have low frequency oscillations and low amplitudes.
For the running, sawing, and walking classes, the multivariable prototype vectors take the form of one-hot encoded vectors, selecting one prototype from each variable. This reveals that these classes are consistently associated with only one single-variable prototype per variable and that there is no overlap in patterns among these classes; they are completely distinguishable in all variables. For the epilepsy class, on the other hand, there are two entries in the first variable block that are non-zero. This indicates that data points of the epilepsy class show one of two patterns in the first variable, which is verified by the UMAP visualizations in Figure 5. Moreover, we observe that one of these prototypes is also selected in the sawing class. This reveals that these two activities generate similar patterns in the first variable and that the first variable may not be as useful in distinguishing these classes. In this way, we see that our method can provide fine-grained feature importance information, revealing the importance of a particular variable with respect to distinguishing individual classes. Notably, the discovery of multiple patterns corresponding to the epilepsy class, as well as the similarities between the sawing and epilepsy classes, are a direct result of the inherently interpretable multivariable prototypes introduced in this work.

Figure 6: Interpretation of multivariable prototypes for the epilepsy classification task. The multivariable prototype vectors are split into the blocks corresponding to each variable. For each variable block, we represent the single-variable prototype with the highest similarity score by its projection onto the training set.
One limitation of our current work is the need to manually determine the number of single- and multivariable prototypes. Future work will examine methods to automate this choice or make it more systematic. Another potential area for future work is enhancing the complexity of cross-variable patterns that can be represented by our framework. This could potentially be achieved by introducing metric learning in the multivariable representation space, instead of using our current similarity measure. Finally, for more complex or noisy variables, the single-variable prototypes alone could be insufficient to identify the meaningful features of the time series. Thus, interpretability could be enhanced by introducing an attention mechanism to the single-variable encoders, potentially providing fine-grained identification of meaningful time points and subsequences within the single-variable prototypes.
We predict that this work will have a net beneficial impact on society. As previously noted, multivariable time series classification problems are ubiquitous in high-impact areas such as medicine and scientific research. In this work, we have provided a method for addressing these problems that explains its predictions. Although nearly all machine learning models can be harmful given biases in their training data, the explanations provided by our method can aid in detecting these biases. Ultimately, we expect that our framework will serve as a powerful tool for researchers in extracting scientific information from large and complex datasets. | 2021-06-18T01:15:54.660Z | 2021-06-17T00:00:00.000 | {
"year": 2021,
"sha1": "9561d3a1f575900037bf9a3831f447dd641d55c9",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "9561d3a1f575900037bf9a3831f447dd641d55c9",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
6714862 | pes2o/s2orc | v3-fos-license | Dopamine Regulation of Social Choice in a Monogamous Rodent Species
There is growing appreciation that social decision making in humans is strongly influenced by hedonic and emotional processing. The field of social neuroeconomics has shown that neural systems important for reward are associated with social choice and social preferences in humans. Here, we show that the neurobiology of social preferences in a monogamous rodent species, the prairie vole, is also regulated by neural systems involved in reward and emotional processing. Specifically, we describe how mesolimbic dopamine transmission differentially mediates the formation and maintenance of monogamous pair bonds in this species. Thus, reward processing exerts tremendous regulation over social choice behaviors that serve as the foundation of a rather complex social organization. We conclude that prairie voles are an excellent model system for the neuroscience of social choice and that complex social decision-making can be robustly explained by reward and hedonic processing.
INTRODUCTION

In social contexts, decision-making is significantly influenced by positive or negative concern for the welfare of others (Fehr and Camerer, 2007). Humans display strong social preferences that are revealed through choice behavior in which people behave altruistically, act on a strong sense of fairness, and have tremendous capacities to trust (Krueger et al., 2007; Sanfey, 2007; Tankersley et al., 2007; Zak et al., 2004). Indeed, social decision-making in humans is so complex that it can appear to be the result of social cognition that is exclusive to our species (Skuse and Gallagher, 2009). However, from an evolutionary perspective, pro-social behaviors such as cooperation and trust are only ostensibly irrational or selfless (Rilling et al., 2002; Sanfey, 2007). Such behaviors are the result of selection processes that favored reciprocity among close social groups, in which it was adaptive for individuals to spend relatively small amounts of energy to help unrelated members of the group in order to receive relatively large benefits of the resulting social organization (Pfeiffer et al., 2005; Rutte and Taborsky, 2007; Trivers, 1971). From this perspective, we can expect analogous pro-social behaviors to be expressed by other species that can serve as effective laboratory models and thus allow the investigation of the neural mechanisms of social choice behavior and decision-making.

Here, we describe how the use of one such model system, the socially monogamous prairie vole (Microtus ochrogaster), has significantly advanced our understanding of the neural regulation of social choice behavior (Dewsbury, 1987; Getz and Carter, 1996; Young and Wang, 2004). We first provide a brief overview of prairie vole behavior and suggest that the complex social organization of this species can be largely achieved by two 'choice' behaviors: the initial preference of a familiar mate and the decision to avoid or aggressively reject potentially new mates (Carter …). We then highlight data from several recent studies that describe the regulation of prairie vole social behavior by neural transmission important for emotion and reward processing: dopamine (DA) signaling within the nucleus accumbens (NAc) (Aragona and Wang, 2007; Aragona et al., 2003, 2006). Finally, we compare these findings to studies that have examined the neural regulation of social decision-making in humans (Kosfeld et al., 2005; Rilling et al., 2002). These comparisons reveal striking similarities in the neuroscience of social choice behaviors between humans and prairie voles, suggesting that prairie voles are an excellent model system for the study of social decision-making. Moreover, the fact that a rather large extent of the social organization of prairie voles can be largely explained by rather simple choice behaviors regulated by emotional processing may have very interesting implications for the study of social neuroeconomics (Cacioppo et al., 2000; Lee, 2008).

THE PRAIRIE VOLE MODEL

Prairie voles are small rodents (∼40 g) (Figure 1A) distributed primarily in the grasslands of the central United States (Cushing et al., 2001; Hall, 1981; Hoffmann and Koeppl, 1985). These rodents are among the minority of mammalian species (3-5%) that show a monogamous social organization (Dewsbury, 1987). The foundation of this social organization is the 'pair bond', which is defined as the stable relationship between members of a breeder pair that share common territory and parental duties. This species was initially identified as monogamous by field studies which showed that male-female pairs travel together (Getz et al., 1981), share a nest with one or more litters of pups (Getz and …), and … their territory (Getz, 1978). Further, male prairie voles show high levels of parental care (Getz and Carter, 1996; Thomas and Birney, 1979) and it has been suggested that both parents are necessary for pup survival, which selected for highly enduring pair bonds (Emlen and Oring, 1977; Kleiman, 1977; McGuire et al., 1993; Wang and Novak, 1992). Indeed, the pair bond is so stable that a surviving member of the pair will not accept a new mate even if the other member of the bond is lost (Getz and Carter, 1996; Thomas and Wolff, 2004). This represents a strong example of behavior that is not in the self-interest of the animal and is therefore in conflict with classic economic models of rational decision-making.
Importantly, the monogamous behaviors observed in nature are also reliably expressed under laboratory conditions (Carter and Getz, 1993; Carter et al., 1995). For instance, prairie voles preferentially mate with a familiar partner versus a novel conspecific (Dewsbury, 1975, 1987; Gray and Dewsbury, 1973). After mating, prairie voles remain together during gestation (McGuire and Novak, 1984; Thomas and Birney, 1979) and this facilitates a successful pregnancy (McGuire et al., 1992). As in their natural environment, male prairie voles show very high levels of parental care in the lab (Oliveras and Novak, 1986). Most importantly, pair bonding can be reliably assessed in the lab by measuring social preferences inferred from choice behaviors associated with the formation and maintenance of the pair bond (Williams et al., 1992; Winslow et al., 1993; Young and Wang, 2004).
LABORATORY TESTS OF PAIR BOND FORMATION AND MAINTENANCE
This review will focus on data collected from male subjects (Aragona and Wang, 2007; Aragona et al., 2003, 2006). However, there has been extensive work conducted on female prairie voles (Cho et al., 1999; Fowler et al., 2002; Insel and Hulihan, 1995; Williams et al., 1992; Witt et al., 1991) and it will be noted when data were collected using female subjects. A necessary first step in pair bond formation is that males must prefer their familiar partner over new mates, which is very unusual for males in most mammalian species since they reliably prefer to mate with novel females (Fiorino et al., 1997). However, male prairie voles prefer to mate with a familiar female (Dewsbury, 1987) and the presentation of new females does not induce copulation in sexually satiated male prairie voles (Gray and Dewsbury, 1973).
In addition to choosing to mate with a familiar female, pair bonding also requires that males choose to cohabitate with their familiar partners. This is determined in the lab by a simple social choice test referred to as the 'partner preference test' (Williams et al., 1992). For this test, a subject is placed into a three-chambered apparatus and is free to move about the chambers (Figure 1B). The familiar mate (partner) and an unfamiliar female (stranger) serve as stimulus animals that are tethered in separate cages (Figure 1B). Subjects initially explore the apparatus and interact with both stimulus animals and then lay down beside either the partner or the stranger (Williams et al., 1992; Winslow et al., 1993). If subjects spend significantly more time in side-by-side contact with partners over strangers (assessed by a t-test), then the group is said to show a partner preference (Curtis and Wang, 2005; Liu et al., 2001).

Figure 1: (B) Cartoon of the partner preference apparatus. Each cage is identical and food and water are available ad libitum throughout the 3-h test. (C) Male prairie voles paired with an estrogen-primed female for 24 h show a robust partner preference, i.e. spend significantly more time in side-by-side contact with their familiar mates (partners) compared to novel females that are also estrogen primed (strangers). (D) Male prairie voles paired with an ovariectomized female that is not estrogen primed for only 6 h do not show partner preferences, i.e. they display non-selective side-by-side contact. Error bars = standard error and * indicates groups are significantly different as determined by a t-test.
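A minimal sketch of the preference criterion described above, assuming a paired t-test across subjects (the text does not state whether the test is paired); the contact times are hypothetical values for illustration only.

```python
from scipy import stats

# Hypothetical side-by-side contact times (minutes per 3-h test), per subject.
partner_time = [62, 48, 71, 55, 80, 66, 59, 73]
stranger_time = [21, 30, 18, 25, 33, 20, 28, 24]

# A significant difference favouring the partner is read as a group-level
# partner preference.
t, p = stats.ttest_rel(partner_time, stranger_time)
print(f"t = {t:.2f}, p = {p:.4f}")
```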
Many studies have demonstrated that male prairie voles paired with an estrogen-primed female for 24 h of mating reliably show partner preferences (Aragona et al., 2003; Lim and Young, 2004; Liu et al., 2001) (Figure 1C). However, if male subjects cohabitate with females for only 6 h without mating, subjects show non-selective side-by-side contact and thus fail to show partner preferences (Aragona and Wang, 2007; Curtis and Wang, 2005; Liu et al., 2001) (Figure 1D). Thus, we utilize the '24 h mating' paradigm to reliably induce partner preferences in control conditions and examine if pharmacological manipulations can prevent mating-induced pair bond formation. Additionally, we use the '6-h cohabitation' paradigm to examine if pharmacological manipulations can induce partner preferences in the absence of mating (Young and Wang, 2004).
While a partner preference is necessary for a pair bond, it is not sufficient for its long-term maintenance. Pair bonded males also choose to aggressively reject potentially new mates (Aragona et al., 2006; Gobrogge et al., 2007). This is referred to as 'selective aggression' and is studied in the lab using a resident-intruder test in which the subject is exposed to novel conspecifics and aggressive behavior is quantified (Wang et al., 1997; Winslow et al., 1993). While 24 h of mating increases selective aggression (Wang et al., 1997; Winslow et al., 1993), aggressive behavior is increased much more toward male intruders (compared to novel females) and male subjects do not chase or bite female intruders following 24 h of mating (Wang et al., 1997). Conversely, following an extended cohabitation (2 weeks) in which females become pregnant, males become extremely aggressive toward novel females (showing high levels of chasing and biting) (Aragona et al., 2006; Gobrogge et al., 2007), and this decision to aggressively reject potentially new mates is critical for the stable maintenance of the pair bond.
In this review, we will consider the extent to which the monogamous social organization of prairie voles can be explained by (1) the initial choice to breed with a single female, the 'partner preference', and (2) the subsequent choice to reject potential new mates, selective aggression. Having these well-established laboratory indices allows detailed examination of the neurobiology underlying these behaviors. As pair bonding involves a myriad of cognitive and psychological processes, it is not surprising that a wide range of neural systems are important for its regulation, including: oxytocin (Bales et al., 2007; Bamshad et al., 1993; Insel and Shapiro, 1992; Liu and Wang, 2003; Witt et al., 1990), vasopressin (Bamshad et al., 1994; Hammock and Young, 2005; Lim et al., 2004b; Liu et al., 2001; Winslow et al., 1993), corticosterone (DeVries et al., 1996; Lim et al., 2007), estrogen (Cushing and Wynne-Edwards, 2006), glutamate and GABA (Curtis and Wang, 2005). This list will certainly grow as more experiments are conducted and almost nothing is known about how these systems interact to regulate pair bonding. Thus, an extraordinary amount of work remains. However, we have recently conducted a series of studies demonstrating the significant involvement of mesolimbic DA transmission in pair bond formation and maintenance in male prairie voles (Aragona and Wang, 2007; Aragona et al., 2003, 2006).
NUCLEUS ACCUMBENS DOPAMINE AND PAIR BOND FORMATION
Pair bond formation is a naturally occurring association formed between monogamous mates (Aragona et al., 2006; Wang and Aragona, 2004; Young and Wang, 2004) and associative learning is significantly regulated by mesolimbic DA transmission (Di Chiara and Bassareo, 2007; Kelley, 2004; Wise, 2004). In particular, DA transmission within the NAc is critical for important aspects of reward processing (Berridge and Robinson, 2003; Everitt and Robbins, 2005; Roitman et al., 2005, 2008; Salamone and Correa, 2002; Wheeler et al., 2008) that may underlie cost-benefit analyses related to choice behavior and decision-making (Phillips et al., 2007). Therefore, we conducted a series of studies that investigated the regulation of partner preference formation by DA transmission within the NAc (Aragona and Wang, 2007; Aragona et al., 2003, 2006). Similar to other rodent species (Jansson et al., 1999), prairie vole NAc is densely innervated by dopaminergic terminals arising from the ventral midbrain (Figure 2A) (Aragona et al., 2003; Curtis and Wang, 2005; Gobrogge et al., 2007). Also consistent with studies conducted in rats (Becker et al., 2001; Pfaus et al., 1995; Robinson et al., 2002), microdialysis measures indicate that mating increases extracellular DA concentration within the NAc of female prairie voles (Gingrich et al., 2000) and tissue extraction studies show that mating also increases dopamine transmission (as indicated by dopamine turnover) in male prairie voles (Figure 2B) (Aragona et al., 2003). These studies suggest that mating evokes modest increases in DA concentration within the NAc during copulation in prairie voles.
We hypothesized that mating-evoked increases in DA transmission were necessary for partner preference formation (Aragona et al., 2003). To test this, we first examined if blockade of DA receptors within the NAc prevented mating-induced partner preferences (Figure 2C). Consistent with previous studies (Williams et al., 1992; Winslow et al., 1993), control animals that received microinfusions of artificial cerebrospinal fluid (CSF) within the NAc prior to the 24-h cohabitation period (with mating) showed robust mating-induced partner preferences (Figure 2C). However, blockade of DA receptors with the non-selective DA receptor antagonist haloperidol prior to the mating period abolished mating-induced partner preference formation (Figure 2C). Importantly, DA receptor blockade did not alter locomotor activity or mating behavior, indicating that DA transmission within the NAc during mating directly influenced social choice that was a consequence of mating (Aragona et al., 2003).
We next tested if pharmacological activation of DA receptors within the NAc was sufficient to induce partner preference formation in the absence of mating (Aragona et al., 2003). As previously described (Williams et al., 1992; Winslow et al., 1993), control subjects that received CSF infusions into the NAc prior to the 6-h cohabitation period did not show partner preferences (Figure 2D). However, low dose infusion of the non-selective DA agonist apomorphine induced a significant partner preference, whereas high dose infusion of apomorphine did not (Figure 2D). These data show that pharmacological activation of DA receptors within the NAc is sufficient to facilitate choice of familiar partners.
OPPOSING REGULATION OF PAIR BOND FORMATION BY D1 AND D2 RECEPTOR SIGNALING PATHWAYS IN THE NAc SHELL
Facilitation of partner preferences by low dose apomorphine is indicative of the receptor-specific mechanism underlying DA regulation of this behavior. There are two families of DA receptors: D1-like (D1 and D5 receptors) and D2-like (D2, D3, and D4 receptors) (Neve et al., 2004). While apomorphine binds both D1- and D2-like receptors, it binds D2-like receptors with a much greater affinity (Missale et al., 1998). Thus, we hypothesized that low dose apomorphine preferentially activated D2- but not D1-like receptors and therefore induced partner preference formation via a D2-mediated mechanism in male prairie voles (Aragona et al., 2006). Additionally, the failure of high dose apomorphine to induce partner preferences suggests that activation of D1-like receptors within the NAc actually prevents pair bond formation. These hypotheses were evaluated by testing the effects of receptor-specific dopaminergic drugs on our two established paradigms to examine partner preference formation.
Consistent with data from female prairie voles (Gingrich et al., 2000; Wang et al., 1999), specific activation of D2-like receptors within the NAc shell (but not the NAc core) induced partner preferences in the absence of mating (Figure 3A). Activation of D1-like receptors within the NAc shell not only failed to induce partner preferences, but also prevented partner preferences induced by D2-like activation (i.e. when D1 and D2 agonists were co-infused) (Figure 3A). Importantly, D1-like activation within the NAc shell also blocked mating-induced partner preferences (Figure 3B). Together, these data demonstrate that activation of D1-like receptors within the NAc shell prevents the formation of partner preferences. D1- and D2-like receptors have opposite effects on cAMP signaling (Neve et al., 2004). D2-like receptors activate inhibitory G-proteins, which prevents conversion of ATP to cAMP by adenyl cyclase (Missale et al., 1998). Conversely, activation of D1-like receptors activates stimulatory G-proteins, which increases cAMP production and thus activation of protein kinase A (PKA) (Missale et al., 1998). Decreased cAMP production can be studied by pharmacological blockade of cAMP binding sites on PKA using a cAMP analogue (Rp-cAMPS), whereas increased cAMP production is assessed using a cAMP analogue that binds PKA and releases its regulatory subunits (Sp-cAMPS) (Lynch and Taylor, 2005; Self et al., 1998).
Given that D2-like activation within the NAc shell mediates partner preference formation, we hypothesized that reduced PKA activity would also facilitate this behavior. Consistent with D2 regulation of pair bond formation, decreasing the activity of PKA (using Rp-cAMPS) induced partner preferences in the absence of mating (Figure 3C). Conversely, increasing activation of PKA (using Sp-cAMPS) failed to induce partner preferences (Figure 3C). As expected, decreased PKA activity did not alter mating-induced pair bond formation (Figure 3D). However, consistent with D1-like activation preventing pair bond formation, increased activation of PKA prevented mating-induced pair bond formation (Figure 3D). Together, these data indicate that pair bond formation is facilitated by D2-like activation and subsequent decreased activity of the cAMP-signaling pathway. Conversely, D1-like activation and subsequent increased activation of PKA prevent pair bond formation.
UP-REGULATION OF D1-LIKE DA RECEPTORS WITHIN THE NAc OF PAIR BONDED ANIMALS
There are dramatic behavioral alterations as male prairie voles transition from sexually naive to fully pair bonded. Specifically, sexually naive males primarily show pro-social behaviors toward novel females, whereas pair bonded males avoid or attack novel females. Given the significant role of DA transmission within the NAc in partner preference formation, we expected that alterations in this DA signaling system were associated with behavioral alterations associated with pair bonding (Aragona et al., 2006). We used receptor autoradiography to compare DA receptor density between sexually naive male prairie voles and males that were paired with a female for 2 weeks. During this extended cohabitation, males and females shared a nest and the females became pregnant (Aragona et al., 2006). Representative examples of receptor binding clearly demonstrate that D1-like receptors (Figure 4A) but not D2-like receptors (Figure 4B) are substantially increased within the NAc in pair bonded males. Quantitative data show that D1-like receptor binding was significantly increased within the NAc in pair bonded males compared to that of sibling-paired controls (Figure 4C). A separate control group showed that mating alone was not sufficient to increase D1-like receptor binding (Aragona et al., 2006). Thus, pair bonded animals have an enhanced D1-like signaling system within the NAc, and since this system is antagonistic to partner preference formation, we next tested if this neural restructuring is responsible for pair bond maintenance.
NEURAL REORGANIZATION WITHIN THE NAc UNDERLIES PAIR BOND MAINTENANCE
Given that pair bonded animals have increased D1-like receptor expression within the NAc and show high levels of aggression toward novel females, we tested if this neural restructuring was associated with increased aggression. Specifically, we used a resident-intruder test to determine if up-regulation of D1-like receptors within the NAc mediates the aggressive rejection of potentially new mates, i.e. selective aggression (Gobrogge et al., 2007; Wang et al., 1997; Winslow et al., 1993). In this test, the female partner was removed from the home cage and both affiliative (Figure 5A) and aggressive (Figure 5B) behavior of the male subject was examined following introduction of an 'intruder' female (Wang et al., 1997; Winslow et al., 1993). Pair bonded males showed significantly higher levels of affiliative behavior toward their familiar partners compared to that shown by sexually naive males presented with a novel female (Figure 5C). While pair bonded males show almost no affiliative behavior toward novel females (strangers) (Figure 5C), affiliative behavior is returned to levels expressed by sexually naive subjects if either D2- or D1-like receptors were blocked within the NAc (Figure 5B).
Neither sexually naive males presented with a novel female nor pair bonded males presented with their partner showed aggressive behavior (Figure 5D). However, pair bonded males were extremely aggressive when presented with novel females (strangers), showing a significant increase in the number of attacks (Figure 5D). Aggressive behavior was abolished by blockade of D1-like (but not D2-like) receptors within the NAc (Figure 5D). These data show that the up-regulation of D1-like receptors described above (Figure 4) mediates selective aggression. Thus, plasticity within the mesolimbic DA system underlies the decision to reject potentially new mates and thus maintains the initial pair bond.
SUMMARY OF DOPAMINE REGULATION OVER PAIR BOND FORMATION AND MAINTENANCE
Mesolimbic DA regulation of pair bonding may have implications for cognitive and psychological processes associated with social choice and decision-making. DA transmission that mediates partner preference formation occurs specifically within the rostral portion of the NAc shell (Aragona et al., 2006) (Figure 6A). This sub-region is critical for processing positive affect and unconditioned aspects of associative learning (Di Chiara and Bassareo, 2007; Ikemoto, 2007; Pecina et al., 2006). Thus, DA transmission within the NAc shell may regulate partner preference formation through enhanced reward processing or incentive motivation (Berridge, 2007; Di Chiara and Bassareo, 2007). Additionally, DA transmission within the NAc shell is also important for mother-offspring bonds, which is an inherently rewarding social attachment (Champagne et al., 2004; Li and Fleming, 2003; Numan et al., 2005). Together,
these data suggest that reward processing is a critical component of partner preference formation in prairie voles. Within the NAc shell, DA regulation of partner preference formation is highly specific. Mating-induced DA release selectively activates D2-like receptors and decreases cAMP signaling to promote pair bond formation (Figure 6B). Conversely, activation of D1-like receptors and increased activation of cAMP signaling prevents pair bond formation (Figure 6C). These data indicate that, under natural circumstances, DA transmission is not uniformly increased as it is under certain laboratory conditions (Schultz, 2002). Rather, the pair bonding studies suggest that prairie vole social interactions result in modest increases in extracellular DA concentration that selectively activate high affinity D2-like receptors while not activating low affinity D1-like receptors (Richfield et al., 1989). However, it will be necessary for future studies to test this by measuring real-time DA transmission (Aragona et al., 2008; Day et al., 2007; Phillips et al., 2003) during prairie vole social interactions to determine if in vivo DA transmission is consistent with the behavioral pharmacology described in this review.
Compared to their basal state (Figure 6D), pair bonded males show a robust increase in the surface expression of D1-like receptors within the NAc (Figure 6E). We have suggested this may be a compensatory increase following the lack of D1-like receptor activation during social interactions that promote pair bond formation (Aragona et al., 2006). Since pair bonded males show an up-regulation in D1-like receptors within the NAc and activation of these receptors prevents pair bond formation, we have suggested that when pair bonded males in their natural environment encounter a novel female, DA is released in very high concentration (Robinson et al., 2002), sufficient to activate low affinity D1-like receptors (Richfield et al., 1989), especially since there appear to be a greater number of antagonistic D1-like receptors in pair bonded voles. This promotes the aggressive rejection of potentially new mates and thus represents an elegant mechanism for maintenance of the initial pair bond. Taken together, these data demonstrate that DA transmission within the NAc differentially mediates initial partner preference formation and the subsequent rejection of potentially novel mates. This is achieved, at least in part, by neuroplasticity (up-regulation of D1-like receptors) within this mesolimbic DA signaling system. This represents a powerful example in which a complex monogamous social organization can be significantly accounted for by two rather straightforward choice behaviors that are both mediated by emotional/reward processing by mesolimbic DA signaling.

Figure 5: … presented with unfamiliar females (strangers). Blockade of either D1- or D2-like receptors restores affiliative behavior in pair bonded males to levels expressed by sexually naive males being exposed to a female for the first time. (D) While sexually naive (presented with a female) and pair bonded males (presented with their partners) show no aggressive behavior, pair bonded males show significantly greater levels of aggression when presented with a novel female (stranger). Selective aggression is blocked by D1-like (but not D2-like) receptors within the NAc.
DOPAMINE-OXYTOCIN INTERACTIONS AND PARTNER PREFERENCE FORMATION

Despite the critical role of DA in pair bonding, DA interacts with multiple neuropeptide systems in its regulation of this behavior (Lim et al., 2004b; Young and Wang, 2004). In particular, DA interactions with oxytocin receptors within the NAc are essential for pair bond formation. Activation of D2-like receptors within the NAc facilitates partner preference formation in the absence of mating; however, blockade of oxytocin receptors within this region (by co-infusion of an oxytocin receptor antagonist and a D2-like receptor agonist) prevents partner preferences induced by D2 activation. Further, facilitation of partner preference formation by activation of oxytocin receptors is not effective if D2-like receptors are blocked. Importantly, this study was conducted in female prairie voles; however, we have also shown that oxytocin receptors within the NAc are critical for partner preference formation in males (M. Smeltzer and Z. Wang, unpublished observations). While the mechanism of DA-oxytocin interactions is unknown, selective lesions of dopaminergic terminals in prairie voles did not reduce oxytocin receptor expression within the NAc (Lim et al., 2004a). This indicates that oxytocin receptors in this region are postsynaptic. Further, since oxytocin and D2-like receptors are both coupled to inhibitory G-protein signaling molecules (Burns et al., 2001), activation of both types of receptors may facilitate partner preference formation by inhibition of cAMP signaling pathways (Aragona and Wang, 2007). While existing data suggest that pair bond formation is mediated by co-activation of both oxytocin and D2-like DA receptors (Gingrich et al., 2000; Liu and Wang, 2003; Young et al., 2001), it is possible that they represent parallel systems that co-exist within the NAc. Future studies are needed to understand if DA and oxytocin receptor systems directly interact, and if so, determine if these interactions occur on the same or connected cells. Still, additional studies are required to understand DA interactions with the signaling systems critical for pair bonding but located outside of the NAc (such as vasopressin within the ventral pallidum; Lim et al., 2004b).
COMPARISON BETWEEN NEURAL REGULATION OF SOCIAL REWARD IN PRAIRIE VOLES AND HUMANS
Interestingly, the neural regulation of mate choice in humans also involves DA signaling systems. Specifically, presentation of a picture of one's partner increases activation of dopaminergic circuitry in a similar manner as that caused by monetary reward (Zald et al., 2004). Thus, mate choice in humans may involve primary motivational or rewarding processes that are consistent with those observed in prairie voles. As such, the neural basis of partner preferences in prairie voles represents an excellent model for these aspects of mate choice in humans. Moreover, these findings suggest that understanding the neurobiology of reward processing is critical for understanding the neurobiology of social choice and decision-making (Loewenstein et al., 2008; Sanfey, 2007; Zak, 2004). Indeed, it has been suggested that pro-social behaviors may be achieved by activation of reward circuitry that promotes cooperative behavior, in part, by facilitating positive emotions (Harbaugh et al., 2007), including feelings of trust (Rilling et al., 2002).
Trust is an essential component of human social organization and recent studies have shown that one neuropeptide critical for NAc regulation of pair bonding in voles, oxytocin, is critical for trust behavior in humans. The involvement of oxytocin in trust behavior was examined using a trust game, in which one player acts as an 'investor' that must choose whether or not to give money to a second player. If the 'investor' gives money to the second player, the amount of money in the game is increased and the 'investor' hopes that (during the second player's turn) the second player will reciprocate, giving the investor back more money than originally invested (Kosfeld et al., 2005). This is a one-trial game, so there is nothing to stop the second player from simply keeping all of the money. Thus, there is significant cost for the first player to trust that the second player will reciprocate. Interestingly, intra-nasal administration of oxytocin increased the ability of the 'investor' to overcome the risk associated with trust and increased the amount of money that the 'investor' gives to the second player (Kosfeld et al., 2005). Therefore, oxytocin appears to play a critical role in pro-social behavior in both humans and prairie voles.
CONCLUSION
The current review emphasizes some striking similarities between the neurobiology underlying pro-social behaviors in humans and prairie voles. As such, the prairie vole model is likely to be a powerful tool to investigate the neural regulation of social choice in more invasive ways that are not possible when using human subjects. While the prairie vole field is still in its infancy, experiments using this species clearly demonstrate that mesolimbic DA transmission is essential for social choice. Given that this system mediates aspects of reward and emotional processing, its involvement in social decision-making among humans may explain why humans often display strong social preferences rather than always acting out of pure self-regard (Camerer and Fehr, 2006; Fehr and Camerer, 2007; Sanfey, 2007). As the field of social neuroeconomics advances, it continues to consider whether social decision-making is best conceptualized as rational decision-making that is complicated because it involves more than one agent and thus requires more sophisticated learning algorithms (Lee, 2008), or if it is more informative to regard social decision-making as largely guided by emotional social motivation and hedonic processing (Sanfey et al., 2003; Skuse and Gallagher, 2009). While social decision-making certainly involves both reasoning and emotional processing, data from the prairie vole model demonstrate how a complex social organization can be achieved by a relatively small number of rather simplistic choice behaviors that are significantly mediated by reward processing. This supports the view that selection favored organisms that dealt with complex decisions by acting according to the degree of pleasure or displeasure likely to be associated with their behavioral response (Cabanac et al., 2009). Thus, while brains appear to be capable of an impressive capacity for logic and reason, very complex phenomena, such as social decision-making and cognition, can also be robustly explained by hedonic and emotional processing.
"year": 2009,
"sha1": "c0e80942e27a37397e6388da65aa378247403a75",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/neuro.08.015.2009/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c0e80942e27a37397e6388da65aa378247403a75",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
91864372 | pes2o/s2orc | v3-fos-license | Changes in social groups across reintroductions and effects on post-release survival
Reintroductions are essential to many conservation programmes, and thus much research has focussed on understanding what determines the success of these translocation interventions. However, while reintroductions disrupt both the abiotic and social environments, there has been less focus on the consequences of social disruption. Therefore, here we investigate if moving familiar social groups may help animals (particularly naïve juveniles) adjust to their new environment and increase the chances of population establishment. We used social network analysis to study changes in group composition and individual sociality across a reintroduction of 40 juvenile hihi (Notiomystis cincta), a threatened New Zealand passerine. We collected observations of groups before a translocation to explore whether social behaviour before the reintroduction predicted associations after, and whether reintroduction influenced individual sociality (degree). We also assessed whether grouping familiar birds during temporary captivity in aviaries maintained group structure and individual sociality, compared to our normal translocation method (aviaries of random familiarity). Following release, we measured if survival depended on how individual sociality had changed. By comparing these analyses with birds that remained at the source site, we found that translocation led to re-assortment of groups: non-translocated birds maintained their groups, but translocated juveniles formed groups with both familiar and unfamiliar birds. Aviary holding did not improve group cohesion; instead, juveniles were less likely to associate with aviary-mates. Finally, we found that translocated juveniles that lost the most associates experienced a small but significant tendency for higher mortality. This suggests sociality loss may have represented a disruption that affected their ability to adapt to a new site.
Introduction

Reintroduction, returning species to parts of their range where they have become extinct (IUCN/SSC, 2013), is important for many conservation programmes (Armstrong and Seddon, 2008). The process of moving animals to a new site ("translocation" (IUCN/SSC, 2013)) and overcoming post-release effects

To understand how group structure and familiarity impacts on translocation success, we therefore first need to determine if groups remain together when they are moved to a new site. One challenge in wild animal groups is there may be limited knowledge of familiarity before translocation. For example, studies in New Zealand bird species (tīeke/saddleback, Philesturnus carunculatus rufusater; toutouwai/North Island robin, Petroica longipes) and howler monkeys (Alouatta seniculus) found that pre-capture familiarity was not maintained over translocation (Armstrong, 1995; Armstrong and Craig, 1995; Richard-Hansen, Vié and De Thoisy, 2000). However, these species are territorial, and the studies also defined familiarity from short-term binary measures (individuals in the same place upon capture were "familiar", versus "non-familiar"). When longer-term measures of familiarity have been used for more social groups (such as families or colonies) there is evidence that group composition remains similar before and after reintroduction (Clarke, Boulton and Clarke, 2003; Shier, 2006; Pinter-Wollman, Isbell and Hart, 2009) and that maintaining groups results in higher post-release survival (Shier, 2006).
Therefore, capturing group familiarity over a longer time period for social species may be required to assess the importance of maintaining or disrupting relationships over translocations.
Here, we use a translocation of hihi (stitchbird, Notiomystis cincta) to test fitness effects of network structure, and assess whether maintaining sociality can improve the outcome of a translocation. This species is a threatened New Zealand passerine (Birdlife International, 2017) which was once widespread across the North Island. Following the introduction of non-native predators when humans arrived in New Zealand, hihi became restricted to a single off-shore island (Hauturu-o-Toi/Little Barrier Island). Since the 1980s a major aim for conservation of this species has been to establish re-introduced populations in predator-controlled areas, and the most recent hihi translocations have involved moving juvenile birds. This cohort appears to be particularly social: juveniles form groups for several months at the end of the breeding season and interact, for example with "play"-like behaviour and allopreening.
However, it is unknown whether translocation alters these social groups or what the consequences may be for establishment of populations. We used the opportunity of a translocation in 2017 to test our predictions that: (1) translocated hihi will group with more familiar individuals from either before the translocation, or based on who they were held with during temporary captivity; (2) individuals will remain consistent in their sociality before and after translocation; and (3) any changes in social behaviour will affect survival after translocation.

In 2017 we reintroduced hihi to Rotokare Scenic Reserve ("release site", 39°27'15.4"S 174°24'33.0"E) from Tiritiri Matangi Island ("source site", 36°36'00.7"S 174°53'21.7"E). The source site is a 220 ha island scientific reserve of replanted and remnant native fauna which is free of non-native mammalian predators. Hihi were reintroduced to the island in 1995 (Armstrong and Ewen, 2001), and the population (numbering c. 270 in 2017) is now the main source of birds for ongoing translocations to other sites. The release site (230 ha, including a 17.8 ha lake) is a mainland site of old-growth native forest surrounded by a fence that excludes non-native mammalian predators. Hihi had been locally extinct at this site and in the surrounding region for c. 130 years prior to the reintroduction (Angher, 1984).
This ensured we observed associations among juveniles commonly seen at group sites and also associations with the few juveniles that did not frequent these sites (17/108 juveniles were never seen at group sites). During each one-hour survey we recorded the identities of all juveniles seen within a 10-metre radius of the observer (VF). All hihi have an individual combination of coloured leg rings (applied to nestlings during routine nest monitoring) so each could be identified by sight. We assigned juveniles to the geographical location where they were observed: 40 birds were only ever recorded in the northernmost groups ("north"), 16 at the southern end of the island ("south") and the remaining 49 mixed between the two ("mixed").
Next, we constructed a "group-by-individual" (GBI) matrix where a group comprised any juveniles seen within 15 minutes of the preceding bird. If we did not see any birds during this time, we considered the next juveniles encountered to be part of a new group. This "gambit of the group" approach (Whitehead, …) provided a more detailed measure of familiarity than a binary familiar/unfamiliar classification: each "edge" connecting two juveniles represented at least one co-occurrence in a group, so repeated co-occurrences (and stronger edge weights) would indicate that juveniles were more familiar. We detected "communities" of frequently co-occurring individuals in the network using the community detection algorithm of Clauset et al. (2004) implemented with the "fastgreedy.community" function (igraph R package version 1.0.9; Csárdi and Nepusz, 2006). We ensured that assigned communities were robust …

… visual contact between aviaries (aviaries were therefore not in auditory isolation from each other or free-living birds). Each juvenile was assigned to an aviary based on its community in the network before translocation: one aviary contained birds from one community only ("familiar" group), while the remaining two aviaries contained birds from all communities ("mixed" groups, the normal management used in previous hihi translocations). We ensured that mixing juveniles from different communities also included spatially-separated birds (i.e. only detected in northern or southern survey locations) that had little chance to interact prior to capture.
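As an aside for readers who wish to reproduce this type of analysis: the original work used the fastgreedy.community routine from the igraph R package; the minimal Python sketch below builds the same kind of weighted association network from a toy GBI matrix and applies networkx's greedy modularity routine, which implements the same Clauset et al. (2004) algorithm. The GBI values are illustrative, not hihi data.

    import numpy as np
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    # Group-by-individual matrix: rows are observed groups, columns are
    # juveniles; a 1 means the individual was recorded in that group.
    gbi = np.array([
        [1, 1, 0, 0, 1],
        [1, 1, 1, 0, 0],
        [0, 0, 1, 1, 0],
        [0, 0, 1, 1, 1],
    ])

    # Edge weights count co-occurrences in groups ("gambit of the group"):
    # repeated co-occurrence -> stronger edge -> more familiar dyad.
    weights = gbi.T @ gbi
    np.fill_diagonal(weights, 0)

    G = nx.from_numpy_array(weights)              # weighted undirected network
    degree = dict(G.degree(weight="weight"))      # weighted degree ("sociality")
    communities = greedy_modularity_communities(G, weight="weight")

    print("weighted degree:", degree)
    print("communities:", [sorted(c) for c in communities])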
All birds for translocation were caught within 24 hours, then kept in the aviaries for four further days while samples were processed for disease screening. Each aviary held equal numbers of birds. During holding we provided supplementary food twice daily, using the same range of food used in previous successful hihi translocations (Ewen et al., 2018). On the evening of the 1st April, hihi were re-caught from the aviaries, given standard health checks, and transferred to translocation boxes (five hihi per box). We transported all birds at the same time from the source site to the release site, overnight (by boat then van) to minimise stress for the birds. All hihi were released successfully the following morning (2nd April).
For each translocated hihi, we created encounter histories which represented each bird's presence ("1", seen) or absence ("0", not seen) in each successive survey or "time point". All individuals were assigned
To assess whether maintaining familiar groups during capture for translocations affected individual sociality, we calculated each translocated juvenile's change in degree rank after translocation compared to before translocation (bound between -1 and 1; a negative value represented a decrease in social rank; a positive value was a rank gain). We used a Linear Model (LM) with rank change as the response.
Our predictors included the aviary type each bird was housed in ("familiar" or "mixed" aviary) in interaction with degree before translocation (effects of aviary could depend on sociality), and sex. For this analysis, we included the number of observations both before and after translocation as fixed effects, because change in rank score (our response) could depend on variation in both numbers of observations. Again, we assessed significance of both analyses using data-stream permutations.
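For concreteness, a hedged sketch of the rank-change statistic and a data-stream permutation is given below. Published analyses (e.g. the asnipe package in R) constrain swaps by sampling period and location; this toy version only performs checkerboard swaps on the GBI matrix, which preserve group sizes and individual sighting counts.

    import numpy as np

    rng = np.random.default_rng(1)

    def degree_rank(gbi):
        """Weighted-degree rank of each individual, scaled to [0, 1]."""
        w = gbi.T @ gbi
        np.fill_diagonal(w, 0)
        deg = w.sum(axis=1)
        return deg.argsort().argsort() / max(len(deg) - 1, 1)

    def checkerboard_swap(gbi):
        """One Bejder-style swap preserving row and column totals."""
        g = gbi.copy()
        r = rng.choice(g.shape[0], 2, replace=False)
        c = rng.choice(g.shape[1], 2, replace=False)
        sub = g[np.ix_(r, c)]
        if (sub == [[1, 0], [0, 1]]).all() or (sub == [[0, 1], [1, 0]]).all():
            g[np.ix_(r, c)] = sub[::-1]           # flip the 2x2 checkerboard
        return g

    def rank_change_test(gbi_before, gbi_after, n_perm=1000):
        """Observed mean |rank change| versus a permuted-data-stream null."""
        obs = np.abs(degree_rank(gbi_after) - degree_rank(gbi_before)).mean()
        null, g = [], gbi_after.copy()
        for _ in range(n_perm):
            g = checkerboard_swap(g)              # sequential swaps -> null stream
            null.append(np.abs(degree_rank(g) - degree_rank(gbi_before)).mean())
        p = (np.sum(np.array(null) >= obs) + 1) / (n_perm + 1)
        return obs, p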
We did not include covariates in this starting model as there is currently no method for GOF testing with … Figure 2). Additionally, translocated juveniles did not associate more strongly if they had shared an aviary, even when they had been familiar at the source site; in fact, there was a tendency for a weak disassociation by aviary (Table 2c; r = -0.09, Prand = 0.04, Figure 2).
Individual sociality was not consistent: more social juvenile hihi before translocation were not more social after the translocation at either the source site or release site (Table 3a, Figure 3a). Post-translocation social ranks did not differ between males and females (Table 3a) and also did not vary depending on how many times a bird was re-sighted any more than expected by random chance (Table 3a). Among translocated hihi, some birds experienced greater degree rank changes than others (greatest rank gain = +0.59; greatest rank loss = -0.68) but this was not predicted by their degree rank before translocation (both more- and less-sociable individuals were equally likely to change rank; Table 3b, Figure 3b).
Individual degree rank was not preserved by holding a juvenile with its familiar group-mates in an aviary during the translocation (no significant difference in degree rank change between birds housed in familiar and mixed aviaries; Table 3b, Figure 3b). Finally, the extent of rank change was not significantly different between males and females (Table 3b), and again was not significantly affected by re-sighting before or after translocation compared to permuted networks (Table 3b).

Table 3. Results of (a) GLM analysing variation in post-translocation degree ranks and (b)
Associates may be particularly important when individuals need to rely on social information more: for example, when they have little personal information, such as following reintroduction to a new site
Importantly, pre- and post-event sociality was not consistent for each foal, and post-event sociality was especially important for survival, which suggests the current social environment conferred the strongest advantages (Nuñez, Adelman and Rubenstein, 2015). In hihi, we found similar patterns, as relative pre- and post-translocation sociality did not remain consistent for both translocated individuals and birds that remained in the source environment (but which did experience social disruption through the removal of peers).
In our study, however, changes in sociality only had costs for survival when additionally associated with | 2019-04-03T13:07:31.479Z | 2018-09-29T00:00:00.000 | {
"year": 2018,
"sha1": "1f8dafec4388624aacf6cd6f7ae3e7c0f0ba3681",
"oa_license": "CCBY",
"oa_url": "https://zslpublications.onlinelibrary.wiley.com/doi/pdfdirect/10.1111/acv.12557",
"oa_status": "GREEN",
"pdf_src": "BioRxiv",
"pdf_hash": "6fc0525848c9a54a2866110652dbd3711ae79f4d",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Biology"
]
} |
246858425 | pes2o/s2orc | v3-fos-license | Polymer Based Devices for Photonics on the Chip
Daniel Jandura - Peter Gaso - Dusan Pudis

In this paper we present a promising technology for the preparation of photonic devices based on polymer materials on the chip. We designed 2D and 3D structures in CAD (computer-aided design) software and used the two-photon polymerization mechanism for direct writing of these structures into IP-Dip photoresist material. The paper also describes experimental procedures for the preparation of polymer photonic devices on the chip and a new way of coupling light into devices on the chip. Morphological properties of the prepared devices were investigated by scanning electron microscope (SEM). As a result, a transmission spectrum characteristic was measured.
Introduction
Integrated optical structures are the fundamental building blocks for applications in optical communications and optical sensing. They also offer interesting perspectives for integrated quantum optics on a chip. At present, however, they are mostly fabricated using essentially planar fabrication approaches like electron-beam lithography [1], UV lithography [2] or focused ion beam (FIB) etching [3]. All of these processes are used to prepare 2D or 2.5D structures. Nowadays, 3D optical devices are very promising for on-chip applications [4]. Preparation of 3D devices at the nanoscale requires technological equipment with a femtosecond laser for two-photon polymerization of a photoresist master.
While silicon photonics is a very widespread field, polymer materials have nowadays become very attractive in photonics and micro-optics because of their very good material and optical properties. Various optical devices based on polymer materials have been presented [5-8]. We focused on the preparation of polymer devices using a modern three-dimensional (3D) laser lithography system. The design, preparation and morphological inspection of photonic devices are presented in this paper.
Experimental
For the fabrication of photonic devices based on polymer material, a Photonic Professional GT system was used in our experiments (Fig. 1a). The principle of this DLW (direct laser writing) system is based on 3D scanning of a focused laser beam in the sample volume [9]. The system uses an Er-doped femtosecond frequency-doubled fiber laser emitting pulses at 780 nm wavelength with approximately 100 MHz repetition rate and 150 fs pulse width [10,11]. The laser beam is scanned in the x and y plane by a high-resolution galvanometer mirror system. Movement in the z-axis using a motorized micro stage [11] allows the preparation of complex micro 3D structures layer by layer directly inside the photosensitive medium. In our experiments, we used the laser lithography system in a configuration with a 63x immersion objective (Fig. 1b). IP-Dip serves as immersion medium and photosensitive material at the same time, the microscope objective being dipped into this liquid photoresist. Due to its refractive index being matched to the focusing optics, IP-Dip guarantees ideal focusing and hence the highest resolution for DiLL (Dip-in Laser Lithography) [11,12].

The technology of 3D structure preparation consists of several steps. First, we deposited a drop of IP-Dip photoresist on a clean fused silica glass substrate (Fig. 2a) and the sample was turned upside down. It remained in this position during the whole DLW process. For photoresist exposure, we used ultrafast laser pulses which caused two-photon polymerization in the photoresist volume (Fig. 2b). The writing process has to start at the substrate surface in order to bond the structure to the substrate; otherwise, polymerized parts can be washed away from the substrate. Finally, the sample was developed in PGMEA (propylene glycol monomethyl ether acetate) developer for 20 min and rinsed in isopropyl alcohol, and the polymerized parts remained bonded to the substrate (Fig. 2c). A precise piezo positioning stage can be used for moving the sample in the vertical dimension. This enabled us to prepare 3D structures using two-photon polymerization in the vertical dimension. As a demonstration, we designed helical photonic crystals with a period of 3 μm and a height of 8 μm. The final prepared helical photonic crystal, with a width of 700 nm, is shown together with the corresponding CAD model in Fig. 3.

Results and discussion

The DiLL configuration used in our experiment enables us to prepare structures over a large area, on the order of several cm², with 200 nm resolution when the 63x objective is used. For the preparation of structures on the micrometric scale we used a higher laser power and a coarse hatching distance.

We designed a ring resonator structure in racetrack configuration with a radius of the curved (ring-like) portions of the racetrack resonator R of 100 µm and a racetrack length ΔL of 750 µm. The length of the resonator has a great influence on the transmission properties. Of the light propagating in the waveguide, only the part whose wavelength fulfils the resonance condition is coupled into the ring. This coupled light forms a standing wave pattern in the ring resonator when

m·λ_m = n_eff·(2πR + 2ΔL),

where m is an integer and n_eff is the effective refractive index of the waveguide. The frequency separation between two successive resonances is expressed by the free spectral range (FSR),

FSR = λ² / [n_eff·(2πR + 2ΔL + 2L_C)],

where λ is the wavelength of the transmitted signal and L_C is the coupling length between the resonator and the waveguide bus (which is zero for a point-coupled ring). This coupling region between the waveguide bus and the ring resonator has a great influence on the quality factor of the ring resonator. The gap between the waveguides in the coupling region was designed to be 200 nm. The ring resonator consists of waveguides with an asymmetrical refractive index alignment: the refractive index of the IP-Dip core is 1.54 and the bottom glass cladding has a refractive index of 1.44, while the waveguide is surrounded by air from above. Assuming these parameters and a transmission wavelength of 1550 nm, we calculated a value of FSR = 2 nm.

CAD software was used to design our waveguide structures with the following parameters: the waveguide height was designed to be 3 µm, the width 3.5 µm, and the slope angle of the sidewalls 90°. A SEM (scanning electron microscope) image of the ring resonator prepared in racetrack configuration in IP-Dip polymer with the designed parameters is shown in Fig. 4. Very good stitching properties were achieved, as documented in this figure.

3D lithography allows us to use a non-conventional way of coupling light into and out of the waveguide. The ends of the waveguide are not perpendicular walls; instead, they were modified into a 45° slope (Fig. 5). The light can be coupled from an optical fiber into the optical structure and, vice versa, from the structure into a detection fiber, due to total internal reflection at the interface between the waveguide and the air.

Fig. 5 SEM image of the end of the waveguide modified to a 45° slope

Finally, we investigated the optical transmission properties of the prepared racetrack resonator. We coupled light from a LED source with a central emission wavelength of 1550 nm through a single-mode fiber to the waveguide and observed the output light using an optical spectrum analyzer (Fig. 6). The transmission spectral characteristic was measured from the same channel and is shown in Fig. 7. In this characteristic, typical resonance dips were observed. The transmission spectrum corresponds to our calculations, where a 2 nm free spectral range was measured.
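The calculated FSR can be checked numerically. The short Python sketch below evaluates the reconstructed relations for the stated parameters; note that the formulas above are the standard ring-resonator expressions (our reconstruction, since the original equations were lost in typesetting), and that the value obtained depends on which round-trip length and which index (effective versus group) enters the FSR.

    import math

    n_eff = 1.54          # IP-Dip core index, used here as the effective index
    R = 100e-6            # bend radius [m]
    dL = 750e-6           # racetrack length [m]
    L_C = 0.0             # point coupling assumed
    lam = 1550e-9         # transmission wavelength [m]

    L_rt = 2 * math.pi * R + 2 * dL          # geometric round-trip length
    m = round(n_eff * L_rt / lam)            # nearest resonance order
    lam_m = n_eff * L_rt / m                 # resonant wavelength near 1550 nm
    fsr = lam**2 / (n_eff * (L_rt + 2 * L_C))

    print(f"m = {m}, lambda_m = {lam_m * 1e9:.2f} nm, FSR = {fsr * 1e9:.2f} nm")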
Conclusion
In this paper, we highlighted a promising technology for the preparation of photonic devices based on polymer materials. We presented a non-conventional way of coupling light into waveguide devices prepared on fused silica substrates, which enables us to measure the optical spectral characteristics of the prepared devices. This also gives the possibility to integrate more devices on a chip with vertical inputs and outputs. For the preparation of the photoresist master we used 3D laser lithography. We demonstrated the possibilities of this preparation technology on several 2D and 3D devices. The morphological inspection of our structures showed a 90° slope angle of the sidewalls. The optical spectrum measurement of the racetrack resonator showed typical resonance dips. The final devices are promising for lab-on-a-chip and sensing applications due to their unique optical, elastic and chemical properties.
Acknowledgement
This work was supported by the Slovak National Grant Agency under the projects No. VEGA 1/0491/14 and 1/0278/15 and the Slovak Research and Development Agency under the project No. APVV 0395-12. This work was co-funded from EU sources and European regional development through project ITMS 26210120021. | 2022-02-16T16:14:53.735Z | 2017-09-30T00:00:00.000 | {
"year": 2017,
"sha1": "2bf787717ff9760f723e538314bced7f47204d8c",
"oa_license": "CCBY",
"oa_url": "http://komunikacie.uniza.sk/doi/10.26552/com.C.2017.3.16-20.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "6debe8d6b2d926409bc60158285850fecaae56c7",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
267395074 | pes2o/s2orc | v3-fos-license | Jetting bubbles observed by x-ray holography at a free-electron laser: internal structure and the effect of non-axisymmetric boundary conditions
In this work, we study the jetting dynamics of individual cavitation bubbles using x-ray holographic imaging and high-speed optical shadowgraphy. The bubbles are induced by a focused infrared laser pulse in water near the surface of a flat, circular glass plate, and later probed with ultrashort x-ray pulses produced by an x-ray free-electron laser (XFEL). The holographic imaging can reveal essential information of the bubble interior that would otherwise not be accessible in the optical regime due to obscuration or diffraction. The influence of asymmetric boundary conditions on the jet’s characteristics is analysed for cases where the axial symmetry is perturbed and curved liquid filaments can form inside the cavity. The x-ray images demonstrate that when oblique jets impact the rigid boundary, they produce a non-axisymmetric splash which grows from a moving stagnation point. Additionally, the images reveal the formation of complex gas/liquid structures inside the jetting bubbles that are invisible to standard optical microscopy. The experimental results are analysed with the assistance of full three-dimensional numerical simulations of the Navier–Stokes equations in their compressible formulation, which allow a deeper understanding of the distinctive features observed in the x-ray holographic images. In particular, the effects of varying the dimensionless stand-off distances measured from the initial bubble location to the surface of the solid plate and also to its nearest edge are addressed using both experiments and simulations. A relation between the jet tilting angle and the dimensionless bubble position asymmetry is derived. The present study provides new insights into bubble jetting and demonstrates the potential of x-ray holography for future investigations in this field. Supplementary Information The online version contains supplementary material available at 10.1007/s00348-023-03759-9.
Introduction
Oscillating and collapsing bubbles are main agents of erosion, surface cleaning, or other surface modifications by cavitation. A bubble collapse in a disturbed geometry will not proceed as under spherically symmetric conditions, and in many cases a liquid jet flow will emerge that penetrates the bubble interior gas phase during implosion (Lauterborn and Kurz 2010). The non-spherical collapse and jetting can have different origins, like neighbouring objects or phase boundaries, adjacent bubbles, a liquid flow, gravitational acceleration or the pressure gradient produced by a strong acoustic field (e.g. a shock wave) (Philipp and Lauterborn 1998; Lindau and Lauterborn 2003; Fujiwara et al. 2007; Supponen et al. 2015; Han et al. 2015; Sankin et al. 2005). Although many experimental and numerical studies have been dedicated to the investigation of the jetting phenomenon, it is sufficiently complex that many parts are still not fully understood. Reasons for experimental challenges for optical imaging comprise the small spatial scales and fast dynamics, but also the lack of a clear visualization of the gas cavity interior as a result of the reflections and scattering of the illuminating light at liquid-gas interfaces (Koch et al. 2021). The highly curved surfaces of jetting bubbles deflect the light which actually goes into and through the cavity, producing a significant distortion of the observed internal liquid filaments. These effects become increasingly relevant as the characteristic dimensions of the deformed cavity are reduced, and only a small fraction of the bubble is visible around its symmetry axis, where the light rays impact the interface almost perpendicularly (Lauterborn and Bolle 1975; Rosselló et al. 2018). So far, the interior of a jetting bubble was only accessible for cases where the bubbles are produced in a transparent liquid, their expanded size is typically above 1 mm, and a particularly intense multi-directional illumination is used (Lindau and Lauterborn 2003; Supponen et al. 2015; Koch et al. 2021). Even in those cases, the rather poor contrast of the images was not enough to clearly resolve the jet contour, or only a part of it was discernible.
In the last decade, advances in high-speed x-ray imaging opened up an exciting range of possibilities as a novel experimental tool applied to research topics like: the propagation of shock waves in solids or liquids (Escauriza et al. 2020; Vassholz et al. 2021; Hagemann et al. 2021; Hodge et al. 2022; Vassholz et al. 2023) and their interaction with a gas cavity (Olbinado et al. 2017; Hodge et al. 2022; Montgomery 2023), ultrasound-driven bubbles (Ehsani et al. 2021; Biasiori-Poulanges et al. 2023), fluid dynamics of cavitating flows in channels (Vabre et al. 2009; Lee et al. 2011; Morgan et al. 2013; Khlifa et al. 2017; Strucka et al. 2023) or in a bursting capillary (Vagovič et al. 2019), or bubble jetting (Hoeppe 2020; Bokman et al. 2023). X-ray imaging stands out from most optical techniques as one can probe complex gas-liquid interfacial structures, like the ones found in jetting bubbles, free of image distortions caused by refraction at boundaries (e.g. at curved interfaces). Furthermore, it can also be applied to optically opaque liquids.
The generally weak absorption of hard x-rays in matter allows one to visualize the interior of the gas cavities by exploiting the phase shift induced by the sample, which, after a suitable distance of free-space propagation, leads to the formation of edge enhancement or Fresnel fringes when coherent x-ray beams are applied. The phase shift is proportional to the electron density projected along the optical path through the sample, which in turn depends on the mass density. This technique of x-ray phase-contrast holography is well developed in the material and life sciences, providing spatial resolution from the micron scale down to 20 nm (Bartels et al. 2015). See Appendix A for a more extensive description of x-ray phase-contrast image formation. The advance of hard x-ray free-electron lasers (XFELs) now provides ultrashort x-ray pulses with sufficient numbers of photons to enable single-pulse imaging, and with that the investigation of fast hydrodynamic processes.
The erosion produced on a solid surface after the impact of a bubble jet is a well-documented phenomenon which has attracted the attention of scientists and industry for decades (Knapp et al. 1970; Karimi and Martin 1986; Philipp and Lauterborn 1998). In spite of that, the exact mechanisms leading to cavitation erosion are still under debate (Dular et al. 2019; Lechner et al. 2019; Koch et al. 2021; Reuter and Ohl 2021; Reuter et al. 2022; Dular and Ohl 2023), pointing to the localized high pressure produced upon jet impact on the surface as a possible cause for the observed damage. The impact of a bubble jet can also lead to piercing of biological tissue (Brujan et al. 2001; Ohl et al. 2006; Yuan et al. 2015) and soft gels (Rosselló and Ohl 2022), which is particularly important in medical applications like eye laser surgery or lithotripsy. The jet impact pressure is related to its speed, which in turn has a strong dependence on the dimensionless stand-off distance of the collapsing bubble to the surface, defined as γ = d/R_max (Lauterborn and Bolle 1975; Lauterborn et al. 2018), with d representing the distance from the bubble centre to the boundary and R_max the maximum radius of the bubble. The impact of the jet and the resultant splash are also relevant in processes that use cavitation for surface cleaning. The splash produced after the jet has impacted the opposite bubble wall is formed when the spreading liquid of the jet meets the inflow that results from the proceeding bubble collapse (Lindau and Lauterborn 2003). This phenomenon was first described in Blake et al. (1998); Tong et al. (1999). Recent literature terms it the Blake splash (Lauterborn et al. 2018; Bokman et al. 2023).
In addition to the mechanical properties of the neighbouring surface, the jetting dynamics are also influenced by the specific boundary geometry. Some good examples of the latter can be found in cases where the bubble collapses near corners or mixed boundaries (Tagawa and Peters 2018; Kiyama et al. 2021; White et al. 2023; Li et al. 2023), sharp edges (Zhang et al. 2020), small platforms (Tomita et al. 2002; Koch et al. 2021; Kadivar et al. 2021; Lechner et al. 2023) or inclined walls (Molefe and Peters 2019; Wang et al. 2020). Those studies demonstrate the effect of the boundary geometry on the jet direction, which is essentially determined by the relative stand-off distances from the bubble to the nearest flat surfaces. As the jetting bubble interior is not visible in most shadowgraphs, the jet direction becomes evident only through the liquid filament that pierces through the cavity in its re-expansion phase. In cases with a reduced stand-off distance, e.g. γ ≲ 1, the bubbles expand up to the point of touching the neighbouring surfaces, and as a consequence their shape becomes highly distorted. Due to the optical obscurity of the bubble interior, however, many aspects of the jetting dynamics in those cases remain unknown. Recently, Wang et al. (2020) and Li et al. (2023) employed a numerical boundary integral method (BIM) to study the jetting of bubbles near corners. Interestingly, their results suggest that the bubble jet can follow a curved trajectory as it penetrates the gas cavity (Wang et al. 2020). The limited number of works dealing with this kind of asymmetric case can be explained by the technical challenges and the requirement of complex full 3D simulations, which demand a significant amount of computational resources.
In the present study we intend to clarify internal structures of collapsing laser-induced bubbles in the proximity of a solid boundary using time-resolved x-ray holographic imaging at the MID station of the European X-Ray Free-Electron Laser (EuXFEL) facility (Madsen et al. 2021). A comprehensive study of the dependence of bubble/jet dynamics on γ for the axisymmetric case can be found in a recent numerical work by Lechner et al. (2020). Additionally, images of the interior of a jetting bubble under similar conditions were presented by Koch et al. (2021) and Bokman et al. (2023). In this ideal case, the solid boundary has close to infinite dimensions and the bubble volume is negligible compared to the volume of the liquid bulk. Here, we pay special attention to the non-ideal cases found when a bubble collapses in "real life" situations, such as the mostly unexplored cases where the bubble collapses under non-axisymmetric boundary conditions (e.g. near the corner of a container or the border of a plate). These scenarios are expected to be more representative of generic and less-controlled environments, making their investigation essential for elucidating the mechanisms contributing to cavitation erosion.
Experimental methods
In brief, the experiments consist of generating individual laser-induced cavitation bubbles in the proximity of a solid plate and then capturing synchronous images of their jetting with an optical camera and x-ray holography from two orthogonal directions. An overview of the experimental setup at the Materials, Imaging and Dynamics (MID) instrument of the European XFEL (Madsen et al. 2021) employed in this study is presented in Fig. 1a. The setup was operated at ambient conditions.
An infrared laser pulse (Litron Nano L 200-10, Q-switched Nd:YAG, λ = 1064 nm, FWHM pulse duration of 6 ns, pulse energy of 17 mJ) is focused with a numerical aperture of NA = 0.2 inside an acrylic cuvette filled with deionized water to produce the bubble. The collinear alignment of the laser-induced cavity and the x-ray light was ensured by reflecting the pump laser beam on a drilled mirror placed at 45° (hole diameter ∼1.2 mm), which allows the x-rays to reach the liquid-filled cuvette as detailed in Fig. 1a. The distance between the x-ray detector and the sample, i.e. 9.67 m, is covered by an evacuated flight tube, which is tightly sealed with diamond and kapton windows.
The acrylic rectangular cuvette has interior dimensions of 12 mm × 15 mm × 36 mm and was equipped with a 5 mm cylindrical inlet at its bottom used to add or remove the liquid while holding the cuvette in a fixed position. Two circular windows with a diameter of 7 mm made from thin glass (∼150 μm thickness) are placed at middle height of the cuvette to allow both the IR laser and the x-ray beam to reach the liquid without damaging the container walls and with minimum energy absorption, see Fig. 1b and c.
In the experiments the boundary condition for the laser-induced bubble was given by a circular glass plate with a diameter of 8 mm that was initially placed with its centre aligned with the glass windows (and the x-ray beam) at the geometrical centre of the cuvette. This disc could be arranged in one of two orientations in order to visualize the bubble jetting from two different perspectives. In one case, described in Fig. 1b, a disc with a thickness of around 1 mm is held from below with its surface normal oriented orthogonal to the x-ray beam, which allows observing the jetting bubbles from the side (perpendicular view). In the second case, a thinner glass plate (150 μm) is placed with its normal parallel to the propagation direction of the x-rays (and laser pulse) (parallel view), as depicted in Fig. 1c. The cuvette and the glass plate holders (made from polyester, PLA) were mounted on independent XYZ-axis linear piezo stages (SLC 1730, SmarAct, Germany) that allow a precise positioning of the disc with a spatial resolution of 4 nm.
The high-speed videos of the bubble dynamics were recorded with a Photron Fastcam SA5 camera in combination with a long-distance microscope Infinity K2 equipped with a CF-3 lens. The camera frame rate was set to 75000 fps and the exposure time to 1 μs in all the measurements. The high-intensity back-illumination needed for producing the shadowgraphs was provided by a Sumita LS-M352 halogen lamp.
Ultrashort XFEL pulses with a duration below 100 fs were delivered to the MID instrument (Madsen et al. 2021) with a 10 Hz repetition rate (single-bunch mode). XFEL pulses were generated at the SASE2 undulator line of the European XFEL, with an electron energy of 16.5 GeV. The photon energy was 17.8 keV, corresponding to a wavelength of 0.07 nm. The mean pulse energy during the beam time was ∼700 μJ. In this experiment no monochromator was used, so a SASE bandwidth of ≈ 10⁻³ is assumed (Geloni et al. 2010).
Single-pulse x-ray phase-contrast holography was performed on the cavitation bubbles in an in-line geometry with an unfocused, parallel x-ray beam. The diameter of the beam was chosen to be approximately 1 mm. The x-ray holograms were recorded with a scintillator-based fibre-coupled sCMOS camera (Andor Zyla 5.5) with a pixel size of 6.5 μm, placed at a distance of 9.67 m from the sample. In this context, the imaging regime with a Fresnel number of F_pix = 0.062 is a near-field holographic regime, where strong edge enhancement and Fresnel fringes are observed. Holographic contrast is formed by interference during free-space propagation of the XFEL pulse up to the detector. It is worth noting that contrast in the detected image is predominantly related to the object's phase shift, while its absorption plays a minor role. We further illustrate the image formation and its processing in Appendix A. The x-ray pulses were synchronized with the CCD trigger in the optical camera with a precision of ∼100 ns. Due to the fast bubble dynamics, the bubble shape might suffer a noticeable change during the image acquisition time of 1 μs set in the optical camera; consequently, the events shown in the optical images might slightly differ from the x-ray holographic images, which are taken in a much shorter temporal window (i.e. 100 fs).
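As a quick consistency check of these imaging parameters, the quoted wavelength follows from the photon energy via λ = hc/E, and the pixel Fresnel number from F_pix = Δx²/(λz) (this definition is our assumption, but it reproduces the quoted value of 0.062):

    HC = 1.23984e-6        # hc in eV*m
    E = 17.8e3             # photon energy [eV]
    lam = HC / E           # wavelength [m], ~0.070 nm as stated in the text

    px = 6.5e-6            # detector pixel size [m]
    z = 9.67               # sample-detector propagation distance [m]
    F_pix = px**2 / (lam * z)

    print(f"lambda = {lam * 1e9:.4f} nm, F_pix = {F_pix:.3f}")   # ~0.0697 nm, ~0.062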
The timing and synchronization scheme of this experiment is described in detail in the work of Osterhoff et al. (2021). In short, a pump-probe experiment is performed, where a new cavitation bubble is seeded by the optical laser for every XFEL probe pulse, delivered at a 10 Hz repetition rate. Additionally, an optical high-speed video sequence is recorded for each event. All devices were triggered using electronic delays relative to a master XFEL trigger.
Numerical methods
The experimental x-ray holographic images were compared to numerical simulations computed with the Finite Volume method (Ferziger and Perić 1997). The computational fluid dynamics solver is built from the segregated, pressure-based, two-phase compressibleInterFoam solver (Miller et al. 2013) within the OpenFOAM package (Jasak 1996; OpenFOAM-v2006 2020), precisely the foam-extend fork. The numerical simulations were performed using a full three-dimensional mesh of the cuvette used in the experiments, which allowed us to study the physics behind the x-ray images of the jetting bubbles, as well as to interpolate and extrapolate cases not covered by the experiments.
Definition of dimensionless parameters
In the following, the bubbles are characterized by two dimensionless numbers computed from the geometry as illustrated in Fig. 2: the previously mentioned normalized stand-off distance of the bubble centre to the solid surface, γ = d/R_max, and a normalized stand-off distance of the bubble centre to the border of the disc, defined as γ_r = (R_D − r_b)/R_max. Here, r_b denotes the radial bubble seeding position measured from the centre of the circular glass plate, and R_D is the radius of the disc. In this scaling, the bubble is located just above the edge for γ_r = 0, and a bubble at the symmetry axis would mean γ_r = R_D/R_max. In the experiments, the maximum radius R_max was measured by fitting a circle to the upper half of the optical images of the bubbles (i.e. the part located away from the surface). In the simulations, a similar method was employed for the sake of consistency. It can be assumed that the absolute size of the disc does not have a significant impact on the jetting dynamics as long as R_D ≫ R_max. Consequently, the use of a dimensionless parameter like γ_r that does not account for the exact disc size is justified.
The temporal evolution of the cavities is compared by computing a prolongation factor of the collapse time, defined as the time measured from the laser shot (t = 0) to the moment where the gas phase reaches its minimum volume (t(V_min)), normalized by two times the Rayleigh collapse time of an empty spherical bubble of the same maximum radius, 2T_c(R_max) = 1.829 R_max √(ρ_0/p_∞), where ρ_0 and p_∞ denote the density and pressure of the liquid. Additionally, we defined a normalized maximum volume, V*_max, as the ratio of the bubble gas volume and the volume of an unbounded cavity produced under similar conditions.
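These definitions translate directly into code; the following sketch (illustrative values only) evaluates γ, γ_r and the Rayleigh-based normalization for the typical experimental scales quoted later (R_max ≈ 500 μm, R_D = 4 mm, r_b ≈ 3 mm):

    import math

    def gamma(d, R_max):
        """Normalized stand-off distance to the solid surface."""
        return d / R_max

    def gamma_r(R_D, r_b, R_max):
        """Normalized stand-off distance to the edge of the disc."""
        return (R_D - r_b) / R_max

    def rayleigh_collapse_time(R_max, rho0=998.0, p_inf=101325.0):
        """Rayleigh collapse time T_c of an empty spherical bubble
        (so that 2*T_c = 1.829*R_max*sqrt(rho0/p_inf))."""
        return 0.9145 * R_max * math.sqrt(rho0 / p_inf)

    R_max = 500e-6
    print(gamma(480e-6, R_max))                       # ~0.96
    print(gamma_r(4e-3, 3e-3, R_max))                 # ~2.0
    print(2 * rayleigh_collapse_time(R_max) * 1e6)    # 2*T_c ~ 91 microseconds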
Bubble model, CFD solution method, mesh and initial conditions
The numerical model and solution method are briefly sketched here; they have been developed and validated with experimental data over the past decade (e.g. see Refs. Koch et al. 2016; Lechner et al. 2017, 2019, 2020; Koch et al. 2021). The bubble contains a small amount of non-condensable gas; the vapour pressure is neglected. While hydrodynamically nucleated cavitation bubbles are composed mainly of vapour, laser-induced bubbles like the ones produced in this experiment contain mostly non-condensable gases. These gases appear due to molecular dissociation-recombination reactions from the plasma. Presumably, the main gas is hydrogen (Maatz et al. 2000), but probably also oxygen and other recombination products exist (e.g. see Table 1 in Schanz et al. (2012)). Therefore, the neglect of vapour is justified for the cases presented. The non-condensable gases are summarized as an ideal gas with a polytropic exponent of 1.4 in the model. The model further assumes a cold liquid, i.e. a liquid far from its boiling point, with the following properties. The gas and also the liquid are taken as compressible in order to allow for a realistic modelling of sound and shock wave emissions and the respective losses during collapse. Viscosity is included in both fluids. Thermodynamic effects and mass exchange through the bubble interface are neglected. Gravitational effects can be omitted due to the small size and fast dynamics of the bubble.
The equations of motion of the two-phase flow are formulated in the "one-fluid" approach, i.e. with one density field ρ(x⃗, t), one velocity field U⃗(x⃗, t), and one pressure field p(x⃗, t), satisfying the Navier-Stokes equation and the continuity equation for compressible fluids. Although surface tension plays a minor role for the cases considered here, it is included in the momentum equation via a volume force term. The surface tension coefficient of water is set to σ = 0.0725 N m⁻¹. In order to distinguish between liquid (l) and gas (g), a volume fraction field α(x⃗, t) is introduced with α = 1 in the liquid phase and α = 0 in the gas phase. The position of the interface is then given implicitly by the transition of α from 1 to 0. The dynamic viscosities μ_l of the liquid and μ_g of the gas are taken to be constant. The equations of motion are closed by the equations of state for the gas and the liquid. For the gas in the bubble, the change of state is assumed to be adiabatic, i.e. ρ_g(p)/p^(1/γ_g) = const., with γ_g = 1.4 the ratio of the specific heats of the gas (air). For the liquid, the Tait equation of state for water is used: ρ_l(p)/(p + B)^(1/n_T) = const., where the Tait exponent is n_T = 7.15 and the Tait pressure is B = 305 MPa. The simulations are carried out in full 3D, with the computational mesh of the same size as the cuvette in the experiment. The originally coarse mesh with cells in Cartesian orientation is refined in the region of the circular disc shown in Fig. 1b. The disc is then cut out of the mesh, as detailed in Appendix C. The height of the disc is assumed to be 1 mm. The coordinate origin is located at the top centre of the disc (matching the centre of the rectangular cuvette). The bubble site at t = 0 is set to be at (x = 0, y = −r_b, z = d), as defined in Figs. 1a and 20. The mesh is refined in concentric spherical regions around the bubble site, starting from a distance of 3.5 mm for the first (outermost) refinement and ending at the 7th (innermost) refinement region with a distance of 30 μm from the bubble centre at t = 0. The mesh is static and does not change over time for one simulation.
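The two closures are simple enough to express directly; the following sketch implements the adiabatic gas law and the Tait equation with the constants given above (the reference densities are our assumptions for ambient conditions):

    GAMMA_G = 1.4          # ratio of specific heats (gas)
    N_T = 7.15             # Tait exponent (water)
    B = 305e6              # Tait pressure [Pa]
    P_REF = 1e5            # ambient reference pressure [Pa]
    RHO_G0 = 1.2           # gas reference density [kg/m^3] (assumed)
    RHO_L0 = 998.0         # liquid reference density [kg/m^3] (assumed)

    def rho_gas(p):
        """Adiabatic ideal gas: rho_g(p) / p**(1/gamma_g) = const."""
        return RHO_G0 * (p / P_REF) ** (1.0 / GAMMA_G)

    def rho_liquid(p):
        """Tait equation of state: rho_l(p) / (p + B)**(1/n_T) = const."""
        return RHO_L0 * ((p + B) / (P_REF + B)) ** (1.0 / N_T)

    def sound_speed_liquid(p):
        """Speed of sound c = sqrt(dp/drho) for the Tait law."""
        return (N_T * (p + B) / rho_liquid(p)) ** 0.5

    print(rho_liquid(100e6))        # ~1040 kg/m^3 at 100 MPa
    print(sound_speed_liquid(1e5))  # ~1480 m/s at ambient pressure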
The minimum cell size is 3.1 μm in edge length. The total number of cells is half a million with this approach. The resolution is sufficient to capture the bubble dynamics including the jetting. A convergence test showed that neither the bubble jetting dynamics nor the velocity of the jet experience significant changes when the cell size is decreased or the region of finer cells is increased in size. The peak pressure values of shocks, on the other hand, are likely underestimated with the present resolution. However, this is acceptable since the formation and propagation of shock waves are not intended to be extensively explored in this study.
The boundaries of the cuvette are set to be approximately non-reflecting (Poinsot and Lele 1992; OpenFOAM Wiki 2010). This is possible because the cell size at the cuvette walls is significantly larger than the size of a shock front. This results in the broadening of shock waves in space, potentially leading to erroneous results when they are reflected back to the bubble region. The solid boundary (i.e. the circular disc) has a vanishing normal gradient for the pressure and volume-fraction fields, as well as a no-slip condition for the velocity.
In the experiments, a typical laser bubble might initially exhibit some asphericity due to the slightly elongated shape of the laser plasma. However, the bubble quickly loses most of this initial asymmetry during expansion, partly due to surface tension. The effect of the initial bubble shape on the bubble jetting dynamics is negligible when compared to the influence of the nearby wall. Therefore, for simplicity, we initiated the laser-induced cavity as a spherical bubble seed with a radius of R_init = 20 μm at the laser focus site. The internal pressure in the bubble is much higher than the pressure in the surrounding liquid, such that it expands rapidly. For the simulations in Figs. 4, 5 and 7, the parameters d and r_b were read from the experiments; however, r_b as well as the initial pressure of the bubble was adjusted so that there is a best match between the dynamics of the numerical bubble and the experimentally observed one. For a larger numerical parameter study over r_b for a fixed d = 480 μm (compare Figs. 10, 11 and 12), the pressure in the bubble is set to 1.1 GPa, which has been chosen in such a way that the bubble would attain a maximum radius of ∼500 μm in unbounded liquid (R_max,ub), corresponding to a volume V_max,ub = 9.187 · 10⁻⁹ m³. The maximum bubble size, however, turns out to decrease slightly as the bubble seeding position approaches the solid boundary (see also, for example, Figure 2 in Ref. (Koch et al. 2023)).
The liquid is set at rest with an ambient pressure of 1 bar.The adaptive time step is limited by the flow Courant number with a maximum value of 0.2.
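To give a feeling for how the initial conditions map onto the maximum bubble size, a rough estimate can be obtained by integrating the incompressible Rayleigh-Plesset equation with the stated seed parameters; this is only a sketch (the paper's compressible 3D solver additionally includes acoustic losses and the nearby boundary, both of which lower R_max):

    import numpy as np
    from scipy.integrate import solve_ivp

    RHO, P_INF = 998.0, 1e5          # liquid density [kg/m^3], ambient pressure [Pa]
    SIGMA, MU = 0.0725, 1.0e-3       # surface tension [N/m], dynamic viscosity [Pa*s]
    GAMMA_G = 1.4                    # polytropic exponent of the bubble gas
    R0, P0 = 20e-6, 1.1e9            # seed radius [m] and initial gas pressure [Pa]

    def rayleigh_plesset(t, y):
        R, Rdot = y
        p_gas = P0 * (R0 / R) ** (3 * GAMMA_G)          # adiabatic bubble content
        Rddot = ((p_gas - P_INF - 2 * SIGMA / R - 4 * MU * Rdot / R) / RHO
                 - 1.5 * Rdot ** 2) / R
        return [Rdot, Rddot]

    # Integrate through the expansion phase only (the maximum radius is
    # reached well before the first collapse).
    sol = solve_ivp(rayleigh_plesset, (0.0, 60e-6), [R0, 0.0],
                    max_step=2e-9, rtol=1e-8)
    print(f"R_max ~ {sol.y[0].max() * 1e6:.0f} um")     # a few hundred micrometres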
Results and discussion
The results concerning the jetting dynamics of bubbles expanding and collapsing above a glass disc are presented through a combination of experimental x-ray holographic images, synchronous high-speed optical images, and renders from three-dimensional numerical simulations. Section 4.1 introduces the peculiarities of bubble imaging by x-ray holography and the image interpretation. Section 4.2 represents the core of this work, which is focused on studying the effect of an off-centred bubble position on the jetting direction for bubbles produced at different stand-off distances γ. Subsequently, the effect of varying the distance of the bubble to the border of the disc is discussed in Sect. 4.3. Furthermore, some remarkable findings on internal gas/fluid structures, only visible under the x-ray illumination, are detailed in Sects. 4.4 and 4.5.
In the experiments, R_D = 4 mm, and the bubbles were produced at radial distances around r_b = 3 mm, which results in γ_r ∼ 2 for R_max ≈ 500 μm. It was not possible to explore larger variations of r_b and R_D due to technical issues (e.g. the existing restriction on the cuvette size imposed by the x-ray absorption in the liquid) and limitations in the available beamtime. However, a wider range of r_b and, respectively, γ_r has been explored by numerical simulations, as given in Sect. 4.3.
The distance between the bubble seeding position and the cuvette walls in this experimental setup was always kept at 8 R_max or higher. The separation from the bubble to the stick which holds the rigid plate was even larger (see Fig. 1). Thus, the walls and the stick are not expected to have any significant influence on the bubble jetting dynamics, especially considering the close proximity of the surface of the plate. It is worth noting that the bubble seeding position might have slightly fluctuated for different values of γ. This is a consequence of using a relatively low numerical aperture in the beam focusing optics, as well as the partial blocking of the laser beam that occurs when the plate is placed very close to the laser focal spot. Accordingly, the exact position of the bubbles on the glass plate was obtained from the optical images in the direction of the beam and finely adjusted through the numerical simulations in the perpendicular direction.
Interior of a jetting bubble
The gas-liquid interface of a jetting bubble can present quite complex shapes characterized by curved domes, rings and an internal liquid column. This makes the observation of the bubble interior extremely difficult due to the refraction and reflection of light in the visible spectrum. For instance, when using traditional backlight illumination (shadowgraphy), the majority of the projected area of the bubble on the screen looks obscure in the high-speed recording (Koch et al. 2021). On the contrary, x-ray holography produces images that provide a clearer view of the interior of the gas cavities and hence represents a great advantage relative to traditional optical imaging. However, the x-ray holographic images cannot be interpreted in exactly the same way as conventional optical images. In the holograms, the phase shift induced by the liquid-gas structure of the sample on the transmitted x-rays results in distinctive features of the measured intensity pattern which are characteristic of near-field diffraction, and which have to be understood before interpretation of the images.
To give a proper interpretation to the raw holograms, we compared them with simulated holograms derived from data obtained through computational fluid dynamics simulations together with an electromagnetic wave propagation model. Through these x-ray simulations, we gain insights into the origins of certain distinct and counter-intuitive characteristics found in the x-ray holograms. A more detailed explanation of this method of model-based fitting can be found in Appendix A.
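A bare-bones version of such a forward model fits in a few lines: take a pure-phase exit wave for a projected object and propagate it with a Fresnel transfer function to the detector plane. The sketch below uses a spherical bubble as a stand-in for the CFD-derived density map, and the refractive index decrement of water is an order-of-magnitude assumption:

    import numpy as np

    lam, z, px, N = 7.0e-11, 9.67, 6.5e-6, 512   # wavelength [m], distance [m],
                                                 # pixel size [m], grid size

    x = (np.arange(N) - N // 2) * px
    X, Y = np.meshgrid(x, x)
    R_b = 250e-6                                 # bubble radius (illustrative)
    chord = 2 * np.sqrt(np.clip(R_b**2 - X**2 - Y**2, 0.0, None))

    delta = 7e-7        # refractive index decrement of water at ~18 keV (assumed)
    phi = 2 * np.pi / lam * delta * chord        # phase advance where gas replaces
                                                 # water along the beam path
    u0 = np.exp(1j * phi)                        # pure-phase exit wave (absorption
                                                 # neglected, as argued above)

    fx = np.fft.fftfreq(N, d=px)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * lam * z * (FX**2 + FY**2))       # Fresnel propagator
    hologram = np.abs(np.fft.ifft2(np.fft.fft2(u0) * H))**2   # detected intensity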
One example of the obtained x-ray images is presented in Fig. 3. Figure 3a shows a sequence of optical images of the jetting dynamics of a bubble produced at γ = 0.57 ± 0.03, while some of the typical distinctive features observed in the corresponding x-ray holograms are enumerated in Fig. 3b and c. A characteristic that stands out in the x-ray images can be seen at the interface between the interior of the bubble and the water. There, a prominent contrast along the edges manifests as a wide outer dark border accompanied by a bright inner edge (ii). A similar contrast variation is also evident (although less pronounced) at the boundary between the gas phase and the inner jet (iv). These edge interferences ("edge enhancement") are typical for the holographic recording. Another prominent feature found in the holograms is given by the downward-curving bright arc around the jet top (iii). This is a result of the torus-like shape assumed by the collapsing bubble in its upper portion, and it is well reproduced by simulations (see Appendix A).
In the perpendicular x-ray view of the bubble (i.e. Fig. 3b), the visibility of the splash (vi) produced by the jet impact on the surface is hindered by a dark region in proximity to the surface. This is attributed both to the dark region of the bubble boundary and to the strong contrast of the edge of the glass boundary, which might overlap with certain physical features that occur in close proximity of the solid boundary (see Appendix A). As demonstrated in Fig. 3c, the splash is well discernible in the parallel x-ray view.

Fig. 3 Optical and x-ray holographic imaging of a bubble jetting towards a rigid (glass) surface. a Optical image sequence of the bubble dynamics taken at 75 kfps (i.e. 13.3 μs between frames). Here, the stand-off distance is γ = 0.57 ± 0.03. The lower panels present x-ray holographic images captured at the corresponding instant marked with the legend "x-ray" in (a). Panel b shows a side view of the jetting cavity as an x-ray hologram, while in (c) the jetting of a second bubble with similar dynamics is observed from a parallel view (normal to the glass disc). The x-ray light reveals the bubble interior, invisible in the optical shadowgraphs of (a). The arrows indicate some of the features commonly present in the holograms: i x-ray light blocked by the border of the hole of the laser mirror; ii thick interfacial line; iii bright arc; iv double shadow at the jet liquid column; v bubble pedestal stripes; and vi Blake splash
Figure 3b also displays the formation of "bubble pedestal stripes" near the surface (v). These stem from the kink in the outer bubble interface region that is projected through the bubble. This characteristic "bell shape" of the bubble forms during jetting for certain γ, due to a thin liquid boundary layer between bubble and surface (Lechner et al. 2020; Koch et al. 2021).
The x-ray illumination function is non-uniform, which leads to random background fluctuations in most images. Also, the position of the illumination centre is subject to jitter. Both are due to the nature of the self-amplified spontaneous emission (SASE) process, which underlies the XFEL radiation. For further discussion, see Appendix A or Ref. (Hagemann et al. 2021). In addition, as the x-ray beam passes the drilled mirror, the edges of the through-hole cut into the x-ray beam in some images, see feature (i) in Fig. 3b, c. Due to slight angular adjustments of the mirror, we see an elliptical aperture.
Jetting dynamics of cavities with non-axisymmetric boundary conditions at different stand-off distances
The jetting dynamics of bubbles produced at different stand-off distances was studied through a series of individual x-ray holograms captured from multiple laser bubbles produced under the same conditions. The existing limitations on the equipment's precision and on the effective energy deposition of the laser on successive shots caused minor fluctuations in the bubble dynamics that translate into a small level of jitter in the temporal phase of the expansion/collapse cycle of the cavities. The holograms obtained from different single-pulse recordings were then arranged in chronological order, creating the illusion of fluid motion by applying a technique known as "equivalent time sampling" (Philipp and Lauterborn 1998; Lindau and Lauterborn 2003; Koch et al. 2021).
The process of sorting and assigning a specific time to each frame was performed by fitting the experimental bubbles to full 3D numerical simulations.This allows us to generate holography "movies" of the bubble jetting process with a variable equivalent frame rate ranging from one million frames per second (Mfps) to three Mfps, as shown in Fig. 4a.There, the irregular illumination pattern found in the different frames comes from a small level of flickering in the intensity and position of the x-ray beam.
Figure 4a presents a clear view of the oblique jet impacting the solid boundary (at t ≃ 99 μs) and producing an asymmetrical Blake splash (Blake et al. 1998; Tong et al. 1999; Lechner et al. 2020; Bokman et al. 2023), invisible in the synchronous optical images shown in Fig. 4b. In the accompanying images from the simulation, the gas-liquid interface of the cavity is rendered in a semi-transparent turquoise colour, while the liquid is left white. The position of the jet tip and the evolution of the subsequent Blake splash are remarkably well reproduced by the numerical model. The general bubble dynamics and the formation of an oblique jet resemble the numerical results (obtained with the BIM) reported by Wang et al. (2020) for a case where a bubble collapses near a corner defined by two rigid walls. It is worth noting that in Fig. 4b, the red contours are cut through the centre of the initial bubble, specifically in a direction perpendicular to the x-ray image plane. Consequently, due to the oblique nature of the jets, there appears to be a delay in the jet dynamics displayed in panel (b) when compared with the corresponding frames in panel (a) of Fig. 4.
The numerical fitting of the x-ray holograms with the CFD model brings the possibility of performing a realistic and precise assessment of the jet's speed, which is believed to be closely related to the extent of potential damage produced at the point of impact. This technique is particularly useful for interpolating the dynamics of experimental bubbles when the phenomena being studied occur at higher speeds than the sampling rate utilized in the holograms, e.g. to characterize fast jets or the bubble collapse. In the example of Fig. 4, the simulations indicate that the jet reaches a rather moderate speed of 72 m/s right before the impact.
After the jetting, the toroidal bubble collapses asymmetrically, i.e. starting from the side closer to the boundary and then progressing towards the centre of the disc.As demonstrated in panels "I" and "J" of Fig. 4, the cavity experiences significant fragmentation during its implosion towards the surface, subsequently expanding as a bubbly ring with a complex gas-liquid structure.
Further details on the dynamics of the splashing and the later rupture of the toroidal bubble can be examined in the parallel view of the phenomenon presented in Fig. 5. The view from this perspective makes it evident that the jet can be detected even in the early stages of its formation as a shaded region in the holograms. This perspective also highlights the asymmetry of the splash and provides an account of the height of its crown through the thickness and darkness of the splash contour line, which becomes more pronounced where the crown reaches greater heights.
Figure 6a exhibits a similar measurement taken at a slightly larger stand-off distance (i.e. γ = 0.84). In this case, the bubble is not in contact with the surface at the instant of jet impact. As the jet progresses further towards the disc, the asymmetry of the Blake splash becomes even more evident.
For instance, this is seen in the non-central location where the jet touches the surface at t = 98.8 μs in Fig. 6b.
The investigation of jet tilting for a value of γ close to unity was further explored through comprehensive 3D numerical simulations, as depicted in Fig. 7. Interestingly, we noticed that the passage of the laser pulse results in a faintly darker line in many of the optical images. This phenomenon is produced by the appearance of a linear trail of micrometric or sub-micrometric bubbles along the centre of the laser beam cone of light (Rosselló and Ohl 2021). The regions in the images with increased contrast (outlined in red) in Fig. 7a show how these tiny bubbles are dragged by the flow induced by the laser cavity collapse, serving as tracers of the liquid displacement, as corroborated using a dye advection visualization (Jobard et al. 2002; Laramee et al. 2004; Koch et al. 2023) in the numerical simulations in Fig. 7b. In this figure, the grey horizontal layer in the first frame represents the secondary laser-induced bubbles advected with the flow, assuming the shape of the grey warped layer in the last two frames. The corrugations observed in the bubble interface in the simulation frames at maximum extension result from the interpolation of an oblique cut across computational cells and should not be mistaken as artefacts.

Fig. 4 Bubble jetting observed from a perpendicular view, as explained in Fig. 1c, and contrasted with the full 3D numerical simulations, computed for a measured stand-off distance γ = 0.57 ± 0.03. a X-ray holographic images of the jet dynamics and its impact on the solid surface. The images are compared with frames from the numerical simulations with time indicated in μs after the laser pulse. In those, the gas-liquid interface is highlighted by a black contour drawn along a plane crossing the centre of the bubble. The Blake splash can be clearly seen right after the jet impact, i.e. at t ≃ 100 μs (as pointed out by the blue arrows). b Optical images acquired simultaneously to the x-ray images from a direction orthogonal to the x-ray propagation. As the jetting points predominantly away from the camera, the bubble appears nearly symmetrical when viewed from this perspective. The two types of images are paired by the capital letters in the lower right corner of the frames. The red contours show the numerical simulations from the perspective of the optical camera, cut through the origin. Therefore, the moment of impact appears slightly retarded as compared to the corresponding frames in (a). Multimedia view: (Video Fig. 3.avi)
As the distance between the bubble seeding point and the plate is further increased, the effect of the anisotropic boundary conditions on the jetting dynamics becomes increasingly noticeable. For instance, if the stand-off parameter used in Fig. 4 is doubled while keeping r* fixed, we obtain a cavity that remains spherical during the expansion phase and later produces an oblique jet forming an angle θj with respect to the vertical direction, as shown in Fig. 8.
The cases with values of γ ranging from 1.3 to 2 are distinguished by a relatively wider jet (i.e. with respect to the bubble size) that penetrates the bubble without being hindered by the rigid boundary. In the present anisotropic scenario, this leads to the formation of a non-axisymmetric toroidal cavity with variable cross section.
The changes observed in the oblique jet/bubble dynamics over the range of γ values previously discussed are summarized in Fig. 9. The parametric plot illustrates the evolution of the jetting bubble shape with increasing stand-off distance, showing the significant impact of γ on the bubble morphology, for instance producing a progressive flattening of the gas cavity. However, the rather minor changes observed at higher values of γ suggest that its influence becomes gradually weaker as the stand-off distance is further increased. In this regard, the angle formed by the jet tip with the vertical direction becomes slightly larger when the bubble is placed away from the surface, i.e. the oblique jet gradually deviates from the vertical direction. This suggests a nonlinear relationship between the stand-off distance and the influence of the rigid boundary on jet development: when the bubble is in close proximity to the surface, even minor variations in γ exert a substantial impact on the bubble dynamics, in contrast to similar changes when the bubble is far from the boundary. This observation was confirmed through numerical simulations.

Fig. 5 Parallel view of the bubble collapse as the jet hits the glass plate for a case with γ = 0.56 ± 0.03. The scale bars represent 250 μm. a X-ray images of the jet impact observed from a parallel view (see Fig. 1c). The asymmetry of the splash and the cavity collapse is clearly visible from this perspective. For example, the splash is predominantly directed towards the lower right corner of the panels in (a) and (b). Here the time in μs was obtained by matching the experimental frames with the simulations presented in panel (b). The gas-liquid interface in the simulation is rendered in a semi-transparent turquoise colour, while the liquid is left white. c Synchronous optical images taken perpendicular to the glass disc and orthogonal to the x-ray beam direction
Effect of the radial position r_b on the bubble dynamics analysed through numerical simulations
In the preceding section, we explored the effect of γ on the bubble/jetting dynamics. Let us now focus on examining the influence of the seeding radial position, denoted as r_b.
To quantify the impact of this parameter on the jetting, and in the absence of sufficient experimental data, we performed systematic full 3D numerical simulations shifting r_b from the axisymmetric case (i.e. r_b = 0) to the border of the rigid glass plate at r_b = 4 mm. Here the value of the dimensionless stand-off is fixed to γ = 0.99, which corresponds to the experimental case presented in Fig. 7.
The results are summarized in Fig. 10. There, it is possible to observe that the bubble in the axisymmetric case (r* = 8) does not deform considerably; however, deviations from axial symmetry become noticeable from r* ≈ 4 downwards. The experimental case in Fig. 7 corresponds to r* = 2, i.e. between rows (c) and (d). The simulations demonstrate the growing impact of the proximity of the disc boundary on some features of the jet's temporal evolution, particularly its direction. As a general rule, the jet is bent inwards, i.e. towards the centre of the disc. The downward infiltration of liquid into the cavity originates from an involution of the bubble wall at its cusp, at approximately t = 93 μs for the cases with higher r* and slightly earlier as the bubble approaches the border. As the value of r* decreases, the influence of the disc limits becomes more pronounced, leading to the creation of a curved filament. Remarkably, even though the jet tip follows a straight line, as depicted by the red dots in the second column of Fig. 10, the whole jet structure exhibits a curved shape.
Essentially, the formation of an oblique jet originates from the uneven inflow during collapse, as illustrated in panel (a) of Fig. 11. There, we can see that the liquid can flow almost unhindered from the cavity side closer to the disc border, resulting in a flattening on that side of the bubble which becomes increasingly pronounced as the bubble collapse accelerates. At the same time, the pressure in the liquid surrounding the bubble wall is higher on one side of the bubble cusp, producing a localized depression that leads to the intrusion of liquid into the bubble interior, i.e. forming the oblique jet (see Fig. 11b).
The lack of axial symmetry of the jet dynamics also has consequences for the collapse of the toroidal gas cavity. For instance, panels (c), (d) and (e) of Fig. 11 show how the gas phase collapses onto the solid surface unevenly, i.e. starting from the side which is closer to the border of the disc, meaning that certain parts of the bubble collapse later than others. Furthermore, it can occur that different portions of the cavity begin to re-expand before others reach their maximum compression. An asymmetrical collapse also results in non-uniform acoustic emissions, which might lead to acoustic pressure focusing in regions other than the centre spot below the bubble, potentially causing surface damage (Philipp and Lauterborn 1998; Gutiérrez-Hernández et al. 2022; Reuter et al. 2022; Dular and Ohl 2023). In the axisymmetric case, the downwards jet generates a stagnation point right below the position where the bubble was seeded by the laser-induced plasma. As a consequence, an overpressure builds up on the surface that can easily reach a few MPa of peak value (Lechner et al. 2019). This elevated pressure persists throughout the duration of the jet, which typically lasts for approximately 5 μs in the case of bubbles with a maximum radius of around 500 μm. In this regard, the simulations reveal an interesting observation: the temporal evolution of the overpressure produced at the stagnation point after the jet impact is very similar for all the cases; however, in the case of an oblique jet, the location of maximum pressure is not fixed throughout the jetting process, as shown in panels (c), (d) and (e) of Fig. 11. As the parameter r* is reduced, the increased tilting of the jet results in an increased motion of the stagnation point, i.e. the spot of maximum pressure at the boundary. Caused by the inward bending of the jet, the stagnation point starts nearer to the plate's centre, but then shifts towards the bubble's initial distance r_b during the ongoing collapse.

Fig. 9 Oblique jetting of bubbles produced at different stand-off distances for a fixed radial position r* = 2. The parameter plot makes evident the changes occurring in the bubble shape and the jetting direction when γ is varied. Here, the jet evolution is captured instants before the jet pierces the lower wall of the cavity. As the bubble is generated closer to the rigid boundary, the orientation of the jet becomes progressively aligned with the surface's normal direction
The movement of the stagnation point for bubbles originating at different values of r* is depicted in Fig. 12a. It is worth clarifying that even though the cell size in the 3D simulations is not refined enough to accurately estimate the peak pressures on the disc surface, the observed change in its location is not influenced by minor variations in the mesh size. In all the cases, a moving pressure maximum might have consequences for the erosion scenario acting on the solid surface. For instance, the local duration of the peak pressure would be reduced, leading to the expectation of less material damage. However, a spatial shift of the peak pressure position might induce an additional shear force and thus cause additional lateral material loads.
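How such a drifting stagnation point can be extracted from the simulation output may be sketched as follows, assuming the wall pressure is available as an array over output times and radial positions (all names are illustrative, not the actual post-processing code):

```python
import numpy as np

def stagnation_trajectory(p_wall, r_coords, t_coords):
    """Track the spot of maximum pressure on the solid surface over time.

    p_wall  : 2D array of wall pressure, shape (n_times, n_positions)
    r_coords: 1D array of radial positions on the disc (same units as r_b)
    t_coords: 1D array of simulation output times
    Returns the (t, r) trajectory of the stagnation point.
    """
    idx = np.argmax(p_wall, axis=1)  # index of the pressure maximum per frame
    return t_coords, r_coords[idx]

# The drift of the returned r(t) towards the bubble's seeding radius r_b
# corresponds to the moving stagnation point discussed above.
```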
The changes in the tilting of the jet when varying the radial position of bubble seeding have been extracted from the 3D numerical simulations, as detailed in Fig. 12b. In that analysis, the angle between the jet tip and the vertical direction (θj) was measured at the moment just before the jet pierces the inner cavity wall. This measurement was performed by drawing a line connecting the tip and the midpoint of the jet width at the final third of its overall length (marked with a red line in the inset of Fig. 12b). The influence of the plate boundary on the jetting direction exhibits a nonlinear relationship with the parameter r*. Specifically, the numerical results reveal that the tilting effect is relatively minor for the central regions of the disc (i.e. large r*); however, it becomes increasingly significant when r* decreases below about 3. As previously discussed, the asymmetric boundary exerts a significant influence not only on the jet but also on the overall dynamics of the cavity. One specific example of this is the acceleration of the jetting and collapse as the bubble approaches the border of the plate. The speed of the jet, measured just before it touches the bubble wall, demonstrates a similar dependence on r* as the angle at which the jet pierces the bubble wall. Figure 12c shows that tilted jets exhibit a higher impact speed compared to cases where the jet is directed straight at the surface.
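A minimal sketch of this angle measurement, assuming the jet tip and the jet-width midpoint have already been extracted from the interface data (coordinates and names are illustrative):

```python
import numpy as np

def jet_tilt_angle(tip, mid):
    """Angle theta_j between the jet axis and the vertical (y) direction.

    tip: (x, y) coordinate of the jet tip
    mid: (x, y) midpoint of the jet width at the final third of its length
    The two points define the red reference line described in Fig. 12b.
    """
    dx, dy = tip[0] - mid[0], tip[1] - mid[1]
    return np.degrees(np.arctan2(abs(dx), abs(dy)))  # 0 deg = straight down

# example: a tip displaced 0.1 mm sideways over a 0.5 mm vertical run
print(jet_tilt_angle((0.1, 0.0), (0.0, 0.5)))  # ~11.3 deg
```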
The acceleration of the collapse was measured using the prolongation factor, defined as t(V_min)/(2 T_c(R_max,ub)), where t(V_min) is the collapse time of the cavity and T_c(R_max,ub) is the Rayleigh collapse time (Plesset and Chapman 1971). Figure 12d depicts a reduction in this dimensionless time as the bubble approaches the edge of the disc. The change in the prolongation factor can be attributed to variations in flow restriction around the bubble at different values of r*. The normalized maximum volume (V*_max) of the bubbles near the border of the disc, on the other hand, is higher than in the middle of the disc, as detailed in Fig. 12d (i.e. the green axis). In principle, this result might appear contradictory because, according to the Rayleigh collapse time (defined in Sect. 3.1), larger bubbles are expected to take more time to collapse. From the simulations, it is observed that the bubbles in the centre region of the disc are hindered from reaching the full volume of the unbounded case due to the boundary restriction. At the border of the disc the restriction is reduced, so the bubble size approaches that of the unbounded case.
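For reference, the quantities entering the prolongation factor can be evaluated as below; the Rayleigh collapse time uses the classical prefactor 0.915, and the water properties and driving pressure in the example are assumptions for ambient conditions, not values taken from the experiment.

```python
import numpy as np

def rayleigh_collapse_time(r_max, p_inf=1.0e5, p_v=2.3e3, rho=998.0):
    """Rayleigh collapse time T_c of an unbounded bubble of radius r_max [m]."""
    return 0.915 * r_max * np.sqrt(rho / (p_inf - p_v))

def prolongation_factor(t_vmin, r_max_ub):
    """Dimensionless collapse time t(V_min) / (2 * T_c(R_max,ub))."""
    return t_vmin / (2.0 * rayleigh_collapse_time(r_max_ub))

# example: a 500 um bubble collapsing in water at ambient pressure
print(rayleigh_collapse_time(500e-6))  # ~46 us
```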
A deeper understanding of the combined and individual influence of the stand-off distances on the jetting dynamics can be attained by examining parameter maps, such as those shown in Fig. 13. The figure shows maps of the jet speed and the prolongation factor of the collapse time of bubbles, obtained by varying the values of r* and γ in the numerical simulations across a range that encompasses the parameter values utilized in the experiments. From Fig. 13a, it becomes evident that the jet velocity achieves higher values for bubbles located at a radial position near the border of the disc (i.e. r* = 0), as suggested in Fig. 12c for the case with γ = 0.99. At the same time, the speed of the oblique jets also increases with the vertical distance to the plate (i.e. γ), similar to what is observed for the collapse of axisymmetric bubbles (Lechner et al. 2020).
Figure 13b displays the map of the prolongation factor of the collapse time, computed as t(V_min)/(2 T_c(R_max,ub)). In line with the observed trend in the jet speed, the bubble dynamics are slower in regions near the centre of the disc. In contrast, bubbles placed over its periphery or away from the solid surface (i.e. at higher stand-off distances γ) exhibit a prolongation factor close to unity, corresponding to the unbounded case of a Rayleigh bubble.
The maps for both the jet speed and the prolongation factor confirm that, in general, the effect of the disc border proximity becomes increasingly relevant for r* ≲ 3. Beyond r* ≳ 5, the scenario approaches the case of a bubble in a semi-unbounded liquid. The values of the jet speed for r* ≈ 8 are in good agreement with those reported by Lechner et al. (2020) for an axisymmetric case, i.e. a bubble collapsing near an infinite solid planar boundary.
Internal structure of a jetting bubble at γ ≳ 2
When the stand-off distance is γ ≳ 2, the influence of the boundary on the bubble dynamics gradually diminishes, leading to a "weak" jetting regime characterized by the growth of a thin and relatively long liquid filament inside the cavity during its rebound phase (Lauterborn and Bolle 1975; Rosselló et al. 2023). This experimental scenario is depicted in Figs. 14 and 15 from two orthogonal perspectives. Figure 14a presents a sequence of x-ray holographic images showing the collapse and rebound of a bubble at γ ≃ 2.5, taken from a perspective parallel to the jetting direction. These x-ray images are complemented by optical images taken from the side view (see Fig. 14b), meant to provide a comprehensive interpretation of the holograms.
Within this range of stand-off distances, the bubbles adopt a concave form during the collapse process (e.g. the meniscus formed at t = 74 μs in Fig. 17). This distinctive characteristic is evident in the first row of panel (a) of Fig. 14, as observed from the ring-shaped appearance acquired by the bubble in the final stages of collapse. Following the rebound, the optical images clearly show the development of a liquid filament that penetrates the bubble and extends well beyond the size of the gas cavity. This elongated fluid structure is visible as a darker line segment in the holograms.
The second collapse of the cavity in the weak jet scenario also displays an interesting structure formed when the gas phase shrinks around the liquid filament, as shown in Fig. 15. The intricate flow dynamics leading to this particular shape are discussed in detail in Rosselló et al. (2023).
Formation of cavitation bubbles during the jetting
For some values of γ, the violence of the bubble collapse is such that the cavity undergoes severe fragmentation promoted by the Rayleigh-Taylor instability and the strong acoustic emissions launched at the cavity collapse. An intriguing observation, which can only be discerned from the x-ray images, is that the flow generated during the jetting can drag some gaseous fragments inside the bubble, leading to the formation of internal structures, as confirmed by the examples shown in Fig. 16. The fragmentation of the bubble, referred to as the "counterjet" by Lindau and Lauterborn (2003), typically occurs within a specific range of γ values, ranging from 1.2 to 3, consistent with the range observed in this study. When γ falls below 1.2, the bubbles collapse directly onto the surface during jetting, resulting in a bubble cloud ring that is visible in the holograms. Nevertheless, a bubble ejection similar to the counterjet has been observed for extremely low stand-off distances (Koch et al. 2021; Reuter and Ohl 2021) (e.g. γ ≲ 0.1). In the latter case, the bubble fragments are produced by a pinch-off of the main cavity caused by an annular flow on the cavity cusp. A different situation is observed for γ ≳ 3, where the jetting is considerably weaker, leading to more spherical collapses. At larger stand-off distances, bubble rupture may still occur, but due to the improved shape stability of the cavities, the formation of satellite bubbles is less likely. Additionally, as the nearly spherical bubble collapses generate a predominantly radial flow around the cavity, many of the detached fragments are quickly reabsorbed by the main bubble during re-expansion.
The precise mechanism for the bubble fragmentation in the previous experimental examples was further investigated through the numerical simulations presented in Fig. 17. These simulations were performed with a minimum grid spacing of 1 μm in a region that completely covers the bubble. The numerical results confirm that the initial appearance of small fragments occurs immediately after the collapse, which is accompanied by powerful acoustic emissions reaching amplitudes of several hundred bar. Such entrained and transported gas pockets might modify the stagnation pressure when impacting onto the solid (not shown in Fig. 16), and they might potentially collapse on their own, contributing to cavitation damage. Further, it can be seen in the simulations, and conjectured from the x-ray images, that the gas pockets experience expansion during their transport in the jet flow. Impact and expansion both might lead to bursting and thus contribute to the ejection of very small droplets into the jetting bubbles. Such micro- or nanosprays are supposed to be responsible for certain aspects of sonoluminescence and sonochemistry, namely the excitation and pyrolysis of non-volatile liquid components inside the bubble (Thiemann et al. 2017; Pflieger et al. 2019).
Conclusion
In this study, we employed x-ray holographic imaging to investigate the jetting and internal structure of laser-induced bubbles collapsing near a solid surface. The bubbles were seeded off the symmetry axis of a circular glass plate, which resulted in non-axisymmetric collapse and jetting behaviour. The comparison and combination of experimental x-ray images with full 3D numerical simulations provide valuable insights into the phenomenon.
The advantages of x-ray holography over traditional optical imaging are evident. The absence of optical distortions caused by strong changes of the refractive index and the clear visualization of the gas cavity's interior provided by x-ray imaging allow, for instance, for a realistic assessment of the jet's shape and speed, the tilting of its tip, and the Blake splash produced upon its impact on the solid surface (as shown in Figs. 4 and 5). All these critical features are accurately reproduced by the numerical model, demonstrating its reliability in capturing the complex dynamics of the system. This enabled us to extend beyond the experimental results and conduct a parametric analysis to investigate, for a fixed normalized stand-off distance γ = 0.99, the influence of the bubble seeding position (r*) on the curved shapes of the jets (see Fig. 10). The closer the bubble is seeded to the plate edge, the more the jet bends inward, i.e. towards the plate centre. The obliqueness of the jet is explained by the uneven inflow of the liquid surrounding the bubble during collapse. It is further observed from the simulations that the tilting of the jet leads to an impact on the solid with a moving stagnation point, which is the location of the momentary maximum pressure on the plate. This point moves from the impact position (further away from the plate edge) towards the projection point of the bubble centre (closer to the edge). Bending and stagnation point drift are stronger for bubbles seeded close to the plate edge (see Fig. 11).

Fig. 16 Surface instability of cavitation bubbles and its effect on the jetting. In both panels (a) and (b), the top row presents high-speed videos of a jetting cavity which evidence the bubble fragmentation at its collapse, here indicated by the blue arrows. The lower row compares optical (left) and x-ray (right) images taken simultaneously. The x-ray images reveal internal gaseous structures inside the jetting bubbles. Here, the stand-offs were γ = 1.87 ± 0.05 for the cases in (a) and γ = 1.36 ± 0.05 in (b). In this last panel, the region framed in red highlights the displacement of tiny secondary laser-induced bubbles (initially arranged as a horizontal line).
Further interesting findings concern gas fragments entrained into the jet flow and thus into the rebounding bubble. This result could only be obtained with a clear view through the bubble, as provided by the x-rays (e.g. see Fig. 16). It could be relevant for cavitation erosion, but also for liquid injection mechanisms contributing to the sonochemistry of non-volatile components. The details and connections with the counterjet phenomenon still remain to be investigated.
The results obtained in this study are representative of similar scenarios involving bubbles near the boundaries of irregular geometries with sharp edges. The implications of these findings extend beyond laser-induced cavities to bubbles generated by other methods and experimental scenarios, such as acoustic nucleation in cleaning baths or cavitation from pressure gradients induced by a strong flow in turbomachinery. Non-axisymmetric jetting scenarios are expected to be generic and omnipresent in "real-world" situations, since a disturbance of symmetry is easily induced for many reasons. Thus, several features described in the present report should be representative of a considerably larger class of real-world systems. Among these phenomena are the non-straight and bent shapes of the jet, the entrainment of gas into the jet, the drifting stagnation points during jet impact, and non-uniform ring collapses. The implications of these effects are highly relevant for improving our understanding of cavitation erosion mechanisms; thus, they should be taken into consideration in future research.

X-ray holography at an XFEL offers an unmatched capability to analyse rapid flow phenomena, such as those encountered in cavitation problems. This advanced technique proves invaluable in scenarios where optical access is hindered by distortions or opacity of the working liquid. The applicability of this methodology is promising for targeted applications in various industrial processes, enabling a deeper comprehension of bubble cleaning mechanisms, as well as the mitigation of cavitation erosion, paving the way for potential improvements in these areas. The effects of the boundary conditions on the jetting have not been sufficiently considered in the literature and could be relevant for a better understanding of the processes that lead to cavitation erosion. Future research directly extending the present work could, for instance, explore a parameter space considering the curvature radius of the disc R_D and its correlation with the impact angle of the jet by mapping different values of γ and r*. In a broader sense, the employed x-ray method can be applied to explore further situations of bubble-object or bubble-bubble interaction, but also problems of bubble nucleation and fast multiphase scenarios.
Note that for the purpose of the present paper, the interpretation of the x-ray micrographs in relation to jet and bubble geometry proved sufficient through a thorough comparison of near-field diffraction effects with simulations of wave-field propagation. Future work could be directed at a full inversion of the images by phase retrieval. To this end, smaller propagation distances in the parallel beam than in the current experiment would probably be useful. Furthermore, for projections at a single angle, phase retrieval only gives projected values, which then need to be interpreted based on geometric constraints. Here, the fully 3D hydrodynamic simulations shown in this work could be extremely useful.
Aspects not yet fully exploited in the present work concern the reconstruction of matter density and shapes in three dimensions from the holographic data, and higher resolution than obtainable at optical wavelengths. Both require additional steps of the methodology, but will be explored in the future. Moreover, x-ray imaging could possibly also be extended to full 3D by recording images from a series of rotation angles, or, more simply, by directing split beams onto the same scene for stereo-imaging.
Appendix A: X-ray image processing and forward-propagation
A typical workflow for the image processing of x-ray phase-contrast holography would include a flat-field correction followed by numerical phase retrieval. Usually, this second step implies the reconstruction of the amplitude and phase of the sample's image, which results in an image with a phase contrast proportional to the electron density of the sample. This approach is conveniently used in synchrotron experiments (see, for example, Paganin et al. 2002; Bartels et al. 2015) and has also been demonstrated with single XFEL pulses (Hagemann et al. 2021; Vassholz et al. 2021). Reliable phase retrieval, however, requires a well-working flat-field correction and sufficient pixel resolution in the acquired x-ray images, conditions which are not easy to meet for some setup configurations. An alternative method involves employing a parallel beam geometry to examine the raw detector images. Subsequently, a forward modelling approach can be utilized for an exemplary x-ray hologram to describe typical features of the holographic imaging method. In this regard, future work could benefit from the use of smaller propagation distances in the parallel beam than in the current experiment, which would enable a full inversion of the images through phase retrieval.
In order to interpret the holograms, we compare them with simulated holograms based on data from our hydrodynamic simulations. In this scheme of educated fitting, we understand the origin of some characteristic and unintuitive features that occur in the x-ray holograms. The forward simulation of the x-ray holograms is based on a framework for numerical wave propagation that was previously used to simulate x-ray optics (Soltau et al. 2021). This approach is summarized in Fig. 18 and can be described in the following steps:

1) From the hydrodynamic simulations (Fig. 18d), we use the mass density of water ρ(r⃗) and interpolate the data set on a Cartesian grid with a resolution of 2 μm and a volume of 500 x 500 x 500 pixels. The Tait equation of state (Vassholz et al. 2021) relates pressure and density (see Sect. 3.2). A map of the complex index of refraction n(r⃗) for E = 17.8 keV is calculated by scaling n_H2O (Schoonjans et al. 2011) according to ρ(r⃗). Additionally, we insert a volume of quartz glass (n_SiO2, Schoonjans et al. 2011) to form the solid boundary that the bubble is jetting towards.

2) We propagate a plane wave through the sample volume using a finite differences (FD) propagator (Soltau et al. 2021; Fuhse and Salditt 2006). n(r⃗) is passed to the propagator slice by slice. In this way, we calculate the complex object wave function Ψ_Obj directly behind the sample (see Fig. 18b and c).
3) The wave function at the detector Ψ_det is obtained by a single-step free-space propagation with a Fresnel contrast-transfer kernel (Soltau et al. 2021; Zhang et al. 2020; Voelz and Roggemann 2009). Here we use the geometry of the experiment, hence the effective Fresnel number F_pix = 0.0071 (additionally, we used a correction factor of 1.2 to fine-tune the holographic regime). In this step, the interference effects develop which are characteristic of the holographic image. Before propagation, the image intensity is modulated by an effective illumination function obtained from a measured hologram similar to the numerically calculated case (Fig. 18f, for comparison with this exemplary measurement). This illumination function describes both the ellipsoidal aperture that results from clipping at the mirror through-holes and the intensity distribution of the SASE pulse.

4) At the detector, we measure the intensity |Ψ_det|². To account for the point spread function of the imaging system, we perform a convolution with a Gaussian kernel with a FWHM of 13 μm, corresponding to the width of 2 physical pixels.
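Step 3) is essentially a transfer-function (angular-spectrum) Fresnel propagation, and step 4) a Gaussian blur. The sketch below shows one common way to implement such a single-step propagation with NumPy/SciPy; it is a generic illustration, not the actual code of Soltau et al. (2021). The pixel size and the conversion of the per-pixel Fresnel number into an effective distance via F_pix = Δx²/(λ z_eff) are our assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fresnel_tf_propagate(psi_obj, wavelength, pixel, z_eff):
    """Single-step free-space propagation of the object wave with a
    paraxial Fresnel transfer-function kernel."""
    ny, nx = psi_obj.shape
    fx = np.fft.fftfreq(nx, d=pixel)  # spatial frequencies [1/m]
    fy = np.fft.fftfreq(ny, d=pixel)
    FX, FY = np.meshgrid(fx, fy)
    kernel = np.exp(-1j * np.pi * wavelength * z_eff * (FX**2 + FY**2))
    psi_det = np.fft.ifft2(np.fft.fft2(psi_obj) * kernel)
    return np.abs(psi_det) ** 2  # detected intensity |Psi_det|^2

def apply_detector_psf(intensity, fwhm_px=2.0):
    """Gaussian point-spread function with the given FWHM in pixels."""
    return gaussian_filter(intensity, sigma=fwhm_px / 2.355)

# hypothetical example: weak phase object at 17.8 keV (lambda ~ 0.7 A)
pixel = 6.5e-6                         # assumed detector pixel size [m]
z_eff = pixel**2 / (0.7e-10 * 0.0071)  # distance from F_pix = dx^2/(lambda*z)
phase = np.zeros((256, 256))
phase[96:160, 96:160] = 0.05           # toy phase-shifting object
hologram = apply_detector_psf(
    fresnel_tf_propagate(np.exp(1j * phase), 0.7e-10, pixel, z_eff))
```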
We can now compare the simulated hologram (Fig. 18e) with a measured hologram (Fig. 18f) and a visualization of the corresponding hydrodynamic simulation (Fig. 18d). In this way, we can assess the level of realism of the hydrodynamic simulations in describing the structure of the jetting bubbles. As shown in this example, the holographic simulations were key to explaining characteristic features found in the images. Some of these features are listed below:

i) A broad dark outer border and a bright inner edge can be seen at the boundary between the bubble interior and the water.
This contrast is also visible, but less pronounced, at the boundary between bubble and jet. Note that the outline of the projected bubble corresponds to the outer edge of the dark border. This dark-to-bright contrast is typical for a sample with lower density than its surroundings.

ii) The torus-like geometry of the upper part of the collapsing bubble leads to a downwards-curved bright arc.

iii) Close to the surface, the hologram shows a dark region that partially overlaps with some of the bubble features, like the Blake splash observed after the jet impact. This is due to both the bubble boundary (i) and the strong contrast of the edge of the glass boundary.

iv) In this example, the bubble forms a bell-like shape defined by an edge on the outer bubble boundary close to the surface, also referred to as the "bubble pedestal". This edge is also projected through the central region of the jetting bubble, visible as a horizontal fringe, and might lead to a slight deformation of the tip of the jet (see Fig. 18e).
Note that in the experiment the x-rays are partially coherent, and without a monochromator we assume a SASE bandwidth of ≈ 10⁻³, while the forward-propagation workflow is monochromatic. We hence attribute some differences between the measured and simulated holograms to the difference in longitudinal coherence. For example, some regions of constructive interference are brighter in the simulation than in the measurement. Due to the SASE process, each XFEL pulse is subject to random fluctuations of the spatial intensity pattern, as well as of the pulse energy and spectral structure. This leads to a different and usually non-uniform illumination of the single-shot raw detector images. One can employ a flat-field correction that is based on a principal component analysis of a set of empty images (Hagemann et al. 2021). In this scheme, for example, we determined the illumination function of the exemplary raw hologram shown in Fig. 18f and used it to modulate the forward-propagated hologram (Fig. 18e).
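The PCA-based flat-field scheme can be sketched as follows: the dominant illumination modes are obtained from an SVD of the empty-beam stack, the hologram is projected onto them, and the resulting synthetic flat field divides out the non-uniform illumination. This is a simplified illustration of the approach of Hagemann et al. (2021), not their implementation; in practice the projection should down-weight regions containing the sample, and all names here are ours.

```python
import numpy as np

def pca_flat_field(raw, empties, n_components=5):
    """Flat-field correction from a principal component analysis of
    empty-beam images.

    raw     : single-shot hologram with the sample, shape (ny, nx)
    empties : stack of empty-beam images, shape (n_img, ny, nx)
    """
    X = empties.reshape(len(empties), -1).astype(float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:n_components]           # dominant illumination modes
    coeffs = basis @ (raw.ravel() - mean)  # project hologram onto modes
    flat = mean + coeffs @ basis           # synthetic flat field
    return (raw.ravel() / flat).reshape(raw.shape)
```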
Appendix B: X-ray images of bubble-bubble interactions
As an addition to the holographic images of bubbles jetting towards a solid boundary, Fig. 19 presents some examples of the ring structure produced when a bubble collapses in the vicinity of a second bubble. Here, the two cavities are induced (spuriously) by a single laser pulse along the axis of the x-ray beam. In the measurements included in Fig. 19, the glass plate is either not present or sufficiently far away from the bubbles not to play any role in their dynamics.
Interestingly, the direction of the jet is determined by the relative size of the cavities, i.e. the smaller bubble always collapses first, producing a jet that pierces through the second bubble. In the case where the bubbles have a similar size, the jetting occurs almost simultaneously and the jets collide around the middle point between them, as shown in the third row of Fig. 19. On the other hand, in the cases where the dissimilarity in bubble size is marked, the smaller bubble produces a thin jet that goes through the bigger bubble, inducing an annular vortex ring which forms a toroidal bubble, e.g. see the last row of Fig. 19. Apart from the last case, the x-ray images reveal a disturbed axial symmetry, not discernible from the video images. The undulated ring features and spikes apparently indicate some kind of splashing.
Appendix C: Mesh of the numerical simulations
The mesh used for the numerical simulations is depicted in Fig. 20. It is initiated as a rectangular block with Cartesian cells of the same edge length in each direction. Afterwards, the region where the bubble is going to be generated and the region where the disc is going to be cut out are refined. Around the bubble generation site, a sevenfold concentric refinement is used. Around the region of the cylindrical disc, a threefold refinement is applied. Finally, the disc is cut out of the mesh. The top of the disc coincides with y = 0 and the number of cells in the y-direction is even, so that a flat top is guaranteed. The outer circle of refinement around the bubble generation site has a radius of 3.5 mm, the innermost circle a radius of 30 μm. Following the OpenFOAM meshing concepts, there are no hanging nodes at the refinement boundaries; instead, the cells are fit together via one layer of tetrahedral cells.

Fig. 19 Internal structure of two bubbles of different size jetting towards each other. The scale bar represents 500 μm and the numbers indicate time in μs. The thin jet formed when the smaller bubble jets into the bigger one is visible in the last row of images. Here, the x-ray holographic images at the end of the sequence correspond to the last optical frame, marked in red
Fig. 1 Experimental setup: Cavitation bubbles are seeded by an infrared pump laser close to a glass disc held inside an open cuvette filled with water. The jetting bubbles are probed by XFEL pulses while an optical camera observes from the side. a Schematic of the x-ray beam path in quasi-parallel beam illumination (not to scale). b The cuvette
Fig. 2 Distances and dimensionless parameters used to characterize the bubble dynamics
Fig. 6 a X-ray holographic images in parallel view of the jet impacting the rigid boundary at γ = 0.84 ± 0.03. Here the bubble is located around r_b = 2.8 mm from the centre of the plate (i.e. r* = 2.4). The scale bars represent 250 μm, the numbers indicate time in μs. b Optical images in side view corresponding to the frames in (a), taken perpendicular to the glass disc and orthogonal to the x-ray beam direction
Fig. 7 Perpendicular view of the jetting of a bubble with a stand-off distance of γ = 0.99 ± 0.03. The optical images presented in (a) are compared with full 3D simulations in (b). The inset framed in blue presents the x-ray image corresponding to the optical frame above to show the excellent agreement between experimental and numerical results. In the insets framed in red, the contrast of a section of the optical images was increased to highlight the generation of micrometric bubbles
Fig. 10 Three-dimensional numerical simulations of the bubble jetting at different positions on the disc. The parameters for the simulation were taken from the experimental case presented in Fig. 7 (i.e. with γ = 0.99). The initial radial position of the bubble was set at a r* = 7 (r_b = 0.5 mm), b r* = 4 (r_b = 2.0 mm), c r* = 3.2 (r_b = 2.4 mm), d r* = 2.4 (r_b = 2.8 mm), e r* = 1.6 (r_b = 3.2 mm), f r* = 0.8 (r_b = 3.6 mm)
Fig. 11 Numerical simulations of the velocity field (a) and the pressure distribution (b) around the bubble during the formation of the oblique jet for a case with r* = 0.8 (r_b = 3.6 mm) and γ = 0.99. Panels c, d and e present the temporal evolution of the pressure produced at the solid boundary by jetting bubbles at different positions on the disc.
Fig. 12 Bubble jetting parameters obtained from simulations with different r* and a fixed γ = 0.99, as shown in Fig. 10. a Radial location of the maximum pressure spot on the disc's surface as a function of time, normalized by the instant when the jet pierces the inner cavity wall
Fig. 13 Simulated parameter maps are presented for various r* and γ values. These maps depict a more comprehensive representation of a the jet speed at bubble piercing, measured in m/s, and b the prolongation factor of the collapse time of the cavities. The contour plots were generated from the simulations denoted by the circular markers
Fig. 17 Simulation in axial symmetry for γ = 1.77 with R_max = 375 μm, corresponding to the experimental case presented in Fig. 16. Snapshots of a cut through the bubble (outlined by a white line) at various stages during collapse and rebound. The time in μs is indicated by the white numbers in the upper-right corner of the frames. The colour represents the pressure in units of bar for the range shown in black numbers at the bottom of each frame. The solid boundary is located at the lower border of each frame (in white). The black scale bar represents 250 μm. First row: The bubble seed is pro-
Fig. 18 Overview of x-ray forward propagation simulations. a Schematic of the forward propagation steps, using a FD propagator through the sample volume and a Fresnel-TF propagator for free-space propagation to the detector. Panels b and c respectively depict the phase and intensity of the object wave function Ψ_Obj directly behind the sample
"year": 2024,
"sha1": "2150e13f78582fbe5af2b1acc36c84b98ab63184",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00348-023-03759-9.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "6b6399e648d7d8c5e68202d4a5a23f34ad59ccab",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
CO2 hydrogenation on Cu-catalysts generated from ZnII single-sites: Enhanced CH3OH selectivity compared to Cu/ZnO/Al2O3 (Journal of Catalysis)
The hydrogenation of CO2 to CH3OH is mostly performed by a catalyst consisting mainly of copper and zinc (Cu/ZnO/Al2O3). Here, Cu-Zn based catalysts are generated using surface organometallic chemistry (SOMC), starting from a material consisting of isolated ZnII surface sites dispersed on SiO2 (ZnII@SiO2). Grafting of [Cu(OtBu)]4 on the surface silanols available on ZnII@SiO2, followed by reduction at 500 °C under H2, generates CuZnx alloy nanoparticles with remaining ZnII sites according to X-ray absorption spectroscopy (XAS). This Cu-Zn/SiO2 material displays high catalytic activity and methanol selectivity, in particular at higher conversion, compared to benchmark Cu/ZnO/Al2O3 and most other catalysts. In situ XAS shows that the CuZnx alloy is partially converted into Cu(0) and Zn(II) under reaction conditions, while ex situ solid-state nuclear magnetic resonance and infrared spectroscopic studies only indicate the presence of methoxy species; no formate intermediates are detected, in contrast to most Cu-based catalysts. The absence of formate species is consistent with the higher methanol selectivity, as recently found for the related Cu-Ga/SiO2. © 2020 The Authors. Published by Elsevier Inc. This is an open access article under the CC BY license.
Introduction
The conversion of carbon dioxide (CO2) to value-added products would allow the mitigation of CO2 emissions that are recognized as a major contributor to climate change [1,2]. One strategy to mitigate the deleterious effect of CO2 emissions would be to convert it by hydrogenation to methanol (CH3OH), an important bulk chemical that can also be used for the generation of energy, thereby forming a closed carbon-fuel cycle, referred to as the "methanol economy" [3-6]. This entails the sustainable production of H2, the efficient capture of CO2, as well as the use of highly active and selective hydrogenation catalysts. Currently, copper-based catalysts are among the most common hydrogenation catalysts used for the production of CH3OH from CO, a mixture of CO/CO2, as well as CO2 [7]. The most studied and industrially used catalyst is Cu/ZnO/Al2O3, where the role of the different components is still under debate. The role of ZnO has been particularly discussed and has mainly been assigned to the formation of highly active Cu-ZnO interfacial sites or a CuZn surface alloy [8-15]. However, these catalysts still suffer from low activity, selectivity and stability in CO2-rich streams [16]. Alternatively, Cu/ZrO2 has also been reported to be an efficient catalyst for the formation of CH3OH [17-20]. We recently showed that the role of ZrO2 is to act as a Lewis acidic surface site at the periphery of Cu nanoparticles to stabilize reaction intermediates (formate and methoxy) [21]. Furthermore, by using a surface organometallic chemistry approach, we could show that Cu nanoparticles supported on silica decorated with isolated ZrIV surface sites (Cu-Zr/SiO2) [22] display the same performance as Cu/ZrO2 by also providing Lewis acidic ZrIV sites at the interface with Cu, thereby increasing CH3OH selectivity. The same effect is observed with Lewis acidic isolated TiIV surface sites on SiO2 as a support [23,24]. However, all these catalysts suffer from fast erosion of selectivity with increasing conversion due to competitive adsorption of methanol/water on the Lewis acid sites needed for CO2 activation and conversion to methanol.
More recently, we have shown that this approach can be used to improve the CH3OH selectivity when starting from a silica support consisting of well-defined GaIII sites. In this case, grafting of the Cu precursor followed by a hydrogen treatment yields CuGax alloy nanoparticles (Cu-Ga/SiO2) along with remaining isolated GaIII sites [25]. This catalyst shows high activity and excellent CH3OH selectivity, especially at higher conversion, in sharp contrast to Cu-M/SiO2 with M = Ti or Zr. In situ X-ray absorption spectroscopy (XAS) showed that, under reaction conditions, such catalysts evolve to generate Cu(0) and fully oxidized gallium sites. Compared to other catalysts prepared by surface organometallic chemistry (SOMC), no formate but only methoxy surface species are observed in the case of Cu-Ga/SiO2, which correlates with and can explain the increase in selectivity at higher conversion [26].
We thus decided to investigate the formation of the corresponding Cu/Zn systems starting from silica-supported isolated ZnII surface sites [27] using an SOMC approach, to explore their catalytic performance and to compare it with Cu/ZnO/Al2O3 and other SOMC CO2 hydrogenation catalysts.
Synthesis of Cu-Zn/SiO2
A solution of [Cu(OtBu)]4 (110 mg, 0.20 mmol) in 20 mL of toluene was added to 1 g of ZnII@SiO2 wetted with toluene. The suspension was stirred for 4 h, washed three times with toluene (5 mL) and dried at 10⁻⁵ mbar for 1 h. The solid was then reduced under H2 at 500 °C for 5 h (100 °C h⁻¹), cooled down to room temperature under H2, evacuated under high vacuum (10⁻⁵ mbar) and stored in an argon-filled glovebox.
Material characterization
Elemental analyses of all materials were performed by the Mikroanalytisches Labor Pascher, Remagen, Germany. Powder X-ray diffraction (pXRD) patterns were recorded on a PANalytical X'Pert PRO-MPD diffractometer at a voltage of 40 kV and a current of 40 mA applying Cu-Kα radiation (λ = 1.54060 Å). Catalyst morphology was obtained by transmission electron microscopy (TEM) on a Hitachi HT7700 microscope within the facilities of ScopeM at ETH Zurich. For the determination of the particle size distribution, >100 individual particles were considered, and the mean particle size and standard deviation are given according to a lognormal distribution function. Fourier-transform infrared (FTIR) spectroscopy experiments were performed on self-supporting wafers using a Bruker Alpha FT-IR spectrometer in transmission mode (24 scans, 4 cm⁻¹ resolution) under exclusion of air. The specific surface area of the catalysts was measured from a N2 physisorption isotherm recorded at 77 K on a BEL JAPAN BELSORP-mini II apparatus. The samples were degassed at 300 °C under vacuum (10⁻³ mbar) for 3 h prior to measurement. The data were analyzed by the BET method with a p/p0 range between 0.1 and 0.3. H2 chemisorption isotherms were obtained using a BELSORP-max apparatus on the reduced samples at 40 °C and fitted according to a Langmuir isotherm (Eq. (1)), where P_H2,eq is the equilibrium hydrogen pressure, Q_H2 the hydrogen uptake (μmol g_cat⁻¹), Q_H2,max the saturation uptake of H2 and K_H2 the thermodynamic constant for the dissociative hydrogen chemisorption.
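The body of Eq. (1) is not reproduced here. For dissociative adsorption, the standard Langmuir form consistent with the variables listed is Q_H2 = Q_H2,max·sqrt(K_H2·P_H2,eq)/(1 + sqrt(K_H2·P_H2,eq)); this functional form is our assumption, and the data in the sketch below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir_dissociative(p_h2, q_max, k_h2):
    """Dissociative H2 chemisorption: Q = Q_max*sqrt(K*P)/(1 + sqrt(K*P))."""
    x = np.sqrt(k_h2 * p_h2)
    return q_max * x / (1.0 + x)

# hypothetical uptake data: pressure [mbar] vs. uptake [umol/g_cat]
p = np.array([5.0, 10, 25, 50, 100, 200, 400])
q = np.array([18.0, 25, 36, 44, 52, 58, 61])
(q_max, k_h2), _ = curve_fit(langmuir_dissociative, p, q, p0=(70.0, 0.01))
print(f"Q_H2,max = {q_max:.1f} umol/g, K_H2 = {k_h2:.3g} 1/mbar")
```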
Metal surface area was determined by N2O titration. In a typical experiment, 30-50 mg of sample were weighed into a U-shaped quartz tube and connected to the instrument (BEL Japan, Inc., BELCAT-B). Prior to analysis, the samples were pretreated under a flow of 50% H2/He at 300 °C for 2 h, after which 25-30 successive pulses of the titration gas mixture (1% N2O in He) were introduced by a calibrated injection valve (2.77 mL N2O (STP) per pulse). The amount of N2O consumed was determined by monitoring the amounts of N2O and N2 in the exhaust with a thermal conductivity detector. The quantity of surface metal sites is then determined considering the titration equation 2 Cu_s + N2O → Cu2O_s + N2. Pyridine adsorption experiments were performed on a self-supporting pellet of the Cu-based catalysts and monitored by infrared spectroscopy (Nicolet NEXUS 6700) in transmission mode with a 4 cm⁻¹ spectral resolution. After exposure to pyridine in the gas phase, the pellet was subsequently placed under high vacuum (10⁻⁵ mbar) at room temperature (rt), 100 °C, 200 °C, 300 °C, 400 °C and 500 °C (300 °C/min) for 15 min prior to measurement of the IR spectrum. Similar to pyridine adsorption, CO adsorption was performed on a self-supporting pellet of the Cu-based catalyst by exposure to ca. 90 mbar of CO at room temperature, followed by recording the infrared spectrum in transmission mode.
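With the 2:1 site-to-N2O stoichiometry, converting the titrated amount of N2O into a surface-site density is a one-line calculation. The numbers in the sketch below are hypothetical, chosen so that they reproduce the roughly 52 μmol g_cat⁻¹ reported later for Cu-Zn/SiO2.

```python
def sites_from_n2o(n2o_umol, sample_mg):
    """Surface metal sites [umol/g_cat], assuming 2 M_s + N2O -> M2O_s + N2."""
    return 2.0 * n2o_umol / (sample_mg / 1000.0)

# e.g. ~1.04 umol of N2O consumed on a 40 mg sample
print(sites_from_n2o(1.04, 40.0))  # -> 52.0 umol/g_cat
```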
X-ray absorption spectroscopy (XAS)
X-ray absorption spectra at the Cu and Zn K-edges were measured at the SuperXAS beamline of the Swiss Light Source (SLS). The SLS was operating in top-up mode at a 2.4 GeV electron energy and a current of 400 mA. The incident photon beam provided by a 2.9 T superbend magnet source was selected by a Si(111) quick-EXAFS monochromator [32]; the rejection of higher harmonics and focusing were achieved by a silicon collimating mirror at 2.5 mrad. During the measurements the monochromator was rotating at a 10 Hz frequency, and X-ray absorption spectra were collected in transmission mode using ionization chambers specially developed for quick data collection at 1 MHz frequency [32]. The resulting spectra were averaged over 5 min. Calibration of the monochromator energy position was performed by setting the inflection point of a Cu or Zn foil spectrum, recorded simultaneously with the sample, to 8979 or 9662 eV for the Cu or Zn K-edge, respectively.
In a typical in situ experiment, about 10-20 mg of the powder sample was packed into a 3 mm thick quartz capillary (0.1 mm wall thickness), which was connected to a pressurizable gas flow system. The catalysts were reduced under a H2/N2 mixture (15%, 1 bar) at 300 °C for 60 min, and then cooled down to the reaction temperature (230 °C). The reduction gas was flushed with N2 for 15 min and then changed to the reaction gas mixture (CO2:H2:N2 = 1:3:1, 5 mL min⁻¹). Under reaction gas, the set-up was pressurized to 5 bar using a back-pressure regulator and spectra were recorded every 15 min for one hour or until no changes in the spectra occurred. The spectra were background-corrected and normalized using the Demeter software package. Ex situ samples were pressed into pellets with a thickness optimized for transmission detection and placed in aluminized plastic bags (polyaniline (14 μm), polyethylene (15 μm), Al (12 μm), polyethylene (75 μm)) from Gruber-Folien GmbH & Co. KG using an impulse sealer inside an argon-filled glovebox to avoid air contamination. References (ZnO, Cu, Zn and α-brass) were mixed with cellulose (in the case of CuZn and ZnO), pressed into wafers and sealed in Kapton tape.
Solid state nuclear magnetic resonance spectroscopy
Solid-state NMR experiments on 1H and 13C were recorded on a Bruker 400 MHz AVANCE III HD spectrometer with a 4 mm MAS triple-resonance probe operating in double-resonance mode at a magic angle spinning frequency of 10 kHz. The chemical shift scale was calibrated using adamantane as an external secondary reference. Ramped cross polarization (1H-13C) was used for experiments with a 1H excitation frequency of 100 kHz. The contact time was 2 ms for 1D experiments and for 1H-13C HETCOR experiments. Additionally, for the 1H-13C HETCOR experiment, DUMBO homonuclear (1H-1H) decoupling was used during t1. The static magnetic field was externally referenced by setting the 13C higher-frequency peak of adamantane to 38.4 ppm. The 1H excitation and decoupling radiofrequency (rf) fields were set to 100 kHz. CP conditions were optimized to fulfill the Hartmann-Hahn condition under magic-angle spinning with minor adjustments to reach optimal experimental CP efficiency. All samples were packed in an argon-filled glovebox. For preparing the ex situ sample, 1 bar of 13CO2 was introduced to 100 mg of Cu-Zn/SiO2 (reduced at 300 °C under H2 after exposure to air) in a thick-walled glass reactor and then condensed under liquid nitrogen cooling. Then 1 bar of H2 was introduced while still maintaining liquid nitrogen cooling at -196 °C. The reactor was then heated up to 230 °C, which leads to a pressure increase to 5 bar. After 12 h, the reaction vessel was cooled down to room temperature and evacuated under high vacuum (10⁻⁵ mbar), and the resulting solid was stored in an argon-filled glovebox.
2.6. Catalytic testing in CO2 hydrogenation

CO2 hydrogenation reactions were conducted in a fixed-bed tubular reactor (9.1 mm ID) in down-flow configuration (PID Eng&Tech). In a typical experiment, 250 mg of catalyst powder oxidized in air was mixed with 5.0 g of SiC and loaded into the reactor under ambient conditions (20 and 30 mg were used for Cu/ZnO/Al2O3 and Cu/ZnO/Al2O3 Katalco, respectively). First, the catalyst was reduced for 1 h under a flow of 15% H2/N2 (50 mL min⁻¹) at 300 °C and atmospheric pressure. After cooling down to 230 °C, the reactor was pressurized to 25 bar with a flow of CO2:H2:N2 (1:3:1, 50 mL min⁻¹) for 30 min. The reactor was then set to measurement conditions (230 °C, 25 bar) and the gas phase was analyzed via online gas chromatography (Agilent 7890B) equipped with an FID for CH3OH and a TCD for N2, CO2, CO and CH4. Different contact times were probed by changing the gas flow rate from 100 mL (STP) min⁻¹ to as low as 6 mL (STP) min⁻¹. Finally, activity data was collected at the initial flow rate of 100 mL min⁻¹ to check for potential deactivation of the catalyst. The reaction rates, conversions and selectivities were calculated using Eqs. (2)-(5), where F_in is the total gas inlet flow rate [mol h⁻¹], F_out is the total gas outlet flow rate [mol h⁻¹], c_x,in and c_x,out are the inlet and outlet gas fractions of species x, r_Cu,x is the formation rate of species x per gram copper [g_x h⁻¹ g_Cu⁻¹], m_cat is the mass of catalyst in the reactor [g_cat], w_Cu the weight loading of copper [wt_Cu%], S_x the product-based selectivity of product x, F_i,out the flow rates of the products, and X_CO2 the conversion of CO2. Intrinsic formation rates (with respect to the contact time) were obtained by using a second/third-order polynomial fit on the experimental data at conversions below 7%.
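The bodies of Eqs. (2)-(5) did not survive reproduction; the sketch below implements the standard carbon-based definitions consistent with the variable list above (the exact forms are our assumption), together with the polynomial extrapolation to zero contact time. All numbers are hypothetical.

```python
import numpy as np

M_CH3OH = 32.04  # g/mol

def rates_conv_sel(F_in, F_out, c_in, c_out, m_cat, w_cu):
    """Conversion, carbon selectivities and Cu-normalized CH3OH rate.
    c_in/c_out are dicts of molar fractions; products here: CH3OH and CO."""
    X_co2 = 1.0 - (F_out * c_out['CO2']) / (F_in * c_in['CO2'])
    F_prod = {x: F_out * c_out[x] for x in ('CH3OH', 'CO')}
    S = {x: f / sum(F_prod.values()) for x, f in F_prod.items()}
    r_ch3oh = F_out * c_out['CH3OH'] * M_CH3OH / (m_cat * w_cu)  # g h^-1 g_Cu^-1
    return X_co2, S, r_ch3oh

# intrinsic rate: polynomial fit of rate vs. contact time, evaluated at tau = 0
tau = np.array([0.15, 0.30, 0.60, 1.20])  # hypothetical contact times
r   = np.array([1.45, 1.30, 1.05, 0.70])  # hypothetical CH3OH rates
print(np.polyval(np.polyfit(tau, r, 2), 0.0))
```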
The specific surface area of this material is 187 m² g⁻¹, as determined by N2 physisorption isotherms and Brunauer-Emmett-Teller [36] (BET) analysis (Table S1), which is close to that of the initial SiO2 material (ca. 200 m² g⁻¹). Transmission electron microscopy (TEM) studies show the presence of small and narrowly distributed CuZnx alloy (vide infra) nanoparticles in Cu-Zn/SiO2 (3.9 ± 1.0 nm) (Fig. 1b). The particle size is slightly larger than for the corresponding Cu/SiO2 (2.9 ± 1.3 nm) prepared via a similar approach and more similar to the corresponding gallium-based material (Cu-Ga/SiO2) (4.6 ± 1.4 nm) [22]. A surface metal nanoparticle concentration of 52 μmol g_cat⁻¹ was determined by N2O titration for Cu-Zn/SiO2 (assuming a 1:2 stoichiometry between N2O and the surface sites) (Table S1), which is similar to what is obtained for Cu/SiO2 [22], considering the larger particle sizes for Cu-Zn/SiO2. This increased N2O consumption for Cu-Zn/SiO2 is likely due to the reaction of N2O with reduced zinc sites arising from the CuZnx (surface) alloy (vide infra) [37]. Chemisorption experiments using H2 at 40 °C were performed since this was shown to be a reliable method to obtain metal dispersions for Cu-based systems with similar physicochemical properties [31]. A metal surface site concentration of 64 μmol g_cat⁻¹ was obtained for Cu-Zn/SiO2 (assuming a 1:2 stoichiometry between H2 and the metal surface sites), consistent with the number obtained from N2O titration (Table S1 and Fig. S2). Powder X-ray diffraction shows no crystalline phases, due to the amorphous nature of the SiO2 support and the presence of small metal nanoparticles (Fig. S3). IR spectroscopy of Cu-Zn/SiO2 in the presence of 90 mbar CO at room temperature shows a stretching band at 2092 cm⁻¹, which is red-shifted with respect to what is observed for pure Cu/SiO2 at 2106 cm⁻¹, evidencing a different copper morphology/structure (Fig. S4). Furthermore, the presence of Lewis acidic zinc sites is shown by pyridine adsorption and IR spectroscopy [38], where the ring vibrational band of pyridine at 1611 cm⁻¹ for Cu-Zn/SiO2 is observed, likely associated with its adsorption on ZnII sites (Fig. S5). Pyridine on Cu-Zn/SiO2 is fully desorbed at 500 °C under high vacuum (10⁻⁵ mbar) (Fig. S5).
In order to obtain further information regarding the oxidation states and structural environments of zinc and copper in Cu-Zn/SiO2, XAS spectra at the zinc and copper K-edges were recorded for the as-prepared material ex situ under inert conditions (Fig. 1c). The Zn K-edge for Cu-Zn/SiO2 shows an edge energy of 9658 eV, while for ZnII@SiO2 and ZnO the edge energy is higher, at 9662 eV (Fig. S6). This decrease in edge energy for Cu-Zn/SiO2 corresponds to reduced Zn species [39]. A feature at 9662 eV in the first derivative of the Zn K-edge XANES spectrum of Cu-Zn/SiO2 also evidences that a fraction of the zinc sites remain as ZnII (Fig. 1c). The XANES spectrum at the Cu K-edge has an edge energy of 8979 eV, indicative of reduced copper (Fig. S7). Linear combination fits using ZnII@SiO2 and α-brass show that 51% of the sites can be fitted as ZnII and 49% as α-brass (Fig. S8). Overall, the XAS spectra show that reduction of the samples after Cu grafting (500 °C under H2) leads to a partial reduction of ZnII with the formation of CuZnx alloys along with remaining ZnII sites. These findings are similar to what was found for the corresponding Cu-Ga/SiO2 system, where the formation of a CuGax alloy is also observed [25], but contrast with what was observed for the Cu-Ti/SiO2 [23] and Cu-Zr/SiO2 [22] systems, which remained as isolated TiIV and ZrIV sites upon Cu nanoparticle formation.
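A two-component linear combination fit of normalized XANES spectra reduces to a one-parameter least-squares problem. The sketch below is our own illustration, not the Demeter implementation used in the paper:

```python
import numpy as np

def lcf_two_refs(mu, ref_a, ref_b):
    """Fraction w of reference A in a two-component linear combination fit,
    mu ~ w*ref_a + (1 - w)*ref_b, with w constrained to [0, 1].
    All spectra must be normalized and sampled on a common energy grid."""
    d = ref_a - ref_b
    w = np.dot(d, mu - ref_b) / np.dot(d, d)  # closed-form least squares
    return float(np.clip(w, 0.0, 1.0))

# applied to the Zn K-edge with Zn(II)@SiO2 and alpha-brass references,
# a result of w ~ 0.51 would reproduce the reported 51%/49% split
```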
Catalytic performance in CO₂ hydrogenation
Cu-Zn/SiO₂ was tested in CO₂ hydrogenation at 230 °C and 25 bar (Fig. S9). Following exposure to air, the material was first reduced at 300 °C under H₂. The catalyst was then tested by varying the gas flow rate to examine the effect of contact time (Fig. S10) on the catalytic activity/selectivity at conversions below 10% (ca. 15% conversion at thermodynamic equilibrium).
The intrinsic formation rates obtained by extrapolating to zero contact time were evaluated and compared with Cu/SiO₂, Cu-Zr/SiO₂ and Cu-Ga/SiO₂. These materials have similar Cu loadings and particle size distributions as well as metal (M) site densities in Cu-M/SiO₂ (M = Ga, Zn and Zr), albeit slightly lower for Zr (Table S2), thus allowing a direct comparison between these catalysts and Cu-Zn/SiO₂. Two Cu/ZnO/Al₂O₃ catalysts, one commercially available and one prepared from a malachite precursor, were used as benchmark materials [30]. The intrinsic CH₃OH formation rate is 1.6 g h⁻¹ g_Cu⁻¹ for Cu-Zn/SiO₂, which is 5 times higher than for Cu/SiO₂ and also slightly higher than for Cu-Zr/SiO₂ or Cu-Ga/SiO₂ (Fig. 2a). Note that the catalytic activity of the support by itself (Zn(II)@SiO₂) is below detection limits. The intrinsic CO formation rates for Cu/SiO₂ and Cu-Zr/SiO₂ (0.3 g h⁻¹ g_Cu⁻¹) are similar to that of Cu-Zn/SiO₂. This leads overall to a CH₃OH selectivity of 86% for Cu-Zn/SiO₂, with CO being the only byproduct. Thus, Cu-Zn/SiO₂ has a higher CH₃OH selectivity than unpromoted Cu/SiO₂ (48%) or even Cu-Zr/SiO₂ (77%), and is similar to, albeit slightly lower than, Cu-Ga/SiO₂ (90%) [22]. Both the CH₃OH and CO formation rates decrease at longer contact times (Fig. S11), indicating product inhibition in both pathways for Cu-Zn/SiO₂, similar to Cu-Ga/SiO₂ [25]. It is particularly noteworthy that for the corresponding Cu-Ti/SiO₂ and Cu-Zr/SiO₂ catalysts, only the CH₃OH formation rate decreases with longer contact times [22,23]. Since both CH₃OH and CO formation rates decrease with increasing contact time, a high selectivity toward CH₃OH is maintained for Cu-Zn/SiO₂ (>70%) at conversions up to 5% (Figs. 2b and S12). This high CH₃OH selectivity at higher conversions is not observed for similar catalysts that are more affected by conversion: Cu/SiO₂ and Cu-Zr/SiO₂ only reach 30% and ca. 40% CH₃OH selectivity, respectively, at a conversion of 5% (Fig. 2b).
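The extrapolation to zero contact time amounts to fitting the measured rate versus contact time and reading off the intercept. The sketch below illustrates this with invented (contact time, rate) pairs; note also that the carbon-molar selectivity convention used in the last lines is an assumption on my part, and the paper may define selectivity differently.

```python
# Estimate intrinsic formation rates by extrapolating measured rates to zero
# contact time with a straight-line fit, then compute CH3OH selectivity.
# The (contact time, rate) pairs below are invented for illustration only.
import numpy as np

def intrinsic_rate(contact_time: np.ndarray, rate: np.ndarray) -> float:
    slope, intercept = np.polyfit(contact_time, rate, 1)
    return intercept                    # rate extrapolated to tau -> 0

tau = np.array([0.5, 1.0, 2.0, 4.0])          # arbitrary contact-time units
r_meoh = np.array([1.5, 1.4, 1.2, 0.9])       # g CH3OH / (h * g_Cu), toy data
r_co = np.array([0.28, 0.26, 0.23, 0.18])     # g CO / (h * g_Cu), toy data

r0_meoh = intrinsic_rate(tau, r_meoh)
r0_co = intrinsic_rate(tau, r_co)
# Mass-based rates converted to molar rates before computing selectivity
# (one common convention; the paper's definition may differ).
sel = (r0_meoh / 32.04) / (r0_meoh / 32.04 + r0_co / 28.01)
print(f"intrinsic rates: CH3OH {r0_meoh:.2f}, CO {r0_co:.2f} g h^-1 g_Cu^-1")
print(f"intrinsic CH3OH selectivity: {sel:.0%}")
```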
In comparison to Cu-Zn/SiO₂, Cu/ZnO/Al₂O₃ [29] (Figs. S13-S18) shows a higher CH₃OH formation rate (3.9 g h⁻¹ g_Cu⁻¹) under the same reaction conditions, but it also favors the formation of CO (0.9 g h⁻¹ g_Cu⁻¹), hence the overall lower intrinsic CH₃OH selectivity (79% vs. 86%) (Fig. 2b). The CH₃OH selectivity of Cu/ZnO/Al₂O₃ also drops more drastically with conversion (50% CH₃OH selectivity at around 5% conversion) compared to Cu-Zn/SiO₂. This shows that the Cu-Zn based catalyst generated via SOMC can maintain a higher CH₃OH selectivity at higher conversion (Fig. 2b). The main difference is that the CO formation rate is less affected by contact time for Cu/ZnO/Al₂O₃ than for Cu-Zn/SiO₂, suggesting different reaction mechanisms for CO formation between the two materials.
A similar particle size distribution by TEM is obtained for the spent Cu-Zn/SiO₂ catalyst (4.2 ± 1.3 nm, Fig. S19) compared to the fresh catalyst (3.9 ± 1.0 nm). This is further supported by the absence of any crystalline phases in powder X-ray diffraction (Fig. S3).
In situ X-ray absorption spectroscopy
To obtain further insights into the structure of zinc and copper in Cu-Zn/SiO₂, the material was further investigated by in situ XAS at the Zn and Cu K-edges (Fig. 3). The X-ray absorption spectra of Cu-Zn/SiO₂ were measured after oxidation of the catalyst in air, followed by reduction at 300 °C under H₂. The temperature was then decreased to 230 °C, the reaction gas mixture consisting of CO₂:H₂:N₂ (1:3:1) was introduced, and the pressure was increased to 5 bar (see materials and methods section).
The zinc K-edge after exposure to air shows an edge energy at 9662 eV, consistent with the complete oxidation of zinc. Upon reduction of the oxidized catalyst at 300 °C under H₂, the white-line intensity decreases and a feature towards lower energy (9658 eV) appears, which is indicative of reduced zinc sites (Fig. 3). A similar feature is observed for the as-prepared catalyst, but with a higher intensity of the signal at lower energy (9658 eV), indicating that reduction of the air-exposed catalyst results in a lower fraction of reduced zinc than in the as-prepared catalyst. Under reaction conditions - at 230 °C and 5 bar under a mixture of CO₂:H₂:N₂ (1:3:1) - the intensity of the white line is intermediate between those of the material after exposure to air and after reduction. The feature at lower energy (9658 eV) persists, indicating the presence of remaining reduced zinc sites in the Cu-Zn system. In comparison, the XAS spectrum of Zn(II)@SiO₂ has a higher white-line intensity and lacks the feature at 9658 eV characteristic of reduced zinc sites under H₂ at 300 °C (Fig. S20). This shows that the presence of copper is necessary to obtain reduced zinc sites. In order to quantify the fraction of reduced zinc sites, linear combination fits using isolated Zn(II) surface sites (Zn(II)@SiO₂) and α-brass were performed; this analysis shows that after H₂ treatment at 300 °C, 71% of the zinc sites are fitted as Zn(II) and the remaining sites as α-brass (Fig. S21). Linear combination fits of the XANES spectrum under the reaction gas mixture show that 84% of the zinc is present as Zn(II) sites, with the remaining sites fitted as α-brass (Fig. S22). This contrasts with what was found for Cu-Ga/SiO₂, where all the gallium sites are oxidized under the same reaction conditions [25]. Since the in situ XAS measurements were carried out at only 5 bar, in contrast to 25 bar for the catalytic test, due to instrumental limitations, one may expect that increasing the total pressure (e.g. to 25 bar) could further favor the oxide form of zinc and Cu⁰, owing to a higher CO₂ conversion and thus a higher partial pressure of H₂O. However, this low-pressure experiment already indicates the subtle differences between Cu-Zn/SiO₂ and Cu-Ga/SiO₂. At the Cu K-edge, copper in Cu-Zn/SiO₂ is fully oxidized under air, fully reduced under H₂, and remains so under reaction conditions (Fig. S23). In summary, in situ XAS shows that the oxidation state of zinc in Cu-Zn/SiO₂ is highly dependent on the reaction conditions, with Zn(II) and Zn⁰ sites coexisting under reaction conditions.
A common feature of all the Cu-based CO₂ hydrogenation catalysts prepared via SOMC is that they contain well-defined Lewis acidic surface metal sites on SiO₂ (Zr(IV), Ti(IV), Ga(III) and Zn(II)). These likely play an important role in driving CH₃OH formation. Ex situ solid-state NMR spectroscopy of the spent catalyst shows the absence of formate and the presence of only methoxy surface species; IR spectroscopy (Fig. S25) also shows the absence of formate and the presence of only methoxy species, evidenced by the ¹³C-H stretches at around 2955 and 2855 cm⁻¹. It is noteworthy that Cu-Zn/SiO₂ and Cu-Ga/SiO₂ [25], which only show the presence of methoxy surface species, are also both highly selective for CH₃OH at higher conversions. This likely indicates that the absence of stable formate surface species could play a major role in improving CH₃OH selectivity over CO. In fact, we have recently shown that highly stabilized formate species, as in the case of Cu/Al₂O₃, lead to the preferential formation of CO at higher conversion, likely via formation of methyl formate that can readily decompose into CO and methanol [26].
Conclusion
A Cu-Zn based catalyst was generated by surface organometallic chemistry, forming CuZnₓ alloy nanoparticles along with residual Zn(II) sites on SiO₂. This material contrasts with the previously prepared Cu-Zr and Cu-Ti based systems, where no reduction of Zr(IV) and Ti(IV) occurred, but displays features similar to the reported Cu-Ga based system, which also contains a CuGaₓ alloy in the as-synthesized material. The Cu-Zn based catalyst shows high activity in CO₂ hydrogenation, forming CH₃OH as the main product even at relatively high conversion. Compared to Cu/ZnO/Al₂O₃, the Cu-Zn based catalyst generated via SOMC shows higher CH₃OH selectivity, especially at higher conversions. Under reaction conditions, zinc is present both in its reduced state and as Zn(II) sites according to in situ XAS, which contrasts with what is observed for CuGaₓ, where only Ga(III) sites are present. Notably, no formate species are intercepted and only methoxy surface species are observed by ex situ solid-state NMR and IR spectroscopy; this observation is consistent with the higher CH₃OH selectivity at higher conversion. The Cu-Zn based catalyst thus shows structural and catalytic similarities to the previously reported Cu-Ga based system, and indicates important features required for highly active and selective catalysts for the hydrogenation of CO₂ to CH₃OH. We are currently working on transposing these findings to develop improved industrial catalysts.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 2020-05-21T09:07:47.370Z | 2020-05-15T00:00:00.000 | {
"year": 2021,
"sha1": "0f48b5a3b33b61121f0fabbba7f5cf0f41e3fd2d",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.jcat.2020.04.028",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "8950f6da618a372510ddc532e96b4924bbb2795b",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
28782415 | pes2o/s2orc | v3-fos-license | Online educational counselling for students with special needs: building rapport
This paper reports the findings from a study that investigated the effects of providing online counselling for undergraduate students with long-term health problems. Issues associated with learning at a distance for such students include fatigue, manual dexterity, and academic and social isolation, together with a need for better interactive communication with support agencies (Debenham, 1996a). The results of a feasibility study undertaken in 1996 suggested that, for students with special needs, personal rapport with their educational counsellor is considered important for problems to be aired and addressed (Debenham, 1998a). This raises interesting questions relating to how such rapport can be developed using computer-mediated communication (CMC). Participants in the study reported appreciation of a small amount of informal contact with the counsellor in a closed peer-group conference; this conference is described in Debenham (1996b). Building on this finding, a main study was undertaken, modified by the addition of a counselling topic, a 'Virtual Study' for the counsellor, within this conference area (Debenham, 1998b). The counsellor was encouraged to participate informally in the other student-led topics. This added a group dimension to the study. The results are encouraging: increased levels of motivation and enjoyment of the study process were reported by more than three-quarters of the sample, and an increased degree of autonomy by more than half. These findings throw light on the support of students with special needs and also contribute to the development of knowledge in the wider fields of academic advising and the use of CMC in distance education.
Introduction
Distance learners with long-term health problems face a number of difficulties in their studies. These include severe fatigue, problems of manual dexterity, and academic and social isolation, together with a need for better interactive communication with support agencies (Debenham, 1996a). It was expected that the use of computer-mediated communication (CMC) could provide a possible way to tackle all four problems described above.
Computer conferencing might also prove to be an effective route for facilitating educational counselling services for such students. The role of an educational counsellor in the Open University, the context of the study reported here, is based on a developmental view of counselling and advising (Bailey et al., 1996; Frost, 1991; UDACE, 1986). This approach aims to encourage the personal development and progress of the learner as a whole person. A feasibility study was undertaken in 1996 to investigate the effects of providing access to an educational counsellor online within the environment of the 'Virtual Campus' (computer conferencing system) of the university. The results of this study suggested that, for students with special needs, personal rapport with their educational counsellor is considered important for problems to be aired and addressed. This raises questions relating to how such rapport can be developed using computer-mediated communication (CMC). New features introducing an element of interactive group discussion between the counsellor and students were adopted for the main study. The methodology and results of this study are described in the following section.
The main study
Thirteen part-time undergraduate distance learners (seven male, six female) from seven different regions participated in this study. They included the six who had taken part in the earlier feasibility study. All thirteen had previously completed undergraduate courses with the university. The new sample (four male, three female) was drawn from students with a similar level of health difficulties who had had access to the self-help areas of the 'Virtual Campus' in the previous year. The intention was to provide a level playing field for the whole group in every respect other than online educational counselling. This strategy would enable comparisons to be made between the experience of the two samples at the end of the study. Questionnaires were issued to the subjects at the beginning, mid-session and end of the academic year. The counsellor was asked for a pre-participation statement at the beginning of the year and to complete a questionnaire at the end of the year.
In 1997 the Open University adopted SoftArc's FirstClass, an icon-based program, for general use to support the 'Virtual Campus'. This superseded the text-based CoSy4/Wigwam software in use during the period of the earlier study.
Introducing a group dimension to online educational counselling
The online educational counselling provision was modified for the main study by features that added a group dimension to the online environment. Firstly, a counselling topic was set up within the closed area of the existing peer-group conference, DOORway. Within this topic only, metaphorically speaking the 'Virtual Study', the counsellor was in charge of the discussion. This enabled students to raise issues that they felt were relevant to the whole group within a confidential environment. It permitted both 'one-to-many' and 'many-to-many' interactive communication, and consequently the possibility of wider exchange of information relating to study matters. Additionally, the counsellor was encouraged to participate informally, as a guest, in the five student-led topics of the conference. Figure 1 shows the icon-based DOORway desktop as it appeared on the computer screen of each member of the conference during the period of the main study.
It was expected that these two measures might combine to create a relaxed atmosphere where students and counsellor could 'get to know each other'. In this way it might be possible steadily to build a relationship of rapport so that educational counselling could be both readily sought and given (either by one-to-one email or in the group), and so increase the effectiveness of interactive contact between student and counsellor. An 'information' topic accessed via the main DOORway icon enabled formal dissemination of information to the group by the student moderator and the researcher.
Results
At the end of the study, the data taken from the student questionnaires and counsellor records show that there had been a marked increase in the level of usage of the online counsellor's services compared with that of the feasibility study. The majority of the sample (ten students) had contacted the online counsellor for help/advice during the year.
The data relating to the number of issues raised with the counsellor in respect of the whole sample are presented in Table 1. Each query raised resulted in an exchange of messages between the student and counsellor. Nine of the subjects said that, given the choice, they would choose access to an online counsellor in preference to traditional routes. Possible reasons for this are explored later in the paper. One student thought both routes desirable, and two remained unsure. The students' responses to the question of whether they had experienced any changes in levels of motivation, autonomy and enjoyment of the study process by the end of the main study are presented in Table 2.
The results indicate that more than three-quarters of the sample reported an increase in motivation and enjoyment of their studies and more than half experienced a greater feeling of autonomy.
What then were the reasons for these encouraging changes? Data collated from the questionnaires indicate that informal interactive contact in the area of the peer-group conference had been effective in promoting a relationship of mutual trust and rapport between the online counsellor and the students. The conference had played a key role in providing a secure base both for access to online educational counselling and for peer support. It was ranked as the primary or secondary reason (of six options) for logging on to the 'Virtual Campus' by nine of the sample; seven of the sample ranked online counselling as next in importance. Recent research by Preece and Ghozati (1998) into empathic online communities identifies the importance of a strongly shared mutual interest base in such groups. In the case of the present study there was a doubly shared interest: that of Open University study and problems associated with disability.
A summary of the freeform answers in the student questionnaires on the value of contact with the online counsellor in the conference environment suggests that there are a number of possible advantages related to group discussion. These results are presented in Table 3. One way of looking at these data could be that informal contact in the student-led topics might be considered analogous to chatting to a tutor in the bar after a face-to-face tutorial.
In this setting students will often feel more relaxed and able to open up discussion informally. The fact that two students mentioned the relaxation of hierarchical boundaries suggests that 'meeting' in the student-led areas of the conference environment had a positive effect on feelings of autonomy and control. The final two points indicate the potential value of group discussion for sharing and dissemination of knowledge, both for the counsellor and the students. In the end-of-year questionnaire, the counsellor commented that, over the course of the academic year, her use of one-to-one email had increased as her relationship developed with the students in the peer-group conference. This suggests that student confidence in the counsellor had resulted from the building of rapport through group contacts. However, these inferences are to be pursued in a follow-up interview study.
In respect of the issue of motivation, there is evidence drawn both from a student questionnaire and from the counsellor's records that support and reassurance provided by the counsellor had enabled one particular student, who had been on the point of withdrawing from her studies, to continue. She successfully completed her course. In two other instances where students withdrew from their courses, continuity of contact was maintained in the DOORway conference and they were encouraged to re-register in the following year. The counsellor's records show that in six instances she had liaised with others in the university on behalf of students, usually regarding special needs. Use of CMC provides a fast and convenient way for such contacts (sometimes with more than one department) to be made. The result was that she had been able to provide unobtrusive personal help at the point of need.
In relation to the asynchronous use of CMC, two of the subjects specifically commented that it removed their fear of intruding on the counsellor's personal time (by telephone); messages could be uploaded at a time convenient to themselves, knowing that they would be read at her convenience. This may be of particular importance for this category of students (data collected both in the feasibility study and the current study suggest that working flexibly, sometimes at unsociable hours, to accommodate health difficulties is not unusual). Two students expressed a preference for contact via email rather than voice or face-to-face in an upsetting situation. This is also interesting since it runs counter to the widely held view (initially held by the online counsellor) that visual and aural cues are helpful in the creation of empathy in such situations. Nine of the sample considered that the use of an off-line reader was 'very important' to their use of the 'Virtual Campus' and a further two 'quite important'. It permitted messages to be prepared and read over a period of time. This is of particular relevance to those with dexterity problems; in such cases it can take considerable time for them physically to prepare a message. Use of an off-line reader also removes the worry of running up large telephone bills. In the final section that follows, the conclusions drawn from these results are presented.
Conclusions
The results suggest that CMC can provide an effective route for access to an educational counsellor. They indicate that it is possible to foster a relationship of mutual trust and rapport between counsellor and students within a closed group. Confidence to approach the counsellor with problems was promoted. The increased levels of motivation, autonomy and enjoyment reported by the majority of the sample suggest a perception of benefit to their studies. Issues related to these findings are to be the subject of a follow-up interview study. In this study, the perceived effects of the relaxation of hierarchical boundaries in the environment of the peer-group conference on student feelings of autonomy and motivation will be investigated. The reasons for the expressed student preference for the use of email in a distressing situation will also be explored. It could be that the medium facilitates communication which is at the same time both intimate and distancing, in addition to permitting a more considered dialogue between counsellor and student than might otherwise be possible. In a wider context, the results suggest that it would be valuable to explore further the role which empathic online communities might play in a distance-learning environment. There may be other groups for whom this approach could prove beneficial - for example, single parents of young children, who may also sometimes be house-bound and socially isolated - and for whom CMC support and educational counselling could bring equivalent gains.
Figure 1. The DOORway desktop - introducing an interactive group dimension to personal educational counselling online
Table 1: Total number of issues raised (categorized by type of issue) for the whole sample
Table 2: Changes in perceived levels of student motivation, autonomy and enjoyment of the study process at the conclusion of the main study
Table 3: Advantages of group discussion in building rapport | 2017-05-27T17:14:01.029Z | 1999-01-01T00:00:00.000 | {
"year": 1999,
"sha1": "050eb16eb658fdb62e4d1a62599794b27804d44f",
"oa_license": "CCBY",
"oa_url": "https://journal.alt.ac.uk/index.php/rlt/article/download/1046/1296",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "03c67f9012f5e5f6f39ac12313b2848263482760",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
9149893 | pes2o/s2orc | v3-fos-license | Scaling and maintenance of corneal thickness during aging
Corneal thickness is tightly regulated by its boundary endothelial and epithelial layers. The regulated set-point of corneal thickness likely shows inter-individual variations, changes with age, and responds to stress. Using anterior segment-optical coherence tomography, we measure murine central corneal thickness and report on body size scaling of murine central corneal thickness during aging. For age-matched mice, we find that corneal thickness depends on sex and strain. To shed mechanistic insights into these anatomical changes, we measure epithelial layer integrity and endothelial cell density during the life span of the mice using corneal fluorescein staining and in vivo confocal microscopy, respectively, and compare their trends with that of corneal thickness. Corneal thickness increases initially (1 month: 114.7 ± 3.0 μm, 6 months: 126.3 ± 1.6 μm), reaches a maximum (9 months: 129.3 ± 4.4 μm) and then decreases (12 months: 127 ± 2.9 μm, 13 months: 119.5 ± 7.6 μm, 14 months: 110.6 ± 10.6 μm), while the body size (weight) increases with age. We find that endothelial cell density decreases as the mice age from 2 months to 8 months, and the epithelial layer accumulates damage within this time frame. Finally, we compare murine corneal thickness with those of several other mammals, including humans, and show that corneal thickness has an allometric scaling with body size. Our results have relevance for organ size regulation, translational pharmacology, and veterinary medicine.
Introduction
Size is a critical property of biological systems and is tightly regulated [1]. Body size determines the metabolic rate of organisms [2,3], interactions of organisms with their environment [4,5] and is related to biological diversity and population size [6]. How does body size relate to the size of internal organs, what determines size of internal organs, and how internal organs respond to environmental stresses are fundamental questions in biology [7][8][9].
Allometry, a term coined by Julian Huxley and Georges Teissier in 1936, applies to the phenomenon of relative growth. Organs may have a higher growth rate than the whole body (positive allometry), an identical growth rate (isometry) or a lower relative growth rate (negative allometry) [10]. It is noteworthy that studies on allometry are not limited to analyzing age-related changes, the so-called ontogenetic allometry, but also include analysis of inter-individual and inter-species size variations, termed static and evolutionary allometry, respectively.
The eye has been subject to allometric analysis. The axial length of vertebrate eyes obeys a logarithmic relationship with body weight, with a negative allometric scaling [11]. Visual organs in humans grow to 80% of their adult size by age 4 [12]. Early in life, orbit size changes with age and doubles from its birth size by 7-8 years of age, when it reaches the adult size [13]. The size of an emmetropic adult human eye does not depend on sex or age [14]. Whether eye components also follow size rules similar to the whole eye remains to be studied.
The cornea forms the anterior segment of the eye and is the eye's primary light-focusing structure. Here, we ask how central corneal thickness changes during development and aging in the laboratory mouse and how it scales with body size. We determine how the scaling is affected by sex and how it depends on species. Finally, we perform a systematic literature study and compare the body size scaling of murine corneal thickness to that of several other mammals, including humans.
Materials and methods
Mice, husbandry and anesthesia

1-14-month-old C57BL/6 (H-2b) and BALB/c (H-2d) female and male mice were purchased from Charles River Laboratories (Wilmington, MA, USA). Mice were housed in a specific pathogen-free environment at the Schepens Eye Research Institute animal facility. They were aged in our AAALAC-certified vivarium on a standard 12:12-hour light-dark cycle and fed an irradiated diet (Teklad global 19% protein extruded Rodent Diet 2918, Harlan Laboratories, Indianapolis, IN, USA). Body weight was measured with a weighing scale.
Anesthesia was administered intraperitoneally by ketamine/xylazine solution at a dose of 120 mg/kg body weight and 20 mg/kg body weight, respectively. Under these conditions, the eyes of mice are naturally wide open and in a stable position, with pupils pointing laterally and upward.
All animals were treated according to the guidelines established by the Association for Research in Vision and Ophthalmology (ARVO) Statement for the Use of Animals in Ophthalmic and Vision Research and Public Health Review, and all procedures were approved by the Institutional Animal Care and Use Committee of the Schepens Eye Research Institute.
Corneal thickness measurement
Images of the anterior segment were taken by anterior segment-optical coherence tomography (AS-OCT; Bioptigen, Durham, NC, USA) in order to determine corneal thickness. High-resolution central corneal cross-sectional scans (scan range: 3.0 mm; scan resolution: 1000 × 100) were obtained in radial scan mode at each time point. We aligned the position of the cornea using the real-time display for guidance (Fig 1A). The position of the cornea was adjusted until the intensity peaks corresponding to the cornea were detected and maximized. The center of the scan pattern was aligned with the corneal vertex reflection [15] visualized on the OCT images (Fig 1B). Corneal epithelial thickness (Epi) and the combined thickness of corneal stroma (St) and corneal endothelium (End) were measured with the supplied software (Fig 1C).
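As a rough illustration of what the supplied software does internally, one could locate the interface reflections in a single A-scan intensity profile and convert the peak separations into thicknesses. The sketch below uses a synthetic profile and an assumed axial pixel size; it is not the vendor's segmentation algorithm.

```python
# Sketch: derive corneal layer thicknesses from one OCT A-scan by finding
# intensity peaks at the air/epithelium, epithelium/stroma and endothelium/
# aqueous interfaces. The profile and axial pixel size are synthetic; the
# vendor software performs the real segmentation.
import numpy as np
from scipy.signal import find_peaks

AXIAL_UM_PER_PX = 1.6                      # assumed axial sampling (made up)

def layer_thicknesses(profile: np.ndarray) -> dict:
    peaks, _ = find_peaks(profile, prominence=0.2)
    front, epi_stroma, back = peaks[:3]    # first three interface reflections
    return {
        "epithelium_um": (epi_stroma - front) * AXIAL_UM_PER_PX,
        "stroma_plus_endothelium_um": (back - epi_stroma) * AXIAL_UM_PER_PX,
        "total_um": (back - front) * AXIAL_UM_PER_PX,
    }

# Synthetic A-scan: three Gaussian reflections on a noisy baseline.
z = np.arange(300)
profile = sum(np.exp(-0.5 * ((z - c) / 2.0) ** 2) for c in (50, 75, 130))
print(layer_thicknesses(profile + 0.01 * np.random.default_rng(0).random(300)))
```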
Corneal endothelial cell density measurement
In vivo confocal microscopy (IVCM) with the Heidelberg Retina Tomograph (HRT)/Rostock Cornea Module (Heidelberg Engineering GmbH, Heidelberg, Germany) was used to examine endothelial cell density (ECD) in the cornea. Mice were anesthetized and placed on the microscope stand, and the eyes were coated with GenTeal gel (Novartis, St. Louis, MO, USA). Images were taken covering an area of 400 × 400 μm² with an axial optical resolution of 1 μm/pixel. ECD was then analyzed quantitatively from these images using ImageJ.
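The density calculation behind the ImageJ analysis reduces to dividing a cell count by the imaged field area. A minimal sketch of that arithmetic, assuming the cells have already been counted in the 400 × 400 μm² field (the counts below are hypothetical placeholders):

```python
# Endothelial cell density from a cell count in a confocal field of known
# area (400 x 400 um^2 here). The counts are placeholder values, not data.
FIELD_UM = 400.0
FIELD_MM2 = (FIELD_UM / 1000.0) ** 2          # 0.16 mm^2

def ecd_cells_per_mm2(n_cells: int) -> float:
    return n_cells / FIELD_MM2

for n in (350, 420):                           # hypothetical counts
    print(f"{n} cells in field -> ECD = {ecd_cells_per_mm2(n):.0f} cells/mm^2")
```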
Corneal fluorescein staining
Corneal fluorescein staining (CFS) and the National Eye Institute grading system (Bethesda, MD) were used to evaluate corneal epithelial damage caused by DED [16]. Briefly, 1 μl of 2.5% fluorescein (Sigma-Aldrich) was applied into the lateral conjunctival sac of the mice, and after 3 minutes the corneas were examined with a slit-lamp biomicroscope under cobalt blue light. Punctate staining was recorded in a masked fashion with the standard National Eye Institute grading system of 0-3 for each of the five areas of the cornea: central, superior, inferior, nasal and temporal.
Allometric analysis
In allometric analysis, the relationship between the two measured quantities is typically expressed as a power-law function, which expresses a scale symmetry: $Y = kX^{\alpha}$, or in logarithmic form: $\log(Y) = \alpha \log(X) + \log(k)$ [17]. Thus, we fit a linear function to the log-log plot of our data and report the slope, α, as the estimated allometric coefficient.
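A minimal sketch of this fit in Python, using the three (weight, thickness) pairs reported below in the Results for 1-, 6- and 9-month-old mice (the ascending phase); the pairing of these particular values is for illustration only.

```python
# Fit the allometric exponent alpha from Y = k * X**alpha via linear
# regression on log-transformed data: log(Y) = alpha*log(X) + log(k).
import numpy as np
from scipy import stats

weight_g = np.array([11.7, 25.5, 27.8])          # 1M, 6M, 9M body weights
thickness_um = np.array([114.7, 126.3, 129.3])   # matching corneal thickness

res = stats.linregress(np.log(weight_g), np.log(thickness_um))
alpha, k = res.slope, np.exp(res.intercept)
print(f"alpha = {alpha:.2f} (alpha < 1 indicates negative allometry)")
print(f"k = {k:.1f}, R^2 = {res.rvalue**2:.3f}")
```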
Statistical analysis
Significance of differences in corneal thickness, body weight and weight-adjusted corneal thickness between groups was analyzed by one-way ANOVA with Bonferroni post hoc test (Fig 2); corneal thickness, endothelial cell density and CFS scores were compared to baseline levels by Student's t-test (Fig 3) using Prism software (GraphPad, San Diego, CA, US). Data are presented as mean ± standard error of the mean (SEM) and considered statistically significant at p < 0.05. Linear regression and correlation analyses between body weight and corneal thickness were performed using Origin V8.5 SR1 software (OriginLab Corporation, Northampton, MA, US) and its built-in statistical packages (Fig 4). Pearson correlation analysis was used for normally distributed data and Spearman correlation analysis for non-normally distributed data.
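The same battery of tests can be reproduced with scipy; the sketch below uses hypothetical per-mouse thickness arrays and the simple multiply-by-number-of-tests Bonferroni correction, which may differ in detail from Prism's implementation.

```python
# One-way ANOVA with Bonferroni-corrected pairwise t-tests, plus Pearson and
# Spearman correlations. Group arrays are placeholders, not the study's data.
from itertools import combinations
import numpy as np
from scipy import stats

groups = {                                # hypothetical thickness data (um)
    "1M": np.array([112, 115, 117, 114]),
    "6M": np.array([125, 127, 126, 128]),
    "9M": np.array([128, 131, 129, 130]),
}

f, p = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f:.1f}, p = {p:.2g}")

pairs = list(combinations(groups, 2))
for a, b in pairs:                        # Bonferroni: multiply p by n tests
    t, p_raw = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: p_bonf = {min(1.0, p_raw * len(pairs)):.3g}")

weight = np.array([11.7, 25.5, 27.8])     # values taken from the Results text
thick = np.array([114.7, 126.3, 129.3])
print("Pearson r:", stats.pearsonr(weight, thick)[0])
print("Spearman rho:", stats.spearmanr(weight, thick)[0])
```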
Cornea thickness changes by age
We measured corneal thickness for BALB/c female mice at different ages ranging from one month (114.7 ± 3.0 μm) to 14 months (Fig 2A and 2B). We identified two phases: (i) the thickness increases initially (6 months: 126.3 ± 1.6 μm) and reaches a maximum (9 months: 129.3 ± 4.4 μm); (ii) we then observed a reduction in corneal thickness (12 months: 127 ± 2.9 μm, 13 months: 119.5 ± 7.6 μm, 14 months: 110.6 ± 10.6 μm). The body size (weight) showed a distinct trend (Fig 2B). For young ages, we observed an increase in body weight with age that saturates at nearly 6 months (1 month: 11.7 ± 0.6 g, 3 months: 20.8 ± 0.3 g, 6 months: 25.5 ± 0.3 g, 9 months: 27.8 ± 0.7 g, 12 months: 27.5 ± 0.9 g, 13 months: 26.5 ± 0.9 g, 14 months: 28.0 ± 0.6 g). Weight-normalized thickness (Fig 2B) follows a trend comprising an initial decline (1 month: 10.0 ± 0.6 μm/g, 3 months: 5.7 ± 0.2 μm/g, 6 months: 5.0 ± 0.1 μm/g, 9 months: 4.5 ± 0.2 μm/g), a plateau and a decline after one year (12 months: 4.5 ± 0.2 μm/g, 13 months: 4.5 ± 0.3 μm/g, 14 months: 4.0 ± 0.4 μm/g). We then assessed the contributions of the different corneal layers to the overall thickness change and observed that the stroma and endothelium contribute to the thickness change during adulthood and also late in life (S1 Fig). Finally, we assessed epithelial integrity and ECD to see whether age-related changes of corneal thickness are correlated with structural changes of the boundary epithelial and endothelial layers (Fig 3 and Panels A-C in S2 Fig). Initially, aging affects corneal thickness by changing the boundary layers that maintain the homeostasis of corneal water and electrolytes; ECD decreases as the mice age from 2 to 8 months.

[Fig 3 caption: The data suggest that, initially, aging affects corneal thickness by changing the boundary layers that maintain the homeostasis of corneal water and electrolytes. Endothelial cell density (ECD) decreases as the mice age from 2 months to 8 months, and the epithelial layer accumulates damage within this time frame. We did not see significant changes in corneal epitheliopathy and ECD (contrary to the thickness data) when we assessed very old mice (14 months). All data were obtained from n = 10 mice/group and representative data from three independent experiments are shown. All data were compared to baseline (2 months). Female BALB/c mice were used for this analysis. p values were calculated using Student's t-test and error bars represent SEM (***p < 0.001).]

Next we asked if corneal thickness depends on sex and strain. We compared the corneal thickness of 1-month-old mice and observed no difference between the two sexes (Fig 2C). However, for 1-year-old mice we observed significantly thinner corneas in male animals. The difference between sexes became clearer when we normalized the thickness by body weight. To determine if mice of different strains differ in their corneal thickness, we studied two age- and sex-matched strains, BALB/c and C57BL/6J mice. We observed no significant difference in corneal thickness between the two strains initially, but after normalizing the thickness by body weight, we found that C57BL/6J mice have relatively thinner corneas for their body size (S3 Fig).
Boundary epithelial and endothelial layers change with age
To gain mechanistic insights into the dynamics of corneal thickness, we measured ECD (Panels A and B in S2 Fig). We observed that the density declines rapidly for several weeks after one month of age and then continues to decrease at a lower rate through young adulthood until 14 months. When we normalized the density values by weight, the weight-normalized density initially declined and then reached a plateau at 5-6 months of age.
Next, we asked if animals of different sexes differ in their ECD (Panel C in S2 Fig). We compared the ECD of 1-month-old female and male mice and observed no differences. For 1-year-old mice, we observed a lower ECD in male mice compared with female mice. Normalizing the density values by weight, we found a more dramatic difference between the sexes. The male animals, both 1-month and 1-year old, had lower densities than their age-matched female counterparts.
To gain further mechanistic insights into the dynamics of corneal thickness, we measured the epithelial integrity of the mouse cornea and its dependence on age (Fig 3). Using CFS and slit-lamp assessment, we found that the integrity of the corneal epithelium is significantly impaired in 8-month-old mice irrespective of sex. The corneal epithelial lining of young adult mice (2-month-old) was typically intact.
Discussion and conclusions
In this study, we measured central corneal thickness changes with age, assessed the dependence of corneal thickness on sex in young and old mice, and compared murine corneal thickness with that of other mammals. The latter provided a scaling relation that, in turn, offers a simple way to estimate the weight/age dependency of corneal thickness in other animals from measurements in mice alone, without directly measuring it in those animals (which might be practically very difficult for certain rare species).
How corneal thickness changes with age has remained unclear in humans and animals. Previous studies reported no significant change in corneal thickness over time [29,30], while others showed a decreasing trend of corneal thickness with age [31,32]. However, those studies did not adjust the measured thicknesses by the subjects' body sizes (weights), which is what we did in our study (Fig 2). The thickness shows a seemingly increasing trend in the younger phase [33], followed by a decreasing trend (Fig 3), but after adjusting by weight, we clearly showed that central corneal thickness has a decreasing trend with age.
The increase in corneal thickness in old mice (8-month-old) compared to young mice (2-month-old) was associated with an increase in epitheliopathy and a decrease in ECD (Fig 3). The changes in epithelial and endothelial function may explain, at least partially, the observed changes in corneal thickness, because the epithelium and endothelium maintain the homeostasis of water and materials [34] in the cornea. Due to age-related changes in these layers, the flux of water and materials will likely change, and a new steady state with a new corresponding thickness may be reached. Our study, however, does not provide any functional analysis of the epithelial and endothelial layers, and these possible scenarios are to be tested in future studies.
To compare corneal thickness across inbred strains of mice, we measured the central corneal thickness of both BALB/c and C57BL/6J mice (S3 Fig). Our study showed that murine central corneal thickness is highly strain-dependent. Our data support previous studies demonstrating that the central corneal thickness of C57BL/6J mice is thinner than that of BALB/c mice after weight adjustment [25]. In addition, we revealed that ECD is also strongly influenced by genetic background, suggesting that genes may influence the physiologic attrition of ECD. Our data have great potential to increase the understanding of ECD disorders.
This study has some limitations that should be noted. First, we only used AS-OCT for assessing corneal thickness. For accurate examination, it is preferable to evaluate corneal thickness with multiple instruments, such as AS-OCT [15] and the Pentacam Scheimpflug system [35]. In the clinical setting in humans, a previous study compared AS-OCT and the Pentacam Scheimpflug system for assessing corneal thickness, reporting that both are reliable and reproducible [36]; therefore, we examined mouse corneal thickness by AS-OCT in this study. Moreover, the CFS score was used for assessing corneal epitheliopathy. Although we did not evaluate corneal structural changes by histological assessment, the CFS score has historically been used for the evaluation of corneal epitheliopathy in ocular surface diseases such as dry eye disease [37], which is strongly correlated with aging [38][39][40][41]. Therefore, we consider the CFS score useful for assessing age-related corneal surface epitheliopathy in a murine model. Finally, this study was focused on corneal thickness and not corneal size. The thickness is only a partial measure of corneal size. Allometric scaling, however, can be applied to any length scale in the body, including limb length and corneal thickness, as reported in this article.
Our study has revealed the dynamics of corneal thickness during the lifetime of the laboratory mouse. It will be of interest to researchers studying aging, comparative ophthalmology and veterinary medicine. To extrapolate the results of pharmacological studies performed on mice to other animals, it is essential to understand the relevant scaling relations. This is important not only for human studies but also for designing drug therapy for rare animals.
Supporting information S1 Fig. Thickness of epithelium and of stroma and endothelium combined versus age. Female BALB/c mice were used for the OCT measurements. All data were obtained from n = 10 mice/group and representative data from three independent experiments are shown. All data were compared to baseline (1 month). p values are calculated using Student's t-test and error bars represent SEM (*p < 0.05). The left panel presents the full corneal thickness, which is split into two parts, the thickness of the epithelium and the combined thickness of stroma and endothelium, in the right panel. | 2018-04-03T00:00:38.419Z | 2017-10-06T00:00:00.000 | {
"year": 2017,
"sha1": "44b64fcc5c0a6a4cf5b886fe02b7676667576c2f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1371/journal.pone.0185694",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "738f4ac03458f9a6f2a11710d4e83ec18a075cda",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
59416019 | pes2o/s2orc | v3-fos-license | Application of LW7 marker for identification of progenies with male sterility gene in sweet sorghum population
The objectives of this study were to verify the use of the LW7 marker in identifying maintainer lines (B-lines) and restorer lines (R-lines) in grain sorghum and sweet sorghum, and to identify B-lines in the F2, BC1F2 and BC2F2 generations. Twenty-five accessions of sorghum were evaluated, and the LW7 marker correctly identified the accessions carrying the male sterility gene (rf4), the cultivars Suphan Buri1 and 03B; these genotypes did not show the 779 bp band. The cross between Suphan Buri1 and the male-sterile line (A-line) 03A resulted in male-sterile progeny, confirming the usefulness of the marker in breeding programs. B-lines in the F2, BC1F2 and BC2F2 generations were identified with the LW7 marker. The segregation ratio of 3:1 for male fertility and male sterility in the progenies of the three generations supported a one-gene model of Mendelian segregation. The use of marker-assisted selection was successful for the development of sweet sorghum lines with male sterility.
INTRODUCTION
Agriculture provides fiber, food and fuel to human beings, and energy from renewable sources is very important due to the depletion of non-renewable crude oil (Ratnavathi et al. 2011). Bio-ethanol production, for example, is an effective way to convert an excessive supply of agricultural products (sugarcane and cassava) into value-added transportation fuel. However, the bio-ethanol industry competes for raw material with other industries such as sugar, animal feed and cassava starch, leading to an unstable supply chain of raw material and fluctuation of its price. Sweet sorghum can fill the gap of raw material shortage and extend the operation season of bio-ethanol factories by a few months.
Extensive research on sweet sorghum has been carried out in many countries, such as China (Gnansounou et al. 2005), India (Reddy et al. 2005), the Philippines (Reddy et al. 2011), the United States, Australia (Reddy et al. 2010a) and European countries (Berenji and Dahlberg 2004). Development of hybrid varieties is an approach to improve the yield potential of sweet sorghum (Yu and Cuizhen 1998, Reddy et al. 2010a). Since the discovery of male sterility in sorghum (Stephens and Holland 1954), hybrid varieties have been used extensively for grain sorghum. An attempt to transfer male sterility into sweet sorghum has been made (Reddy et al. 2010b); moreover, research on heterosis among germplasm accessions (Pfeiffer et al. 2010) and related species has also been carried out (Murray et al. 2009, Wang et al. 2009). Some experimental hybrids with A3 male sterility showed better performance for brix and stalk yield than their parents (Pfeiffer et al. 2010); additionally, Makanda et al. (2009) reported hybrids with brix performance higher than their parents, exhibiting heterosis of up to 112%.
Markers associated with male fertility, such as LW7, LW8 and LW9, were screened for their efficacy in identifying restorer genes in grain sorghum and sweet sorghum populations in the breeding program at Khon Kaen University. LW8 and LW9 were discarded from the project since these markers were not effective in identifying the correct genotypes. Nevertheless, LW7 was polymorphic and was selected for use in the marker-assisted selection program (Lunmat et al. 2008). LW7 has been used extensively in grain sorghum (Wen et al. 2002). However, its application in sweet sorghum to develop B-lines has not been reported. The objectives of this study were to verify the use of the LW7 marker in identifying maintainer lines (B-lines) and restorer lines (R-lines) in grain sorghum and sweet sorghum, and to select B-lines in the F2, BC1F2 and BC2F2 segregating populations.
Plant materials
Twenty-six cultivars and lines of sweet sorghum, grain sorghum and forage sorghum were used in this study. Eighteen cultivars were sweet sorghum from different sources. The cultivars Keller, Theis, Wray, Cowley and Bailey were introduced from the United States (Murray et al. 2009). Urja was introduced from Praj Co. Ltd., India. SSV84 and SV74 were introduced from the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT), India. BJ248 and BJ281 were from China (Audilakshmi et al. 2010). Suwan Sweet1, Suwan Sweet2, Suwan Sweet3, Suwan Sweet4, Suwan Sweet5, DA5 and KD1 were developed by Kasetsart University, Thailand; and KKU40 was developed by the Department of Plant Science and Agricultural Resources, Faculty of Agriculture, Khon Kaen University, Thailand (Ariyajaroenwong et al. 2012).
Five cultivars of grain sorghum (KU439, KU630, KU804, KU901 and KU905) were kindly donated by Kasetsart University, Thailand. A forage sorghum cultivar (Suphan Buri1), which had previously been identified as carrying the male sterility gene, was kindly donated by Suphan Buri Agricultural Research and Development Center, Thailand; and a B-line (03B) of grain sorghum was kindly donated by Kasetsart University, Thailand. These B-lines were used as checks for male sterility and were also used as parents to transfer male sterility to sweet sorghum. A male-sterile line (A-line) (A1 cytoplasm) of grain sorghum, also kindly donated by Kasetsart University, Thailand, was used as the female parent. The genetics of male sterility in the other genotypes were not previously known, and they were selected for the experiment for their good yield and agronomic traits.
Twenty-five genotypes (all except the A-line) were planted in single-row plots 5 m long, spaced 50 cm between rows and 10 cm between plants, in late February 2010. The line 03B (B-line) was also planted and used as a check. The twenty-five sorghum cultivars were crossed with the A-line sorghum, and the F1 crosses were planted in the next season.
Three hundred and fifteen plants of the F2 generation, derived from the cross between a B-line of grain sorghum (03B) and a high-yielding sweet sorghum (SSV84), were used as the initial population in this study. The hybridization for this F2 population was carried out in earlier work. The F2 plants were planted in rows, spaced 50 cm between rows and 30 cm between plants. B-lines in the F2 population were identified by the LW7 marker and then backcrossed with SSV84 three times in order to transfer male sterility into the sweet sorghum background. The backcrossing was accomplished with marker-assisted selection. Six hundred and twenty-two plants of the BC1F2 population and five hundred and thirty-seven plants of the BC2F2 population, derived from the progenies carrying genes conferring male sterility in the F2 populations, were screened for the gene conferring male sterility; the selected genotypes were crossed with SSV84 and the F1 generations were allowed to self-pollinate.
DNA extraction
One-month-old plants were used for DNA extraction. The entries consisted of the 25 lines/cultivars and the single plants of the F2 population derived from the cross between 03B and SSV84. The 25 lines/cultivars were screened for the male-sterile genotype (rf4rf4), whereas the F2, BC1F2 and BC2F2 populations were screened for progenies carrying the male-sterile genotype (rf4rf4).
A newly fully-expanded leaf of each plant was used for DNA extraction. Two plants were sampled individually for each of the 25 lines/cultivars, and single plants were used for the F2, BC1F2 and BC2F2 segregating populations. Leaf samples of 3 g were collected from 4-week-old field-grown plants and snap-frozen in liquid nitrogen. DNA extraction was done according to the method described by Dellaporta et al. (1983), with minor alterations. One percent agarose gel electrophoresis was used to cross-check the quantity and quality of DNA against λ-DNA as a standard.
The standardized amplification/PCR conditions were set, with minor modifications, as: pre-denaturation at 95 °C for 3 minutes; 35 cycles of denaturation at 94 °C for 30 seconds, annealing at 58 °C for 60 seconds (changed to 60 °C for 90 seconds) and extension at 72 °C for 60 seconds (changed to 120 seconds); followed by a final extension at 72 °C for 5 minutes. The amplified PCR products were then separated on 2% agarose gel electrophoresis. Bands were scored 1 for presence or 0 for absence of the band at 779 bp (Wen et al. 2002). The data from the PCR reactions were recorded as 1 or 0 for the presence or absence of the 779 bp band, respectively.
DATA ANALYSIS
The data of the 25 sorghum genotypes were used merely for screening these genotypes for lines that carry the male sterility gene (rf4). The binary data of the PCR reactions for the 315 plants of the F2 generation, 622 plants of the BC1F2 population and 537 plants of the BC2F2 generation were analyzed by the Chi-square (χ²) test (Gomez and Gomez 1984) as follows: $\chi^{2} = \sum_{i} \frac{(O_{i} - E_{i})^{2}}{E_{i}}$, where χ² = Chi-square, Oᵢ = observed frequencies and Eᵢ = expected frequencies. The Chi-square test verifies the segregation models of the progenies.
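A sketch of this test in Python, using the observed fertile:sterile counts reported below in the Results (Table 1) against the 3:1 expectation:

```python
# Chi-square goodness-of-fit test of observed fertile:sterile counts against
# the 3:1 Mendelian expectation, using the counts reported for the three
# generations (Table 1).
from scipy.stats import chisquare

populations = {"F2": (243, 72), "BC1F2": (469, 153), "BC2F2": (408, 129)}

for name, (fertile, sterile) in populations.items():
    n = fertile + sterile
    expected = [0.75 * n, 0.25 * n]
    chi2, p = chisquare([fertile, sterile], f_exp=expected)
    verdict = "fits 3:1" if p > 0.05 else "deviates from 3:1"
    print(f"{name}: chi2 = {chi2:.2f}, p = {p:.2f} -> {verdict}")
```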
RESULTS AND DISCUSSION
In this study, 25 genotypes of sorghum were screened for those containing the male sterility gene (rf4) using a molecular marker. Once the lines were identified, they were used in the crossing program to transfer the male sterility gene into the sweet sorghum background, and backcrossing was carried out three times. Marker-assisted selection was used for screening progenies carrying the male sterility gene.
Banding in lines and cultivars
Twenty-three of the 25 cultivars/lines produced the 779 bp band of the LW7 marker flanking the male fertility restorer gene (Rf4) (Figure 1). Suphan Buri1 (SP1) and the line 03B did not show this specific band (Figure 1); they were previously known to be genotypes carrying the gene conferring male sterility (rf4). The results indicated that the marker could correctly identify the sorghum genotypes carrying the male fertility restorer gene.
Test crosses of the 25 sorghum genotypes with the line 03A (an A-line) were also carried out to confirm the results. As expected, the 23 sorghum genotypes with the Rf4 gene did not show male sterility, whereas the pure lines Suphan Buri1 and 03B, which had the rf4 gene, produced F1 plants showing male sterility (data not shown). The results were very convincing for applying marker-assisted selection in this sweet sorghum breeding program.
LW7 was identified and subsequently converted to an STS/CAPS marker. This marker is small (214 bp) and AT-rich, and generates a 779 bp fragment at the fertility restorer locus; it is closely linked with the Rf4 gene (5.13 cM) (Wen et al. 2002). Therefore, identification of B-lines in grain, sweet and forage sorghum, and of B-lines in the three segregating populations of sorghum, is very effective.
Marker-assisted selection in segregating populations
Marker-assisted selection using the LW7 marker was employed in three consecutive populations (F2, BC1F2 and BC2F2) from the cross of a B-line (03B) of grain sorghum and a high-yielding sweet sorghum (SSV84). In the F2 generation, a total of 315 plants were screened for the 779 bp band using the LW7 molecular marker (Figure 2A). Two hundred and forty-three plants showed the 779 bp band, and 72 plants did not (Table 1). The segregation ratio of 3:1 conformed to the one-gene model of Mendelian segregation (1 RfRf : 2 Rfrf : 1 rfrf).
In the BC1F2 generation, 622 plants were screened using the LW7 marker. Four hundred and sixty-nine plants showed the 779 bp band, and 153 plants did not (Figure 2B). The segregation ratio of 3:1 was in good agreement with the one-gene model of Mendelian segregation. In the BC2F2 generation, 537 plants were screened. Four hundred and eight plants showed the 779 bp band, and 129 did not. Again, the segregation ratio of 3:1 conformed to Mendelian segregation for the one-gene model (Table 1).
LW7, LW8 and LW9 are STS or CAPS markers, two of which are co-dominant (Wen et al. 2002). LW7 is a dominant marker and therefore cannot distinguish homozygous from heterozygous genotypes. However, this marker can identify rf4rf4 genotypes correctly. Although LW8 and LW9 are co-dominant markers, they were not applicable to this population and were discarded from the experiment (Lunmat et al. 2008). In this study, however, the interest is stalk yield rather than grain yield, since new B-line and A-line sweet sorghums are being developed. Wen et al. (2002) successfully used the LW7, LW8 and LW9 markers for identifying R-line sorghum because of their close linkage with the restorer gene (Rf4), at 5.13, 3.18 and 0.79 cM, respectively. However, the LW8 and LW9 markers could not identify R-lines and B-lines in the F2 population (Lunmat et al. 2008).
Comparison of marker-assisted selection and the conventional method
Schematic diagrams of marker-assisted selection and conventional selection are presented in Figure 3. It is clear that marker-assisted selection can reduce the time required for line development. This is because the method does not require a progeny test for each cycle of backcrossing. Moreover, it also reduces the costs of labor and progeny trials, and it may reduce the population size needed for line selection.
Marker-assisted selection requires a well-equipped laboratory, and the cost of line development may be higher than for the conventional method. However, the method is faster, leading to more rapid release of new cultivars. This is advantageous under the high competition of the seed industry. In situations where labor cost is not expensive and the time for line development is not demanding, the conventional method may be more suitable than marker-assisted selection.
Marker-assisted selection for male sterility in sweet sorghum was completed in nine seasons. The marker LW7 was successfully used for line development. Due to time limits, gene validation was not performed. The efficiency of the method compared to the conventional one was not determined, and further investigations are required. This study proved that marker-assisted selection for male sterility using the LW7 marker is effective.
Figure 1. Agarose gel electrophoresis of DNA fragments of 25 sorghum genotypes obtained from PCR reaction of the LW7 marker. Note: 03B, a B-line sorghum; SP1, a cultivar with known genetic background for male sterility. One lane is used for 03B and two lanes for each of the others.
Figure 2. Agarose gel electrophoresis of DNA fragments of individual plants in the F2 generation (A), BC1F2 generation (B) and BC2F2 generation (C) obtained from PCR reactions of the LW7 marker. Note: 03B, a B-line sorghum used as a check.
Figure 3. Diagram comparing marker-assisted selection and the conventional method.
Table 1. Segregation ratios of fertile to sterile plants of sweet sorghum in the F2, BC1F2 and BC2F2 populations (ns: non-significant).
"year": 2013,
"sha1": "a64be6e6822099c5296bb149f4c4de2fdcff328f",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/cbab/a/sjkh8nY8gyt5xSvXHngdLTy/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "a64be6e6822099c5296bb149f4c4de2fdcff328f",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Biology"
]
} |
Synthesis, Characterization and Kinetic Behavior of Supported Cobalt Catalysts for Oxidative after-Treatment of Methane Lean Mixtures
The present work addresses the influence of the support on the catalytic behavior of Co3O4-based catalysts in the combustion of lean methane present in the exhaust gases from natural gas vehicular engines. Three different supports were selected, namely γ-alumina, magnesia and ceria and the corresponding catalysts were loaded with a nominal cobalt content of 30 wt. %. The samples were characterized by N2 physisorption, wavelength dispersive X-ray fluorescence (WDXRF), X-ray diffraction (XRD), Raman spectroscopy, X-ray photoelectron spectroscopy (XPS) and temperature-programmed reduction with hydrogen and methane. The performance was negatively influenced by a strong cobalt-support interaction, which in turn reduced the amount of active cobalt species as Co3O4. Hence, when alumina or magnesia supports were employed, the formation of CoAl2O4 or Co–Mg mixed oxides, respectively, with a low reducibility was evident, while ceria showed a lower affinity for deposited cobalt and this remained essentially as Co3O4. Furthermore, the observed partial insertion of Ce into the Co3O4 lattice played a beneficial role in promoting the oxygen mobility at low temperatures and consequently the catalytic activity. This catalyst also exhibited a good thermal stability while the presence of water vapor in the feedstream induced a partial inhibition, which was found to be completely reversible.
Introduction
Compressed natural gas is regarded as a suitable alternative to traditional automotive fuels such as gasoline or diesel, which are becoming more expensive and scarce with time. Natural gas vehicles have been demonstrated to produce less CO2, NOx and soot emissions than gasoline or diesel vehicles and are safer in case of accident [1,2]. Nevertheless, the application of this technology is accompanied by the necessity of controlling the emissions of unburned methane from the engine, as this is a powerful greenhouse gas. The most commonly applied solution is complete oxidation over a supported noble metal catalyst, such as platinum and/or palladium [3,4]. However, the price of these metals is extremely high and they are also prone to deactivation by sintering and the presence of water; this raises the cost of natural gas engines and limits their massive implementation. For this reason, the interest in developing noble-metal-free catalysts for methane oxidation is increasing.
Cobalt oxide-based catalysts, and among them spinel-type cobalt oxide (Co3O4), are considered good alternative candidates to noble metals due to their already demonstrated high efficiency for the oxidation of hydrocarbons and their greater availability [5-7]. The reason for this high activity seems to lie in their good redox properties, such as reducibility and mobility of oxygen species at low temperatures, which derive from the ease with which the constituent ions of these materials shift between oxidation states. Moreover, these catalysts are more thermally stable than noble-metal based catalysts and generally more resistant to water inhibition [8,9]. However, cobalt oxides also tend to present poor structural and textural properties, especially when prepared by common synthesis methods such as precipitation, sol-gel or solution combustion [10,11]. For this reason, these materials are usually supported over porous materials as a way to improve their properties, and also to facilitate their incorporation into the monolithic systems where they should eventually operate [12,13].
The selection of an appropriate support is not a trivial task and can have a significant effect on the properties of the final catalyst, due to the different nature of the cobalt-support interactions, to the point of even rendering the catalyst useless for the specific purpose under study. For cobalt oxide catalysts the most commonly used supports are alumina [14,15], magnesia [16,17], zirconia [18,19], silica [20,21], ceria [22,23], silicon carbide [24,25], zeolites [26,27] or cordierite [28,29]. The decision to use one or another support for a specific cobalt-based catalyst is generally made on the basis of the specific catalytic properties that regulate the activity for the reaction under study.
In this regard, many studies have dealt with the effect of specific supports on the performance of Co3O4 catalysts for different reactions. For instance, Grzybek et al. [30] found that cobalt oxide showed very different behavior for N2O abatement depending on which polymorph of alumina was used as the support, and concluded that it was better to sacrifice the textural properties of the final catalyst by using low-surface α-Al2O3 instead of γ-Al2O3, with the objective of inhibiting the occurrence of cobalt-alumina interactions. This effect was also found by Solsona et al. [31] when studying the total oxidation of propane with cobalt oxide supported over alumina of low, medium and high surface area. On the other hand, Yung et al. [32] examined the oxidation of NO with cobalt catalysts supported over titania and zirconia, finding that the latter was a more suitable support for this purpose than the former. However, in a different study, Kim et al. [33] reported that high-surface ceria was a better support than titania or zirconia for the same reaction. Ceria was also found to be, as pointed out by Wyrwalski et al. [34], the most suitable support for cobalt oxide for the complete oxidation of propene. Such studies have also been carried out for liquid phase reactions. For example, Zhang et al. [35] analyzed the influence of several supports on the degradation of organic dyes in solution. They concluded that MgO was the most suitable for this purpose due to the increase in the population of surface Co2+ ions induced by cobalt-magnesia interactions.
In the case of methane oxidation, it has been demonstrated that the population of Co3+ species in the spinel lattice is the key parameter that provides the catalyst with the good reducibility and oxygen mobility involved in the Mars-van Krevelen mechanism [36,37]. Thus, an appropriate support for this reaction will be one that enables a high dispersion of the deposited cobalt while at the same time allowing the cobalt oxide species to keep their good redox properties as intact as possible. However, this is not always achievable, as a high dispersion of the deposited cobalt is usually accompanied by a strong cobalt-support interaction that, more often than not, ends up being detrimental for the oxygen mobility of the final catalyst [38].
A large number of works have investigated the oxidation of methane over supported cobalt catalysts, each one focusing on a specific support, under different reaction conditions and with different degrees of success [39-41]. However, there are no studies on the effect that supports of varying physico-chemical nature have on the fundamental properties of cobalt oxide-based catalysts, with a comparison of their activity under the same conditions. For this reason, in the present work, three Co3O4 catalysts supported over γ-alumina, magnesia and ceria were prepared by the same synthesis route, characterized and examined for the oxidation of methane under lean conditions, with the objective of determining the effect that the different supports have on the textural, structural and redox properties and the activity of the cobalt oxide active phase.
Synthesis of the Supports and Supported Cobalt Catalysts
Three different supports (γ-Al2O3, MgO and CeO2) were used for preparing the cobalt catalysts. The employed alumina was kindly provided by Saint-Gobain (Paris, France) and was previously thermally stabilized at 850 °C for 4 h in static air. Both MgO and CeO2 were prepared by precipitation with a 1.2 M aqueous solution of Na2CO3 (CAS 497-19-8). This was slowly added to aqueous solutions of magnesium(II) nitrate hexahydrate (Mg(NO3)2·6H2O, CAS 13446-18-9) or cerium(III) nitrate (Ce(NO3)3·6H2O, CAS 10294-41-4), respectively, at a constant temperature of 80 °C, until the pH was 8.5 or 9.5. A similar synthesis route was followed for preparing the supported cobalt samples. Thus, for each support, 5 g of the selected support were mixed with 100 cm3 of a solution of Co(NO3)2·6H2O (CAS 10026-22-9) with adjusted concentration, and then a 1.2 M solution of Na2CO3 was added dropwise at 80 °C until the pH reached 8.5. The nominal Co content of the three metal oxide catalysts was 30 wt. %. The samples were denoted as Co/Al2O3, Co/MgO and Co/CeO2. For comparative purposes, a bulk Co3O4 catalyst was also prepared (14 m2 g−1, 0.09 cm3 g−1 and 335 Å).
All samples were dried at 110 °C for 16 h and then calcined in static air to obtain the final supports (MgO and CeO2) and cobalt catalysts. The calcination protocol consisted of three heating ramps separated by 30-min isothermal steps at 125 and 300 °C: an initial ramp at 5 °C min−1 from room temperature to 125 °C, a second ramp at 1 °C min−1 up to 300 °C, and a final ramp at 5 °C min−1 up to 600 °C, a temperature that was then kept constant for 4 h.
Characterization Techniques
Textural properties of the supports and catalysts were examined by N2 physisorption in a Micromeritics TriStar II apparatus (Micromeritics Instrument Corp., Norcross, GA, USA). The Brunauer-Emmett-Teller (BET) method was used to determine the specific surface area of the samples, while the Barrett-Joyner-Halenda (BJH) method was applied for the estimation of the average pore size. Degassing of the samples prior to analysis was performed on a Micromeritics SmartPrep apparatus (Micromeritics Instrument Corp., Norcross, GA, USA) at 300 °C for 10 h under a N2 flow. The elemental composition of the cobalt catalysts was determined by wavelength dispersive X-ray fluorescence (WDXRF). Each sample was mixed with a flux agent (Spectromelt A12, Merck 111802, Darmstadt, Germany) in an approximate proportion of 20:1 and placed in an induction micro-furnace at 1200 °C to form a boron glass pearl. The pearls were analyzed under vacuum in a PANalytical AXIOS sequential WDXRF spectrometer (Malvern Panalytical Ltd, Royston, UK), equipped with a Rh tube and three different detectors (gas flow, scintillation and Xe sealed).
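As an illustration of how the BET surface areas reported below are derived from the raw isotherms, the linearized BET equation, p/[v(p0 − p)] = 1/(vm·c) + [(c − 1)/(vm·c)]·(p/p0), can be fitted over the usual 0.05-0.30 relative-pressure window. The sketch below uses hypothetical adsorption data, not the actual measurements:

import numpy as np

p_rel = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])   # p/p0 (assumed points)
v_ads = np.array([24.0, 27.5, 30.2, 32.6, 35.0, 37.5])   # cm3(STP) g-1 (assumed)

# Linearized BET form: y = p_rel / (v * (1 - p_rel)) is linear in p_rel
y = p_rel / (v_ads * (1.0 - p_rel))
slope, intercept = np.polyfit(p_rel, y, 1)
vm = 1.0 / (slope + intercept)        # monolayer capacity, cm3(STP) g-1

# 1 cm3(STP) of N2 covers about 4.35 m2 (0.162 nm2 per N2 molecule)
S_bet = vm * 4.35
print(f"vm = {vm:.1f} cm3(STP) g-1, S_BET = {S_bet:.0f} m2 g-1")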
Structural properties of the catalysts were determined by X-ray diffraction and Raman spectroscopy. XRD analysis was performed on an X'PERT-PRO X-ray diffractometer (Malvern Panalytical Ltd, Royston, UK) using Cu Kα radiation (λ = 1.5406 Å) and a Ni filter. The X-ray source was operated at 40 kV and 40 mA. The diffractograms were obtained in the 2θ range of 5-80° with a step size of 0.026° and a counting time of 2.0 s. Phase identification was carried out by comparing the diffraction patterns with JCPDS (Joint Committee on Powder Diffraction Standards) database cards. Additionally, in order to perform a detailed XRD analysis of the supported cobalt catalysts, a longer counting time (26.8 s) was applied. The cell size of the cobalt spinel phase was estimated by profile matching of the detailed XRD patterns using FullProf.2k software (version 6.30, Institut Laue-Langevin, Grenoble, France).
The analysis by Raman spectroscopy was carried out using a Renishaw InVia Raman spectrometer (Renishaw, Wotton-under-Edge, Gloucestershire, UK) coupled to a Leica DMLM microscope (Wetzlar, Germany) with a spatial resolution of 2 microns. For each spectrum, 20 s were employed and five scans were accumulated with 10% of the maximum power of a 514 nm laser (ion-argon laser, Modu-Laser, Centerville, UT, USA) in a spectral window of 150-1500 cm−1. X-ray photoelectron spectroscopy (XPS) analysis was performed using a SPECS system (SPECS GmbH, Berlin, Germany) equipped with a Phoibos 150 1D analyzer (SPECS GmbH, Berlin, Germany) and a DLD-monochromatic radiation source. The obtained spectra were calibrated by fixing the signal of adventitious carbon at 284.6 eV.
Temperature-programmed reduction with hydrogen (H2-TPR) analyses were carried out on a Micromeritics Autochem 2920 apparatus (Micromeritics Instrument Corp., Norcross, GA, USA), using a 5% H2/Ar mixture as the reducing gas. Each sample was subjected to a pre-treatment with a 5% O2/He mixture at 300 °C for 30 min prior to the analysis. All TPR experiments were performed up to 950 °C with an isothermal step of 10 min at that temperature. The water produced throughout each experiment was removed from the outlet stream using a cold trap, to avoid interference with the thermal conductivity detector. Additional information regarding the activation of methane was obtained by means of temperature-programmed reaction with a 5% CH4/He mixture in the absence of oxygen (CH4-TPR) coupled to mass spectrometry (MKS Cirrus quadrupole mass spectrometer, Andover, MA, USA). The experiments were conducted up to 600 °C with a heating ramp of 10 °C min−1 followed by an isothermal step at 600 °C for 30 min.
Evaluation of the Catalytic Performance
Catalytic activity was examined in a bench-scale fixed bed reactor (PID Eng&Tech S.L., Madrid, Spain) in the 300-600 °C temperature range at atmospheric pressure. In each reaction experiment, 1 g of catalyst was used (particle size of 0.25-0.3 mm). The catalyst was diluted with the same mass of inert quartz (particle size 0.5-0.8 mm) to ensure a good distribution of heat and reactants along the catalytic bed. The feedstream (1% CH4, 10% O2 and N2 as the balance gas) was fed to the reactor with a total flow of 500 cm3 min−1, which corresponded to a space velocity of 300 cm3 CH4 g−1 h−1 (approximately 60,000 h−1 for an estimated catalyst density of 2 g cm−3). The temperature of the reactor was increased in a stepwise progression, with heating ramps of 1 °C min−1 followed by 15-min isothermal periods every 25 °C, where methane conversion and product profiles were determined. Each chromatographic analysis was performed in triplicate. Methane conversion was calculated from the difference between inlet and outlet CH4 concentrations. Inlet and outlet streams were analyzed using an on-line Agilent Technologies 7890N gas chromatograph (Agilent Technologies, Santa Clara, CA, USA) equipped with a thermal conductivity detector (TCD) and two columns: a PLOT 5A molecular sieve column for the analysis of CH4, O2, N2 and CO, and a PLOT U column for CO2. To ensure that mass or heat transfer limitations were not affecting the obtained kinetic results, the criteria for intra- and extra-particle mass diffusion, heat transfer and temperature gradients were checked against the corresponding limits, according to the Eurokin procedure [42,43].
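The space velocities quoted above and the conversion computed from the chromatographic analyses follow directly from the stated flow and catalyst loading; the sketch below reproduces the arithmetic (the outlet concentration is a hypothetical value introduced only for illustration):

total_flow = 500.0            # cm3 min-1 (1% CH4, 10% O2, N2 balance)
ch4_fraction = 0.01
catalyst_mass = 1.0           # g
catalyst_density = 2.0        # g cm-3, estimated in the text

# Space velocity referred to methane: 500 * 0.01 * 60 / 1 = 300 cm3 CH4 g-1 h-1
sv_ch4 = total_flow * ch4_fraction * 60.0 / catalyst_mass

# GHSV referred to the bed volume (mass / density): about 60,000 h-1
ghsv = total_flow * 60.0 / (catalyst_mass / catalyst_density)

# Conversion from inlet/outlet CH4 concentrations (outlet value is hypothetical)
c_in, c_out = 1.00, 0.35      # vol. %
x_ch4 = (c_in - c_out) / c_in
print(f"SV = {sv_ch4:.0f} cm3 CH4 g-1 h-1, GHSV = {ghsv:.0f} h-1, X = {x_ch4:.0%}")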
Characterization of the Supports
The textural properties of the commercial γ-alumina support and the as-prepared magnesia and ceria supports, in terms of BET surface area, pore volume and mean pore diameter, are shown in Table 1. Notable differences were noticed among the investigated supports. The commercial alumina showed the largest specific surface area (136 m2 g−1), followed by MgO (80 m2 g−1) and CeO2 (8 m2 g−1). This decreasing order was also consistent with the estimated pore volume of the samples, which varied from 0.55 cm3 g−1 (γ-alumina) to 0.08 cm3 g−1 (ceria). The samples showed type IV isotherms with H2 hysteresis loops. On the other hand, the XRD patterns of the as-prepared supports matched well those expected for the pure materials (Figure 1). The observed diffraction signals could be indexed as γ-alumina (2θ at 37.7, 45.8 and 67.3°, JCPDS 01-074-2206), magnesium oxide (2θ at 43.0, 62.3, 74.7 and 78.6°, JCPDS 00-004-0829) and cerium oxide (2θ at 28.5, 33.3, 47.5, 56.4 and 76.7°, JCPDS 00-004-0593). Moreover, the crystallinity of both magnesia and ceria was higher than that of alumina in view of their noticeably more intense and sharper signals. The formation of ceria was further corroborated by Raman spectroscopy, which revealed a strong peak assigned to the F2g Raman-active mode characteristic of the fluorite-like lattice of CeO2. Note that the vibrational modes of MgO and γ-Al2O3 are essentially Raman inactive.
Characterization of the Supported Cobalt Catalysts
Table 1 lists the cobalt loading of the synthesized cobalt catalysts as determined by WDXRF, and by ICP-AES (inductively coupled plasma-atomic emission spectroscopy) in the case of the Co/CeO2 sample. It was verified that this content was relatively close to the nominal value (30 wt. % Co). BET measurements revealed that cobalt species markedly blocked the pores of the alumina and magnesia supports, as evidenced by the notable decrease in the surface area of the Co/Al2O3 and Co/MgO catalysts, from 136 to 108 m2 g−1 (26%) and from 80 to 47 m2 g−1 (42%), respectively. As with the supports, the catalysts showed type IV isotherms with H2 hysteresis loops. Both samples presented a lower pore volume with respect to their corresponding support (0.29 and 0.16 cm3 g−1, respectively). By contrast, the impact on the textural properties of the Co/CeO2 was less noticeable; in fact, a slight increase in both surface area and pore volume was found (18 m2 g−1 and 0.07 cm3 g−1, respectively).
The comparative analysis of the pore size distributions of the supports and the cobalt catalysts (Figure 2) evidenced that the deposition process of cobalt particles was highly dependent on the pore accessibility and interconnectivity. Hence, when cobalt was deposited over γ-alumina, which exhibited a bimodal distribution centered at 90 and 150 Å, the cobalt preferentially deposited over its largest pores. Conversely, when using magnesia as a support, characterized by pores with markedly different sizes (35-50 and 325 Å), the cobalt favorably located over the smaller pores. Finally, in the case of ceria, both support and catalyst possessed a unimodal distribution centered around 225 Å, but with an increased width for the cobalt catalyst. This could be due to the fact that cobalt species did not find enough space to deposit in the pores of ceria, and subsequently located on its external surface as well. In addition, since the amount of pores of 335 Å (the prevalent pore size of bulk Co3O4) was larger in the Co/CeO2 catalyst than in the ceria support, it could be assumed that this catalyst contained segregated cobalt oxide to some extent.

Figure 1 also includes the diffractograms of the cobalt catalysts. Their patterns were characterized by the presence of Co3O4 (2θ = 31.3, 37.0, 45.1, 59.4 and 65.3°, JCPDS 00-042-1467) along with some weak signals corresponding to the respective support (2θ = 43.0 and 62.3° for MgO, and 2θ = 28.5, 47.5 and 56.4° for CeO2). On the other hand, it is worth pointing out that the formation of CoAl2O4 is very frequent in Co/alumina systems due to the strong interaction between Co3O4 and the support at mild temperatures (>450 °C) [44,45]. However, the extent of the formation of this undesired phase could not be determined from the XRD analysis. Note that both spinel-like cobalt phases (Co3O4 and CoAl2O4, JCPDS 00-044-0160) crystallize in the cubic structure with comparable cell parameters, thereby showing very close 2θ values in their diffraction patterns. In addition, the crystallinity of the support phases did not noticeably change, since their crystallite size was similar before and after cobalt deposition (Table 1). However, the crystallite size of the cobalt spinel was highly dependent on the employed support. The smallest crystallite size was obtained over the magnesia (17 nm) while the largest was found over the ceria (44 nm). This finding was consistent with the poorer textural properties of the synthesized ceria, which resulted in a preferential location of Co3O4 on its external surface. Nevertheless, in all cases the crystallite size was smaller than that of the bulk Co3O4 (63 nm), which evidenced a good dispersion of the cobalt spinel over the surface of the studied supports.
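The text does not state how the crystallite sizes in Table 1 were extracted from the diffractograms; line-broadening analysis via the Scherrer equation, D = Kλ/(β·cosθ), is the usual route and is sketched here purely as an assumption (the shape factor, reflection and peak width are illustrative, not measured values):

import math

wavelength = 1.5406  # Angstrom, Cu K-alpha (stated in the experimental section)
K = 0.9              # shape factor (assumed)
two_theta = 36.9     # deg, Co3O4 (311) reflection (illustrative choice)
fwhm = 0.20          # deg 2theta, instrument-corrected width (assumed)

theta = math.radians(two_theta / 2.0)
beta = math.radians(fwhm)
D = K * wavelength / (beta * math.cos(theta))   # crystallite size in Angstrom
print(f"D = {D / 10.0:.1f} nm")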
The Raman spectra in the 150-900 cm−1 region of the cobalt catalysts supported on alumina, magnesia and ceria are shown in Figure 3; as a reference, the spectrum of pure Co3O4 is shown as well. Apart from a relatively intense band at 462 cm−1 (F2g mode of CeO2) for the Co/CeO2 catalyst, all supported catalysts displayed the five Raman-active modes attributable to Co3O4, namely three F2g modes located at 194, 519 and 617 cm−1, and the Eg and A1g modes at 479 and 687 cm−1, respectively [46]. In addition, two shoulders at 705 and 725 cm−1 attached to the A1g vibration mode were also visible in the case of the Co/Al2O3 catalyst. These two signals evidenced the presence of cobalt aluminate in this sample [47,48]. As far as the Co/MgO catalyst was concerned, small bands at about 1250 and 1350 cm−1 were detected (not shown). These signals could be associated with a certain increase in the disorder of the structure of MgO owing to the insertion of cobalt ions, leading to the formation of a Co-Mg solid solution [49,50]. Note that the formation of this mixed oxide was difficult to verify by XRD, since no significant changes in the 2θ diffraction angles of the Co/MgO catalyst were noted with respect to those of pure MgO.

A closer inspection of the A1g mode as a function of the type of support used could be helpful in determining the properties of the lattice of the deposited Co3O4. This influence was analyzed in terms of the shift and the full width at half maximum (FWHM) of this signal, taking the Raman spectrum of bulk Co3O4 as a reference: in the bulk oxide this mode was located at 687 cm−1 and its FWHM was 11 cm−1. While no significant shift in the position of the band was found for the Co/Al2O3 (688 cm−1) and Co/MgO (688 cm−1) catalysts, a notable shift to a lower frequency (681 cm−1) was noted over the Co/CeO2 sample. This redshift of the signal could be assigned to the distortion of the spinel lattice, probably owing to the insertion of Ce ions [51]. On the other hand, the largest FWHM value, found for the Co/MgO catalyst, was assigned to the presence of Co-Mg mixed oxides. The lattice distortion in the cobalt spinel generated by the insertion of Ce ions and/or the formation of Co-Mg mixed oxides was further evidenced by a slight increase in the cell parameter of the Co3O4 phase for both the Co/MgO (8.083 Å) and Co/CeO2 (8.082 Å) catalysts with respect to the bulk Co3O4 sample (8.052 Å), where the distortion of the cobalt spinel was minimal.
The surface composition of the samples was investigated by XPS. The Co2p spectra of the supported cobalt catalysts are shown in Figure 4, along with the spectrum of bulk Co3O4 for the sake of comparison. All samples showed broad signals, suggesting the presence of various cobalt species on the surface of the catalysts. More specifically, all spectra showed the main Co2p3/2 signal in the position range 779.9-781.4 eV, along with two satellite signals centered around 785.9-786.7 eV and 789.5-790.2 eV, which were attributed to the presence of Co2+ and Co3+ ions, respectively [52]. The position of the main signal exhibited a shift with respect to that of bulk Co3O4 (779.9 eV), which depended upon the chosen support. For the Co/CeO2 catalyst, the position of the main signal and the intensity of the satellite signals were comparable to those of the bulk sample, pointing out that the nature of the cobalt oxide in this catalyst was similar to that of the bulk sample. The main signal of the Co/MgO sample was shifted towards higher binding energy values, while the intensity of the Co2+ satellite signal was notably stronger with respect to the bulk oxide. Both features were compatible with a higher presence of Co2+ ions on the surface of the Co/MgO catalyst [53], as a result of the Co-Mg interaction and the subsequent formation of the Co-Mg solid solution. Lastly, for the Co/Al2O3 catalyst the main signal was located at 781.4 eV, a position indicative of the presence of cobalt aluminate [54]. Besides, the surface composition was determined from the integration of the XPS spectra. The respective Co/M (M = Al, Mg and Ce) surface molar ratios could then be calculated and compared with the bulk ratios calculated from the XRF analysis. For both the Co/Al2O3 and Co/CeO2 catalysts, the surface ratio (0.43 and 3.15, respectively) was higher than the bulk ratio (0.39 and 1.39, respectively), which indicated a more pronounced presence of cobalt on the surface of these samples. For the Co/MgO, however, the surface ratio (0.12) was notably lower than the bulk ratio (0.38). This could be due to the strong Co-Mg interaction and the partial insertion or dissolution of Co ions in the MgO lattice to form a Co-Mg solid solution, which in turn decreased the amount of cobalt present on the surface of this catalyst.

The redox properties of the catalysts were studied by temperature-programmed reduction with hydrogen in the 50-950 °C temperature range. The corresponding profiles are displayed in Figure 5, while the quantitative results of the analysis are listed in Table 2. A noticeably different redox behavior was found among the three cobalt catalysts. The largest H2 uptake was exhibited by the sample supported on ceria (7.6 mmol g−1), followed by the catalysts supported on alumina (5.6 mmol g−1) and magnesia (4.6 mmol g−1). First, the redox properties of the latter two samples will be comparatively discussed, since both γ-Al2O3 and MgO could be considered non-reducible in the studied temperature window; hence, the observed H2 consumption could be exclusively assigned to the reduction of the cobalt species present. In this sense, and taking as a reference the ideal specific H2 uptake for the reduction of Co3O4 as the only cobalt phase (22.6 mmol H2 gCo−1), both the Co/Al2O3 and Co/MgO catalysts revealed a significantly lower consumption, namely 18.7 and 14.5 mmol H2 gCo−1, respectively. Therefore, the estimated degrees of Co reduction were 88 and 64%. In line with the results given by Raman spectroscopy, these findings suggested that a fraction of the deposited cobalt species strongly interacted with these supports, thereby negatively influencing their redox properties.
Notes to Table 2: the values in brackets correspond to the H2 uptake on a catalyst weight basis; the degree of reduction was estimated based on 22.6 mmol H2 gCo−1 for the full reduction of cobalt as Co3O4.
The reduction process of the Co/Al2O3 catalyst consisted of two main reduction events. The first event, centered at about 250-550 °C, could be attributed to the reduction of free Co3O4. This reduction process could be further divided into two features with peak reduction temperatures at 310 and 400 °C, following the same reduction pathway as for the bulk Co3O4 catalyst, that is, the sequential reduction to CoO and metallic Co, respectively [55,56]. An additional peak between 550 and 750 °C was clearly ascertained. This was attributed to the presence of significant amounts of CoAl2O4 derived from the strong interaction between Co3O4 and Al2O3. It was quantitatively deduced that the total amount of cobalt in the sample was roughly equally distributed between Co3O4 (49%) and CoAl2O4 (51%). It must be noted that all cobalt species present in the catalyst were completely reduced to metallic Co. This was verified by XRD analysis of the samples recovered after the TPR run, where only metallic Co and Al2O3 were detected.
The H2-TPR profile of the Co/MgO sample also revealed two distinct reduction regions, although the temperature windows were markedly different when compared with the alumina-supported counterpart. The observed consumption at low temperature (200-350 °C) was ascribed to easily reducible free Co3O4 species as well, while the band located at higher temperatures (350-650 °C) was assigned to the reduction of cobalt-MgO species formed during the synthesis route. The integration of these two features gave the following cobalt distribution: 57% as Co3O4 and 43% as Co-Mg mixed oxides. The formation of Co-Mg mixed oxides of superior stability, which could not be reduced even at 950 °C, could not however be ruled out, as the total H2 uptake of this sample was rather low in view of its Co content [17,57]. In fact, when assuming that the uptake at low temperatures (<350 °C) was owing to the reduction of Co3O4 with a stoichiometry of 4 moles of H2 per 3 moles of Co, and that the uptake at higher temperatures (350-650 °C) was related to the reduction of Co species with a stoichiometry of 1 mole of H2 per 1 mole of Co, the metal content consistent with the overall H2 uptake of the sample would be equivalent to about 25 wt. %. This was considerably lower than the actual Co loading as determined by XRF (close to 32 wt. %). In any case, the amount of free Co3O4, with a significantly higher oxidation activity in comparison with CoAl2O4 or Co-Mg mixed oxides, was relatively similar for both the γ-Al2O3 and MgO supported catalysts.
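The 22.6 mmol H2 gCo−1 reference used in these estimates follows from the reduction stoichiometry Co3O4 + 4H2 → 3Co + 4H2O, i.e. 4/3 mol H2 per mol Co. A minimal check, applied to the Co/MgO uptake quoted above, is sketched below:

M_CO = 58.93                                  # g mol-1, atomic mass of cobalt
ref_uptake = (4.0 / 3.0) / M_CO * 1000.0      # 22.6 mmol H2 per g of Co

measured_uptake = 14.5                        # mmol H2 gCo-1 (Co/MgO, from the text)
degree = 100.0 * measured_uptake / ref_uptake # about 64%, as quoted above
print(f"reference = {ref_uptake:.1f} mmol H2 gCo-1, degree = {degree:.0f}%")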
While the H2 uptake of both pure alumina and magnesia was negligible, the bare CeO2 sample exhibited a weak signal at 450-500 °C that corresponded to the surface reduction of the oxide, whereas the notable H2 consumption peaking at about 825 °C was related to the reduction of the bulk [58,59]. The TPR profile of the Co/CeO2 catalyst was characterized by a remarkable uptake between 200 and 600 °C that was related to the reduction of precipitated Co3O4. Similarly to the Co/Al2O3 sample, this feature exhibited two fairly discernible peaks at 310 and 380 °C. A small band was noted at 800 °C as well, which corresponded to the reduction of the bulk of the support. Note that this occurred at slightly lower temperatures with respect to the bare support, probably due to the catalytic role played by cobalt. A quantitative analysis of the amount of consumed H2 revealed that the overall uptake (7.6 mmol H2 g−1) reasonably matched that theoretically expected for the total reduction of Co3O4 along with the reduction of ceria (1.8 mmol H2 gCe−1). This corresponded to a 100% degree of Co reduction (Table 2).
At low temperatures, the amount of consumed H2 was 23.4 mmol gCo−1, slightly larger than that theoretically required for the reduction of the Co3O4 oxide (22.6 mmol gCo−1). This suggested that the deposited cobalt facilitated the reduction of the surface of the ceria in this temperature range [60].
An overall view of the redox properties of the three cobalt catalysts pointed out that the use of ceria was beneficial for obtaining a sample with a limited interaction of the active phase with the support surface, thereby not favoring the formation of hardly reducible cobalt oxides such as cobalt aluminate or cobalt-magnesium mixed oxides. In addition, cobalt helped in promoting the reducibility of the ceria.
More useful insights into the influence of the catalyst composition on the reactivity of the available active oxygen species for methane oxidation were obtained by CH4-TPR analysis coupled to mass spectrometry. The analysis was performed between 50 and 600 °C with a subsequent isothermal step at this temperature for 30 minutes, and the evolution of CO2 (m/z = 44) and CO (m/z = 28, not shown) was monitored (Figure 6). In the low temperature range, namely 375-525 °C, the generation of CO2 was noticed over the three cobalt catalysts (peaking at about 485 °C). This was attributed to the oxidation of methane by oxygen species associated with Co3+ ions. However, the extent of this reaction was considerably different over each sample, in view of the comparatively larger amount of consumed oxygen (0.70 mmol O2 g−1), or larger yield of CO2, over the Co/CeO2 sample, followed by the Co/MgO (0.51 mmol O2 g−1) and Co/Al2O3 (0.28 mmol O2 g−1) catalysts (Table 2). Moreover, the temperature for the onset of reduction (marked by arrows in Figure 6) was significantly lower for the Co/CeO2 sample (395 °C) in comparison with the other two catalysts, namely 415 and 425 °C over Co/MgO and Co/Al2O3, respectively.
An additional evidence of the goodness of the Co/CeO2 sample was given by the analysis of the specific reaction rate of the cobalt catalysts (Table 3).This reaction rate was calculated under differential conditions (conversion < 20%) at 425 °C.The ceria supported sample exhibited a markedly higher specific activity (3.1 mmol CH4 gCo −1 h −1 ) with respect to the other two counterparts, which showed a similar performance (1.4-1.8 mmol CH4 gCo −1 h −1 ).On the other hand, during the isothermal period at 600 • C a second oxidation process was only evidenced over the Co/CeO 2 sample.This was also accompanied by the generation of CO and H 2 to some extent that was related to partial oxidation or cracking of methane in the presence of metallic or oxygen-deficient cobalt species [61].In fact, the diffraction pattern of the sample after the CH 4 -TPR run evidenced the formation of graphitic carbon (signal at 2θ = 26.6 • ) [62].The distinct formation at high temperatures of these species (CO 2 , CO and H 2 ) was not observed over the other two catalysts, thereby revealing the presence of very stable, inactive cobalt species on these samples, such as cobalt aluminate or cobalt-magnesium mixed oxides, where cobalt was mainly present as Co 2+ .
Performance of the Supported Cobalt Catalysts
The performance of the cobalt catalysts was examined by their corresponding light-off curves at 300 cm3 CH4 g−1 h−1 (30,000 cm3 g−1 h−1, about 60,000 h−1) in the 200-600 °C temperature range. For each catalyst, three consecutive reaction cycles were conducted. In all cases the first light-off curve revealed slightly lower reaction temperatures, while the second and third runs were identical to each other, as can be seen for the Co/CeO2 catalyst in Figure S1 (Supplementary Materials). For this reason, Figure 7 compares the 3rd-cycle curve for each examined catalyst. Recall that only CO2 was detected in the product stream over the whole temperature range; hence, a 100% selectivity towards CO2 formation was achieved for all tested catalysts. Appreciable methane conversion (>5%) was detected above 350 °C over the Co/CeO2 catalyst, while a similar conversion level was attained at significantly higher temperatures (400 °C) over the samples supported on magnesia and alumina. The T50 value (temperature at which 50% conversion was attained) was used as a criterion for the relative reactivity of each sample (Table 3). In a similar way to the results observed in the low-conversion range, a substantially different performance was found, with values close to 500 °C (Co/CeO2), 525 °C (Co/MgO) and 550 °C (Co/Al2O3). Accordingly, conversion values of around 85% (Co/Al2O3), 95% (Co/MgO) and 98% (Co/CeO2) were noted at 600 °C.

Table 3. Kinetic results of the oxidation of lean methane over the supported cobalt catalysts.

An additional evidence of the goodness of the Co/CeO2 sample was given by the analysis of the specific reaction rate of the cobalt catalysts (Table 3). This reaction rate was calculated under differential conditions (conversion < 20%) at 425 °C. The ceria-supported sample exhibited a markedly higher specific activity (3.1 mmol CH4 gCo−1 h−1) with respect to the other two counterparts, which showed a similar performance (1.4-1.8 mmol CH4 gCo−1 h−1).
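The T50 and specific-rate figures in Table 3 can be reproduced from a light-off curve as sketched below; the conversion points are hypothetical (chosen to resemble the Co/CeO2 curve), while the methane flow, the nominal Co loading and the resulting ~3.1 mmol CH4 gCo−1 h−1 come from the text. The ideal-gas molar volume of 22,400 cm3 mol−1 is assumed for converting the volumetric flow:

import numpy as np

T = np.array([350, 400, 425, 450, 500, 550, 600])         # °C
X = np.array([0.02, 0.05, 0.07, 0.15, 0.50, 0.85, 0.98])  # assumed conversions

t50 = np.interp(0.5, X, T)   # linear interpolation at 50% conversion -> 500 °C

ch4_flow = 300.0 / 22400.0 * 1000.0   # mmol CH4 g_cat-1 h-1 (molar volume assumed)
co_loading = 0.30                     # g Co per g catalyst (nominal)
x_425 = X[2]                          # differential conversion (<20%) at 425 °C
rate = ch4_flow * x_425 / co_loading  # about 3.1 mmol CH4 gCo-1 h-1
print(f"T50 = {t50:.0f} °C, rate(425 °C) = {rate:.1f} mmol CH4 gCo-1 h-1")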
On the basis of the fact that methane oxidative decomposition over cobalt catalysts requires highly active oxygen species [63,64], the observed trend in catalytic activity was proposed to be directly related to the amount of easily reducible cobalt species in each sample, which could be measured by the specific oxygen consumption at low temperatures in the CH4-TPR profiles. In this sense, Figure 8 shows that there was a markedly good correlation between the T50 values and the reacted O2 below 550 °C. Accordingly, a comparable relationship was evidenced in relation to the H2 uptake involved in the reduction of the free Co3O4 present in each supported cobalt catalyst (H2-TPR profiles).
The integral method was followed for evaluating the apparent activation energy of the reaction over the examined cobalt catalysts. A power-law kinetic equation, derived from a simplified Mars-van Krevelen reaction mechanism in excess of oxygen, was used; methane was assumed to follow first pseudo-order kinetics, while for oxygen a zero pseudo-order was used [65]. The results are listed in Table 3, while the corresponding plots for this linearized kinetic equation are shown in Figure 9. In addition, the activation enthalpy and entropy (Table 3) were estimated by applying the Eyring-Polanyi equation; the corresponding linearized plots are shown in Figure S2 (Supplementary Materials). The following apparent activation energies were estimated: 82 kJ mol−1 over Co/CeO2, 90 kJ mol−1 over Co/Al2O3 and 102 kJ mol−1 over Co/MgO. When compared with the value obtained for the bulk Co3O4 (78 kJ mol−1), a close similarity was found for the Co/CeO2 catalyst. This finding was coherent with the fact that in both cases the nature of the active cobalt phase was the same, namely Co3O4. A noticeably higher activation energy was found for the samples in which a mixture of cobalt phases was present, namely Co3O4/CoAl2O4 (Co/Al2O3 catalyst) and Co3O4/Co-Mg mixed oxide (Co/MgO catalyst). This behavior could lie in the contribution of these intrinsically less active phases to the reaction mechanism, especially in the case of the Co/MgO given its different activation entropy, thereby negatively influencing the overall activity of the resultant catalyst [57,66].
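A minimal sketch of this integral analysis: for a first pseudo-order dependence on methane (zero order in O2) in an ideal plug-flow reactor, k·τ = −ln(1 − X), so plotting ln[−ln(1 − X)] against 1/T gives a straight line of slope −Ea/R. The conversions below are hypothetical, chosen so that the fit returns approximately the 82 kJ mol−1 quoted for Co/CeO2:

import numpy as np

R = 8.314                                     # J mol-1 K-1
T = np.array([623.0, 648.0, 673.0, 698.0])    # K, assumed low-conversion window
X = np.array([0.027, 0.048, 0.082, 0.139])    # assumed CH4 conversions

# Linearized integral form: ln(-ln(1 - X)) = ln(A * tau) - Ea / (R * T)
y = np.log(-np.log(1.0 - X))
slope, _ = np.polyfit(1.0 / T, y, 1)
Ea = -slope * R / 1000.0                      # kJ mol-1
print(f"apparent Ea = {Ea:.0f} kJ mol-1")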
In this way, it could be established that the presence of CoAl2O4 would negatively affect the intrinsic activity of the Co/Al2O3 sample, while the formation of a stable Co-Mg solid solution would negatively impact the kinetic behavior of the Co/MgO catalyst.

Finally, the thermal and hydrothermal stability of the most active catalyst (Co/CeO2) was studied over relatively prolonged periods of operation under both dry and humid conditions at 300 cm3 CH4 g−1 h−1 (30,000 cm3 g−1 h−1, about 60,000 h−1). A 1% CH4/10% O2/10% H2O/79% N2 mixture was used for the time intervals carried out in the presence of water. The dry and humid conditions were switched every 25 h over a total operation span of 150 h at a constant temperature of 550 °C. The results are shown in Figure 10. The catalyst underwent some fast thermal deactivation during the first 25 h of the test, where the conversion dropped from 70% to 60%. The introduction of water to the feed stream had a significantly detrimental effect on the activity of the catalyst, and as a result conversion levels fell to about 35%. However, when the dry conditions were re-established, the conversion recovered almost completely to the levels exhibited before the addition of water. This behavior was seen again during the following dry-humid operation cycles, and suggested that water inhibition was essentially caused by a coverage of the surface that limited the extent of the reaction between methane and the oxygen active species of the catalyst. This effect has also been observed in other works, and has generally been linked to the weak adsorption potential that water molecules have on the surface of cobalt oxide [67,68]. In particular, Geng et al. [69] observed a significant increase in the intensity of the DRIFTS absorption bands from hydroxyl groups on a Co/γ-Al2O3 catalyst when water vapor was added to the feed. Additionally, H2O-TPD experiments in that work also proved that a high fraction of the water adsorbed on the Co catalyst surface could be desorbed at low temperatures (<500 °C).
Conclusions
Cobalt catalysts supported over γ-alumina, magnesia and ceria were synthesized, characterized and examined for the oxidation of methane under lean conditions, with the objective of determining the influence of the catalytic support on the properties and activity of the active Co3O4 phase.
The analysis of the samples evidenced a marked effect of the strong cobalt-support interaction on the catalytic performance. More specifically, when the selected support was γ-alumina, the high cobalt dispersion and strong cobalt-support interaction resulted in the partial fixation of the deposited cobalt as poorly reducible cobalt aluminate (CoAl2O4) species. This led to a low activity despite the good structural and textural properties of the resultant catalyst. In the case of the Co/MgO sample, the cobalt-support interaction provoked the partial dissolution and insertion of cobalt cations into the MgO lattice, with the subsequent formation of a highly stable Co-Mg solid solution. However, this catalyst was somewhat more active than the alumina counterpart, due to a higher oxygen mobility of the remaining free Co3O4 phase. Lastly, when the chosen support was ceria, the resultant catalyst exhibited worse structural and textural properties but, interestingly, a noticeably promoted reducibility and oxygen mobility caused by the partial insertion of Ce cations into the cobalt spinel lattice. These beneficial effects made this catalyst the most active sample among the examined cobalt catalysts. Additionally, while its thermal stability over a prolonged time interval was found to be high, the addition of water vapor to the feedstream provoked a reversible inhibition assigned to the coverage of the catalyst surface by water molecules.
Given the good activity shown by the Co/CeO2 catalyst in spite of its poor textural and structural properties, future efforts will be focused on designing Co/CeO2 with improved specific surface area and crystallite size, either by modification of the synthesis method of the ceria support or by selecting ceria as a promoter for Co/CeO2-Al2O3 catalysts.
Figure 1. XRD patterns of the bare supports and the supported cobalt catalysts.
Figure 2. Pore size distributions of the bare supports and the supported cobalt catalysts.
Figure 1 includes the diffractograms of the cobalt catalysts. Their patterns were characterized by the presence of Co3O4 (2θ = 31.3, 37.0, 45.1, 59.4 and 65.3°, JCPDS 00-042-1467) along with some weak signals corresponding to the respective support (2θ = 43.0 and 62.3°, and 2θ = 28.5, 47.5 and 56.4°, respectively). On the other hand, it is worth pointing out that the formation of CoAl2O4 is very frequent in Co/alumina systems due to the strong interaction between Co3O4 and the support at mild temperatures (>450 °C) [44,45]. However, the extent of formation of this undesired phase could not be determined from the XRD analysis. Note that both spinel-like cobalt phases (Co3O4 and CoAl2O4, JCPDS 00-044-0160) crystallize in the cubic structure with comparable cell parameters, thereby showing very close 2θ values in their diffraction patterns. In addition, the crystallinity of the support phases did not noticeably change, since their crystallite size was similar before and after cobalt deposition (Table 1). However, the crystallite size of the cobalt spinel was highly dependent on the employed support.
Figure 3. Raman spectra of the supported cobalt catalysts.
This was further evidenced by a slight increase in the cell parameter of the Co3O4 phase for both Co/MgO (8.083 Å) and Co/CeO2 (8.082 Å) catalysts with respect to the bulk Co3O4 sample (8.052 Å), where the distortion of the cobalt spinel was minimal.
Figure 6. Profiles of the supported cobalt catalysts (a). Close-up view of the 200-550 °C temperature range (b).
Figure 7. Light-off curves of the supported cobalt catalysts.
Figure 8. Relationship between the activity and the redox properties of the supported catalysts.
…kJ mol−1 over Co/CeO2, 90 kJ mol−1 over Co/Al2O3 and 102 kJ mol−1 over Co/MgO. When compared with the value obtained for bulk Co3O4 (78 kJ mol−1), a close similarity was found for the Co/CeO2 catalyst. This finding was coherent with the fact that in both cases the nature of the active cobalt phase was the same, namely Co3O4. A noticeably higher activation energy was found for the samples in which a mixture of cobalt phases was present, namely Co3O4/CoAl2O4 (Co/Al2O3 catalyst) and Co3O4/Co-Mg mixed oxide (Co/MgO catalyst). This behavior could lie in the contribution of these intrinsically less active phases to the reaction mechanism, especially in the case of Co/MgO given its different activation entropy, thereby negatively influencing the overall activity of the resultant catalyst [57,66].
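For context, apparent activation energies of this kind are conventionally extracted from the slope of an Arrhenius plot of the rate constant; the standard relation (a textbook form, not a detail specific to this work) is

$$k = A\,\exp\!\left(-\frac{E_a}{RT}\right) \quad\Longleftrightarrow\quad \ln k = \ln A - \frac{E_a}{R}\,\frac{1}{T},$$

so that E_a = −R × (slope of ln k versus 1/T), and a steeper slope for Co/MgO is consistent with its higher barrier.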
Figure 9. Order integral fit for the experimental kinetic data obtained over the supported cobalt catalysts.
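As a hedged illustration of what an order integral fit typically involves (the reaction order actually fitted in Figure 9 is not stated in this excerpt): for an isothermal plug-flow bed and a rate taken as first order in methane, the integrated design equation is

$$-\ln\!\left(1 - X_{\mathrm{CH_4}}\right) = k\,\tau, \qquad \tau = \frac{W}{Q},$$

where X_CH4 is the methane conversion, W the catalyst mass and Q the volumetric feed rate; linearity of −ln(1 − X) against τ supports the assumed order, and the slope yields k.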
Figure 10. Stability of the Co/CeO2 catalyst under cycled dry/humid conditions with time on stream.
Table 1. Textural properties of the supports and the supported cobalt catalysts.
Table 2. Redox properties of the supported cobalt catalysts derived from TPR analysis with H2 and CH4.
Table 3. Kinetic results of the oxidation of lean methane over the supported cobalt catalysts. | 2019-10-02T13:04:18.080Z | 2019-09-27T00:00:00.000 | {
"year": 2019,
"sha1": "f175f99634dffb33ae98bc9a1393ff601f1153e8",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/12/19/3174/pdf?version=1569588517",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "efdbb28dec56b3bfb3f8ea49560c863b5401be24",
"s2fieldsofstudy": [
"Chemistry",
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
1536920 | pes2o/s2orc | v3-fos-license | Non-communicable diseases, infection and survival in a retrospective cohort of Indigenous and non-Indigenous adults in central Australia
Objectives We hypothesise that rising prevalence rates of non-communicable diseases (NCDs) increase infection risk and worsen outcomes among socially disadvantaged Indigenous Australians undergoing a rapid epidemiological transition. Design Available pathology, imaging and discharge morbidity codes were retrospectively reviewed for a period of 5 years prior to admission with a bloodstream infection (BSI), 1 January 2003 to 30 June 2007. Participants 558 Indigenous and 55 non-Indigenous community residents of central Australia. Outcome measures The effects of NCDs on risk of infection and death were determined after stratifying by ethnicity. Results The mean annual BSI incidence rates were far higher among Indigenous residents (Indigenous, 937/100 000; non-Indigenous, 64/100 000 person-years; IRR=14.6; 95% CI 14.61 to 14.65, p<0.001). Indigenous patients were also more likely to have previous bacterial infections (68.7% vs 34.6%; respectively, p<0.001), diabetes (44.3% vs 20%; p<0.001), harmful alcohol consumption (37% vs 12.7%; p<0.001) and other communicable diseases (human T-lymphotropic virus type 1, 45.2%; strongyloidiasis, 36.1%; hepatitis B virus, 12.9%). Among Indigenous patients, diabetes increased the odds of current Staphylococcus aureus BSI (OR=1.6, 95% CI 1.0 to 2.5) and prior skin infections (adjusted OR=2.1, 95% CI 1.4 to 3.3). Harmful alcohol consumption increased the odds of current Streptococcus pneumoniae BSI (OR=1.57, 95% CI 1.02 to 2.40) and of previous BSI (OR=1.7, 95% CI 1.1 to 2.5), skin infection (OR=1.7, 95% CI 1.1 to 2.6) or pneumonia (OR=4.3, 95% CI 2.8 to 6.7). Twenty-six per cent of Indigenous patients died at a mean (SD) age of 47±15 years. Complications of diabetes and harmful alcohol consumption predicted 28-day mortality (non-rheumatic heart disease, HR=2.9; 95% CI 1.4 to 6.2; chronic renal failure, HR=2.6, 95%CI 1.0 to 6.5; chronic liver disease, HR=3.3, 95% CI 1.6 to 6.7). Conclusions In a socially disadvantaged population undergoing a rapid epidemiological transition, NCDs are associated with an increased risk of infection and BSI-related mortality. Complex interactions between communicable diseases and NCDs demand an integrated approach to management, which must include the empowerment of affected populations to promote behavioural change.
INTRODUCTION
Complex interactions between the demographic, economic and sociological determinants of disease result in changing patterns of health and disease over time. 1 The development of modern social and economic structures, for example, has been associated with a reduction in infectious diseases and nutritional deficiencies and a corresponding rise in non-communicable diseases (NCDs) that are associated with ageing and lifestyle. 1 In many developing countries, the rapidity of this 'epidemiological transition' has resulted in a dramatic increase in NCD prevalence among populations that have a substantial pre-existing infectious disease burden. 2 3 This phenomenon proceeds at different rates according to the socioeconomic status of particular subgroups within a given population and may reinforce established health inequalities. 4 5 Among Indigenous people, forced displacement, the collapse of Indigenous economies and the destruction of sociopolitical structures have been the shared experience of colonisation. 6 Indigenous people living within developed countries continue to live in poverty and experience a 'protracted' epidemiological transition 4 that is associated with a double burden of communicable diseases and NCDs, 7 8 similar to that of many developing countries. 2 In central Australia, for example, diabetes and other NCDs are the major contributors to racial disparities in mortality 8 and to a life expectancy that remains 14 years less for Indigenous Australian men relative to their non-Indigenous peers. 9 A high burden of infectious diseases persists in this Indigenous population. Incidence rates of sepsis, 10 bloodstream infections (BSIs) 11 and childhood pneumonia 12 and prevalence rates of bronchiectasis 13 are the highest reported worldwide. Strongyloidiasis and chronic viral infections, such as with hepatitis B virus (HBV) and the human T-lymphotropic virus type 1 (HTLV-1), are also common. 11 Population-based infection-related mortality rates for Indigenous adults in central Australia therefore remain higher than those of some African countries prior to the current HIV pandemic and the median age of in-hospital death is only 48 years. 14 Interactions between communicable diseases and NCDs have been little studied; however, an appreciable effect of NCDs on infection rates is likely where pathogen exposure is frequent. Diabetes, for example, contributes to the risk of serious bacterial infections including Streptococcus pneumoniae 15 and Staphylococcus aureus, 16 which are common pathogens in overcrowded Indigenous Australian communities. 11 The NCD burden may therefore have a substantial impact on infection rates and outcomes where these two epidemics coincide. Such an interaction could reverse health gains in populations undergoing a rapid epidemiological transition and exacerbate health inequalities among disadvantaged subgroups within developed countries. The recent description in New Zealand of an increasing divergence in infection-related hospitalisation rates according to social status is consistent with this possibility and challenges health-transition theory. 17 Central Australia is well placed to study interactions between poverty, NCDs and infectious diseases. Most Indigenous residents live in remote communities in conditions of considerable socioeconomic disadvantage, leaving a minority within the major regional township of Alice Springs. 
The latter have ready access to a well-resourced medical facility, Alice Springs Hospital (ASH), which has sophisticated diagnostic capabilities and provides specialist medical care to a region of 1 000 000 km 2 . Indigenous residents of Alice Springs dwell in either overcrowded 'town camps', which have poor amenities and limited refuse disposal, or are integrated with the majority of the non-Indigenous population within the township's suburbs. Indigenous adults living in town camps and remote communities are often unemployed and have limited education and poor health literacy. 18 Among Indigenous adult residents of town camps, nearly half have 8 years or less schooling, labour participation rates are less than 20% and only 12% are employed. 19 Despite an extremely complex regulatory framework and numerous Government attempts to minimise risk, harmful alcohol consumption in this setting remains common. 20 The Indigenous population of central Australia also has among the highest BSI incidence rates reported. 11 Living conditions that increase the risk of pathogen exposure 21 and high background rates of focal infections, which provide portals of entry for bacterial invasion, are likely to precede these life-threatening infections. BSI incidence rates therefore provide measurable endpoints to which environmental and host factors contribute. We report the infectious and NCD burden among community residents of central Australia who presented with a BSI and determine risk factors for infection and death after stratifying by ethnicity.
METHODS
We conducted a retrospective review of all positive blood cultures collected from adult patients (age ≥15 years) admitted to ASH between 1 January 2001 and 30 June 2007. In July 2007, the Australian Federal Government suspended racial discrimination legislation and implemented an 'Emergency Response' that resulted in considerable uncertainty among Indigenous residents. 22 This raised concerns that the central Australian resident population could change as people moved interstate to escape these restrictions, and no data were collected after this date. Data collected included organism, ethnicity, dates of birth, dates of death, Indigenous status and place of residence. For patients who presented between 1 January 2003 and 30 June 2007, we also reviewed available International Classification of Diseases (ICD) morbidity codes and results of microbiological and radiological investigations for each admission for 5 years prior to the final BSI presentation. NCDs were derived from ICD-10 Australian Modification (AM) morbidity codes for diabetes, harmful alcohol consumption, >stage 2 chronic kidney disease, ischaemic heart disease, chronic liver disease and malignancy. Bronchiectasis was diagnosed radiologically using American College of Chest Physician criteria. Heart failure and valvular heart disease, including rheumatic heart disease (RHD), were diagnosed by transthoracic echocardiography. Ischaemic heart disease and cardiac failure were combined ('non-rheumatic heart disease') for statistical analysis.
Definitions
Residence Place of residence was categorised as (1) remote (>80 km from Alice Springs), (2) Alice Springs town camp and (3) urban (resident in Alice Springs, but not in a town camp). Nursing home residents were included in calculations of BSI incidence rates, but excluded from further analysis because the primary study objective was to determine risk factors for infection and death among community residents.
Infections A blood culture from which a pathogen was isolated was defined as a 'BSI episode'. Repeated culture of the same organism from blood cultures was regarded as a separate 'episode' only if blood samples were drawn more than 1 month apart. BSIs were defined as community-acquired if a pathogen was isolated from blood cultures drawn within 48 h of admission and nosocomial if isolated from blood cultures drawn after this time. Foci of infection were determined where possible from ICD morbidity codes in association with pathology and imaging results for each admission for 5 years prior to the final BSI during the study period. A diagnosis of pneumonia was made if there was radiological evidence of consolidation and this was attributed to the pathogen isolated from blood cultures if the same organism was also isolated from sputum or the blood culture isolate was an organism typically associated with pneumonia, such as S pneumoniae. BSIs exclude potential contaminants including coagulase negative staphylococci, Bacillus spp., coryneforms and viridans streptococci unless grown from more than one BC in a 24 h period and Acinetobacter sp. in the absence of an identifiable focus.
Statistics
All associations were assessed using data obtained for the final BSI admission within the study period. Univariate analysis for categorical data was performed using χ 2 statistics and Fisher's exact tests where appropriate. Multivariate analysis was performed using binary logistic regression. Short-term (28-day) and long-term survival analyses following the final BSI episode in the study period were performed using the log-rank statistic for univariate analysis and Cox regression for multivariate analysis. We calculated the annual population-based incidence rates for 2001-2006 for the combined Alice Springs and Anangu Pitjantjatjara Yankunyatjara (APY) land areas using the total number of BSI presentations each year as the numerator. The denominator used was the estimated adult resident population obtained from the Australian Bureau of Statistics 2006 census data for the Alice Springs region combined with that of the neighbouring APY land areas. To enable analysis according to place of residence, this population was further divided into that of (1) the Alice Springs urban area excluding town camps (Indigenous 2898, non-Indigenous 18471), (2) Alice Springs town camps, including that of a closely affiliated neighbouring community (Indigenous, 1482) and (3) the Alice Springs rural area (Indigenous, 8925; non-Indigenous, 2775), which included that for the APY land areas (Indigenous=1302, non-Indigenous=294).
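As a worked example of the incidence calculation described above (the arithmetic only; both rates are taken from this study's results): the population-based incidence rate is the number of BSI episodes divided by the resident person-years at risk, scaled to 100,000, and the incidence rate ratio (IRR) is the quotient of the two group rates,

$$\mathrm{IR} = \frac{\text{BSI episodes}}{\text{person-years at risk}} \times 100{,}000, \qquad \mathrm{IRR} = \frac{937}{64} \approx 14.6,$$

which reproduces the nearly 15-fold excess among Indigenous adults reported here.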
Previous infections
Excluding Indigenous patients who were at increased risk of recurrent infection (haemodialysis, 83; bronchiectasis, 27) and those residing outside the Alice Springs region who could not be followed up (17)
Community-acquired BSI among Indigenous patients
NCDs including chronic liver disease, non-RHD and chronic kidney disease were independent predictors of
Nosocomial and community-acquired BSI among non-Indigenous patients
In multivariate analysis, only non-RHD was an independent predictor of short-term mortality among non-Indigenous patients with a community-acquired BSI (HR=12.5, 95% CI 1.0 to 150.3; p<0.05). There were 3 deaths within 28 days among 12 non-Indigenous patients with non-RHD and 3 deaths among 56 patients without non-RHD. Numbers of nosocomial BSI among non-Indigenous patients were too few (n=5) to attempt survival analysis.
Long-term mortality
One hundred and forty-five (26%) Indigenous and 15 (27.3%) non-Indigenous patients died during the 2056 person-years of follow-up, at a mean±SD age of 47±15 and 68±21 years (p<0.001), respectively. Among Indigenous patients, mortality rates were again highest among those from town camps (log-rank χ²=5.05, p=0.08; figure 2). Among Indigenous patients, NCDs (non-RHD, chronic kidney disease, chronic liver disease and malignancy) and BSI with S aureus and S pneumoniae were independent predictors of long-term mortality following community-acquired BSI (table 5). Residence in a town camp (town camp, 7 of 11; urban residence, 0 of 6; remote areas, 14 of 30; χ²=6.5, 2df; p=0.04) and K pneumoniae BSI (HR=4.0, 95% CI 1.5 to 11.2; p=0.007) were the only univariate predictors of long-term mortality for nosocomial BSIs among Indigenous patients.
There were no independent predictors of long-term mortality for non-Indigenous patients with community-acquired infections, and too few non-Indigenous patients with nosocomial BSIs (n=5) to perform long-term survival analysis.
DISCUSSION
The Indigenous adult population of central Australia has among the highest BSI incidence rates worldwide. Relative to their non-Indigenous peers, rates for Indigenous adults were nearly 15-fold higher overall and 40-fold higher among Indigenous town camp residents. A high burden of other infections, particularly repeated respiratory and skin infections, provides portals of entry for life-threatening invasive bacterial disease. Nearly 70% of Indigenous patients required admission for an acute infection in the preceding 5 years, 24.4% experienced a prior BSI and a second unrelated bacterial infection was found in 12.4% of patients. Chronic viral and parasitic infections were also common. Among Indigenous adults who were tested, more than 60% had been infected with hepatitis B virus, 13% remained HBsAg positive, nearly half were HTLV-1 seropositive and 36% were S stercoralis seropositive. A similar burden of infection is experienced by Indigenous children among whom frequent coinfection with bacterial pathogens and parasites 23 contributes to 'failure-to-thrive'. 24 In our adult cohort, 26% of Indigenous patients died during the study period at a mean age of only 47 years. Although we were unable to attribute cause of death in the present study, 60% of Indigenous deaths at ASH are infection-related. 14 High prevalence rates of NCDs were also found in our Indigenous cohort. These included diabetes, harmful alcohol consumption, chronic lung disease and endstage kidney disease, all of which increase the risk of bacterial infection. 15 16 25 26 Invasive pneumococcal disease (IPD), for example, is 3, 5.6 and 7-11 times more common among patients with diabetes, 15 chronic lung disease 15 and alcohol dependence, 15 27 respectively. Alcohol dependence and diabetes also increase the risk of a BSI requiring intensive care nearly six-fold and haemodialysis increases risk several 100-fold, 26 largely due to prolonged central venous access. 28 In the present study, the rates of diabetes among Indigenous adults were nearly three times the reported background rates. 29 Diabetes was associated with S aureus BSI and with previous skin infections, but not with S pneumoniae BSI. Stages 3-4 chronic kidney disease, which is most often a complication of diabetes in our patient population, 30 was associated with any previous infection. Harmful alcohol consumption was associated with S pneumoniae BSI and with previous infection-related admissions. NCDs, including non-RHD, chronic kidney disease and chronic liver disease, were also major predictors of mortality after a BSI. However, once invasive infections were established, S aureus and S pneumoniae predicted death independently of any underlying medical condition.
The present study has compared the risk of NCDs among patients presenting with a BSI and cannot determine the population-based risks attributable to these conditions. Nevertheless, racial disparities in NCD prevalence are unlikely to fully account for the BSI incidence rate ratios reported here, and nor do regional differences in their prevalence 29 explain IPD incidence rates that are twice as high among Indigenous residents of central Australia relative to those of the tropical north. 31 In the USA, higher IPD incidence rates among Black Americans 15 32 are more robustly associated with poverty than race. 32 An increased risk of S aureus infection has also been reported among those of lower socioeconomic position [33][34][35] and infection-related hospital admissions in New Zealand are associated with social deprivation. 17 The socioeconomic circumstances of Indigenous Australians are therefore likely to further increase the infection risks associated with NCDs.
Social disadvantage predisposes to NCDs 36 37 while increasing pathogen exposure and limiting opportunities to implement behavioural strategies that ameliorate risk. 38 In some Indigenous Australian communities, the average number of people living per house is 17 39 and non-functioning health hardware leads to environmental conditions that are detrimental to householders. 21 Overcrowded housing 40 and an inability to maintain adequate skin hygiene 21 contribute to high rates of pyoderma. More than 40% of Indigenous patients in the present study were previously admitted with skin infections, which are the most common primary focus for S aureus bacteraemia in this population. 41 Scabies, a recognised cause of S aureus and Streptococcal pyoderma, 40 42 affected 4% of our cohort. Streptococcal pyoderma underlies most cases of RHD in the Northern Territory 39 and this was confirmed echocardiographically in 2% of our Indigenous cohort. Similarly, the transmission of respiratory pathogens is promoted by household crowding 43 and nearly 40% of Indigenous adults were admitted previously with pneumonia. Environmental contamination, 24 inadequate sanitation and unhygienic food preparation areas 21 contribute to infection with enteric pathogens and S stercoralis. The risks of complicated strongyloidiasis, crusted scabies 44 and bronchiectasis 13 are further increased by HTLV-1 infection; however, no attempt has been made to control transmission of this virus among Indigenous Australians. These effects are compounded by poor health literacy and Indigenous adults are less likely to engage with a conventional medical paradigm. 18 Delays in seeking care for uncomplicated urinary tract infections may therefore contribute to the very high Gram-negative BSI incidence rates reported here.
The retrospective design of this study results in a number of limitations. First, only limited demographic information is collected by ASH and the Indigenous population is relatively mobile. Residents of remote communities, for example, frequently stay in town camps and this is not recorded by ASH. The effects of a town camp residence may therefore be underestimated if large numbers of remote residents acquire infection during these visits. Although the foci of infection were determined by reviewing the results of microbiology and imaging for each presentation, these varied between patients according to the practice of the treating physician. The number of patients with concurrent bacterial infections and medical conditions, such as RHD, may therefore be underestimated. Similarly, seropositivity rates for infections such as HBV and HTLV-1 could only be determined for a subset of patients. A further limitation is the identification of NCDs and previous infections using ICD codes; however, coding errors are unlikely to vary systematically according to ethnicity or place of residence. The use of ICD codes does, however, limit our ability to study factors that are more difficult to define and that might also influence infection risk, such as nutrition and health literacy. Finally, the present study has demonstrated an increased risk of infection and death associated with town camp residence. This occurred despite better access to healthcare relative to remote residents and little difference in crude measures of socioeconomic deprivation. 7 For communityacquired BSIs, risk of death was strongly associated with NCDs; however, these conditions did not fully account for the increased risk following a nosocomial BSI. Unmeasured socioeconomic factors might contribute to increased mortality among town camp residents; however, recent research linking health outcomes to perceived racism 45 may also be relevant to this marginalised population.
The disease burden among the Indigenous population of central Australia is similar to that of many developing countries where NCD prevalence rates are rising rapidly in a setting of persistently high infection rates. 2 46 Recently, the validity of conventional health transition theory has been challenged by findings that infectionrelated hospitalisation rates are increasing among the most socially disadvantaged community members in a developed country. 17 The present study provides a possible explanation for this observation and further suggests that, in contrast to the orderly epidemiological transition envisaged by Omran (1971), 1 life expectancy may fall where social deprivation persists in the face of a rising prevalence of NCDs. High BSI incidence rates among Indigenous Australians were associated with a heavy burden of other infections that provide portals of entry for invasive bacterial disease. Improving life expectancy in this setting will require public health initiatives to reduce pathogen exposure in addition to controlling the burgeoning NCD burden. Diabetes, harmful alcohol consumption and organ damage resulting from these conditions increased both the likelihood of infection and the subsequent risk of death. Both conditions are included in proposed international management strategies to control the NCD crisis. 37 However, our findings also illustrate the complexity of interactions between communicable diseases and NCDs and support calls for an integrated approach to disease management. 47 The intimate association between these conditions and human behaviour renders the empowerment of affected populations to adopt protective health-related strategies critical to the success of any management programme. 47 | 2017-04-20T00:37:17.116Z | 2013-07-01T00:00:00.000 | {
"year": 2013,
"sha1": "4726f50a516827ba31c4c7d9dc656f0009c4df4d",
"oa_license": "CCBYNC",
"oa_url": "https://bmjopen.bmj.com/content/3/7/e003070.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4726f50a516827ba31c4c7d9dc656f0009c4df4d",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247287362 | pes2o/s2orc | v3-fos-license | Screening of Chitosan Derivatives-Carbon Dots Based on Antibacterial Activity and Application in Anti-Staphylococcus aureus Biofilm
Introduction Pathogenic bacteria, especially the ones with highly organized, systematic aggregating bacteria biofilm, would cause great harm to human health. The development of highly efficient antibacterial and antibiofilm functional fluorescent nanomaterial would be of great significance. Methods This paper reports the preparation of a series of antibacterial functional carbon dots (CDs) with chitosan (CS) and its derivatives as raw materials through one-step route, and the impact of various experiment parameters upon the optical properties and the antibacterial abilities have been explored, including the structures of the raw materials, excipients, and solvents. Results The CDs prepared by quaternary ammonium salt of chitosan (QCS) and ethylenediamine (EDA) exhibit multiple antibacterial effects through membrane breaking, DNA and protein destroying, and the production of singlet oxygen. The CDs showed excellent broad-spectrum inhibitory activity against a variety of bacteria (Gram-positive and negative bacteria), in particular, to the biofilm of Staphylococcus aureus with minimum inhibitory concentration at 10 µg/mL, showing great potential in killing bacteria and biofilms. The biocompatibility experiments proved that QCS-EDA-CDs are non-toxic to human normal hepatocytes and have low haemolytic effect. Furthermore, the prepared QCS-EDA-CDs have been successfully used in bacterial and biofilm imaging thanks to their excellent optical properties. Conclusion This paper explored the preparation and application of functional CDs, which can be used as the visual probe and therapeutic agents in the treatment of infections caused by bacteria and biofilm.
CDs have shown great potential for applications in biomedical fields, and their antibacterial ability has also aroused great interest among researchers. Normally, CDs can affect the physiological functions of bacteria and finally kill them through mechanisms such as oxidative stress, 9 cell membrane damage, 10 induction of gene apoptosis, 11 and adsorption and encapsulation. 12 However, CDs often do not show effective inhibitory ability towards biofilms despite their strong activity against planktonic bacteria. 15 Li and his team 16 prepared a nanosystem based on CDs whose minimum inhibitory concentration (MIC) against Staphylococcus aureus (S. aureus) was 250 μg/mL; however, when treating biofilm, its MIC rose to 1000 µg/mL. Therefore, the development of CDs with effective activity against biofilms is of great significance to clinical application.
At present, CDs destroy biofilms in two ways: by killing the resident bacteria to degrade the biofilm, or by directly destroying the materials of the biofilm, such as proteins, to affect its formation. For example, Liang et al 15 used tinidazole to prepare tinidazole CDs (TCDs), and the prepared TCDs could inhibit the growth of P. gingivalis with an MIC of 50 μg/mL and completely inhibit the formation of biofilm at 100 μg/mL. Singh and his team synthesized biomass CDs with curcumin as the raw material, and these CDs could interact with matrix proteins and thus possess anti-biofilm ability through biofilm-degrading behavior. 17 However, reports on anti-biofilm CDs are still limited, and the anti-biofilm mechanisms need further exploration.
The excellent properties of CDs are the basic requirement for their applications in the antibacterial and anti-biofilm fields. The antibacterial function of CDs was traditionally realized through coupling, 18 but the one-step route to prepare antibacterial CDs is more attractive owing to its simple operation, environmental friendliness, low cost and stable emission. 19 It has been found that the inhibitory behavior of prepared CDs is influenced by the types of precursors, the excipients (including carbon source, nitrogen source, sulfur source, reducing agent, passivator, doping agent and acid-base regulator, etc.) and the preparation parameters. Analysis and screening of raw materials, especially those of the same type or with different structural domains, are therefore crucial in CD studies. Meanwhile, the excipients and the optimization of the synthesis process can also adjust the electron density and group composition, and thus the properties of the product, including particle size and optical properties, further affecting its biochemical behavior in practical applications. For instance, cationic CDs can easily penetrate through or aggregate on biofilms, 20 so chitosan (CS), with cationic groups on its surface, is an ideal raw material to prepare cationic antibacterial CDs. 21 Travlou et al discovered that, compared with S-doped CDs, N-doped CDs exhibited higher antibacterial ability, which was related to their specific surface-chemical properties and particle size. 22 Huang and his team found that the MIC of spermidine CDs prepared at 260°C against methicillin-resistant Staphylococcus aureus (MRSA) is at least ten times lower than that of CDs prepared at other temperatures. 23 These studies provide a meaningful basis for designing highly efficient antibacterial materials. Notably, with their extraordinary optical properties and biocompatibility, CDs have also been used in microorganism monitoring. 24 This research introduces the preparation of blue-fluorescent CDs with chitosan (CS) and its derivatives as carbon sources through a one-step hydrothermal route. The impacts of the raw materials and their structures, the excipients and the solvents on the antibacterial ability of the prepared CDs are presented, and numerous characterization methods are adopted to study the reasons for the antibacterial behaviors of the different CDs. The experimental results showed that the CDs prepared from QCS and ethylenediamine (EDA) (QCS-EDA-CDs) exhibited strong inhibitory ability against several types of bacteria. Scanning electron microscopy and molecular biology methods have been used to explore the impact of QCS-EDA-CDs on the growth of bacteria. Furthermore, optical experiments showed the induction of reactive oxygen species (ROS) under a daylight lamp, indicating that the antibacterial ability is based on multiple mechanisms. Meanwhile, the CDs also exhibited strong inhibitory behavior against the biofilm of Staphylococcus aureus (S. aureus), including its formation process and mature biofilms. The prepared QCS-EDA-CDs showed low toxicity in the biocompatibility evaluation, making them an ideal material as a multi-colored fluorescent probe for S. aureus cell and biofilm imaging.
Material Characterizations
Fluorescence spectra were measured on an LS55 fluorescence spectrophotometer (PerkinElmer). Ultraviolet-visible (UV-Vis) absorption spectra were recorded using a Lambda-35 UV-Vis spectrophotometer (PerkinElmer). Transmission electron microscopy (TEM) and high-resolution TEM (HRTEM) images were acquired using a JEM-1400Plus transmission electron microscope (Japan Electron Optics Laboratory Co., Ltd). Zeta potentials of the CDs were evaluated with a zetasizer (Nano ZSE, Malvern Instruments, UK). Surface analysis was carried out on a VG Multilab 2000 X-ray photoelectron spectrometer. Optical density (OD) of the cells was measured using a microplate reader (Thermo Scientific, England, UK). Circular dichroism spectra were collected using a Chirascan Plus spectropolarimeter (Applied Photophysics Ltd.). Scanning electron microscope (SEM) images of bacteria were obtained using a Zeiss SIGMA scanning electron microscope (Carl Zeiss Jena). Laser-scanning confocal fluorescence images were acquired with a fluorescence optical microscope (Nikon ECLIPSE Ti).
Synthesis Method of CDs
The synthesis procedure of QCS-EDA-CDs was as follows: 200 mg of QCS was dissolved in 200 μL EDA and 20 mL DI water, and the mixture was then transferred to a Teflon-lined stainless-steel autoclave and kept at 200°C for 4 h. Synthesis methods of the other CDs are given in the Supplementary Materials.
Inhibition of Biofilm Experiments
Fresh S. aureus culture (10^9 CFU mL−1) was diluted 1:10 in 200 μL TSB medium containing QCS-EDA-CDs at varying concentrations (5, 10, 15, 20, 25, 30 and 40 μg/mL) on 96-well plates. Wells with no QCS-EDA-CDs were used as the control. The mixture was cultured at 37°C. The OD value of each well was measured using the microplate reader at 600 nm over 48 hours, and the time curves of OD600 at the different concentrations were drawn.
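A minimal sketch of how the plate readings described above can be reduced to an inhibition curve and a crude minimum biofilm inhibitory concentration (MBIC). This is not the authors' code: every OD600 value below is hypothetical, and the 90% threshold is a common convention rather than a figure taken from this paper.

```python
# Hypothetical OD600 endpoint readings from the 96-well biofilm assay;
# all numbers below are illustrative, not measured values.
od_blank = 0.05        # medium-only background (assumed)
od_control = 1.20      # untreated S. aureus biofilm (assumed)
od_treated = {         # QCS-EDA-CDs concentration (ug/mL) -> OD600
    5: 0.95, 10: 0.60, 15: 0.40, 20: 0.25, 25: 0.15, 30: 0.10, 40: 0.08,
}

def inhibition_percent(od, control, blank):
    """Percent reduction in biofilm biomass relative to the untreated control."""
    return 100.0 * (1.0 - (od - blank) / (control - blank))

# Report inhibition at each concentration and flag the lowest concentration
# reaching >= 90% inhibition as a crude MBIC estimate.
mbic = None
for conc in sorted(od_treated):
    pct = inhibition_percent(od_treated[conc], od_control, od_blank)
    print(f"{conc:>3} ug/mL: {pct:5.1f}% inhibition")
    if mbic is None and pct >= 90.0:
        mbic = conc

print("Estimated MBIC (>=90% inhibition):", mbic, "ug/mL")
```

With these illustrative numbers the estimate lands at 25 ug/mL; with real replicate data one would average wells and propagate the blank subtraction before thresholding.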
Live/Dead Cell Imaging
To evaluate the inhibitory effect of the sample on the formation of biofilms, fluorescence photography was performed using the confocal laser scanning microscope (CLSM). Upon incubation, the S. aureus biofilm was rinsed thrice in PBS (pH = 7.4). It could be detected on CLSM after staining with DAPI/PI (10 µg/mL) for 15 minutes. The fluorescence image was obtained when the sample was simultaneously excited with 350 and 533 nm, and the corresponding emission was at 461 and 615 nm, showing blue and red, respectively.
SOSG Oxidation in the Presence of 1 O 2
The reaction between SOSG and SOG produced from photoirradiation for QCS-EDA-CDs (1 mg/mL) in PBS was investigated. SOSG was added with a concentration of 2.5 µM. The control groups include: (1) Pure SOSG; (2) pure CDs. The tested solutions were irradiated with daylight lamp or placed in the dark for 30 minutes. The SOG was generated from irradiation at 671 nm. The SOSG fluorescence was recorded under the excitation at 494 nm, and the maximum was detected upon irradiation to decide sample SOG. Sample SOG was assessed through SOSG fluorescence enhancement in comparison with the background or control sample. 25
Results and Discussion
The Impact of Different Conditions Upon the Antibacterial Ability of CDs
The Effects of Raw Material Type on CDs
CS and its derivatives (carboxymethyl chitosan (CMCS) and oligochitosan quaternary ammonium salt (QCS)) have been used to prepare CDs, to study the impact of the carbon-source domain on the inhibitory abilities of the prepared products. CS-CDs, CMCS-CDs and QCS-CDs have been prepared through a one-step hydrothermal route (200°C, 4 h) (Scheme 1) with CS, CMCS and QCS as raw materials, respectively. Since CS is only slightly soluble in water, 1% acetic acid was used as the solvent when CS was the raw material. The antibacterial abilities of the three prepared CDs against S. aureus were studied to explore the impact of raw material structure on the antibacterial abilities of the products. According to Table 1, the minimum inhibitory concentrations (MIC) of CS-CDs, CMCS-CDs and QCS-CDs against S. aureus are 250, >1000 and 10 μg/mL, respectively. CMCS-CDs show almost no antibacterial ability against S. aureus, while QCS-CDs have the best antibacterial ability.
Scheme 1 shows the structural domain differences of CS and its derivatives. CMCS and QCS are prepared by using carboxymethyl and 2,3-epoxypropyl trimethylammonium chloride groups to replace the hydrogen atom in the amino group of CS. Some residual groups of these three raw materials still exist on the surface of the prepared products after high-temperature dehydration, nucleation and carbonization. Since the antibacterial abilities of the raw materials might be inherited by their products, the electrical properties of the three raw materials and their products were investigated to explore the reasons for the difference in antibacterial abilities (Figure S2). The MICs of CS and QCS against S. aureus are both 25 μg/mL, while CMCS has almost no antibacterial ability. The experimental results show that CMCS and its product CMCS-CDs are both negatively charged, which prevents their interactions with negatively charged bacteria. The positive charge of CS and QCS (zeta potentials: +65.8 and +59.1 mV) and of their products CS-CDs and QCS-CDs (zeta potentials: +32.9 and +30.7 mV) enables them to interact with bacteria, penetrate the cell membrane and finally kill the bacteria. Since QCS-CDs exhibit better water solubility and stronger antibacterial ability than the other prepared CDs, they were chosen as the target of further studies.
The Influence of Excipients on the Synthesis of CDs
As is known, besides the carbon source, other synthesis parameters, including the nitrogen source, sulfur source, reducing agent, passivator, doping agent and acid-base regulator, can also influence the optical properties of the prepared CDs, but their influence upon the antibacterial behavior of the products is often neglected. 26 The impacts of the common auxiliary materials CA (50 mg) and ethylenediamine (EDA, 200 μL) upon the properties of the prepared QCS-CDs were investigated. According to Figure 1A and B, the UV-vis absorption peaks of QCS-CA-CDs and QCS-EDA-CDs are at 270 nm and 304 nm, attributed to electron transitions from the σ and π orbitals (the highest occupied molecular orbitals). At the excitation wavelength of 370 nm, their emission wavelengths reach 465 nm and 466 nm, almost the same as that of QCS-CDs (467 nm), showing that their luminescent groups derive mainly from the carbon source QCS. Moreover, the QY of QCS-EDA-CDs is calculated as 9.0% with quinine sulfate as reference, higher than that of QCS-CDs prepared with water as solvent (QY = 0.9%). The high fluorescence QY of EDA-passivated CDs may be attributed to the interplay between the trapped holes and the passivation of the CDs. 27 According to Table 1, the MIC of QCS-CA-CDs against S. aureus is 125 μg/mL, much higher than that of QCS-CDs (10 μg/mL), while the MIC of QCS-EDA-CDs is 10 μg/mL, similar to that of QCS-CDs.
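For reference, a relative quantum yield of this kind is usually obtained by the single-point comparative method against the quinine sulfate standard (Φ_ref ≈ 0.54 in 0.1 M H2SO4); the following is the standard textbook relation, assumed here rather than taken from this paper:

$$\Phi_s = \Phi_{\mathrm{ref}}\cdot\frac{I_s}{I_{\mathrm{ref}}}\cdot\frac{A_{\mathrm{ref}}}{A_s}\cdot\frac{n_s^{2}}{n_{\mathrm{ref}}^{2}},$$

where I is the integrated emission intensity, A the absorbance at the excitation wavelength (usually kept below about 0.1 to avoid inner-filter effects) and n the refractive index of the solvent.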
Several characterization means have been employed to compare their functional groups of these CDs to study their antibacterial mechanisms. As shown in FTIR spectra ( Figure 1C), the peak of QCS-CA-CDs at 1375 cm −1 in infrared absorption spectra results from the in-plane bending in -OH bonds, while the peak of QCS-EDA-CDs at 1063 cm −1 is attributed to the vibrations of C-O bonds. 28 Besides the absorption peak of −N + (CH 3 ) 3 (1488 cm −1 ), QCS-CA-CDs also have absorption peaks of −COOH at 3398 and 1697 cm −1 . 29 The measured zeta potentials of these two CDs exhibit obvious difference. The zeta potential of QCS-CA-CDs is +11.7 mV ( Figure 1D), lower than that of QCS-EDA-CDs (+20.6 mV). When CA is used as carbon source in preparation, the carboxyl group in the structure of CA can react with quaternary ammonium group in QCS, leading to reduced zeta potential. The changed surface group and reduced zeta potential would decrease the binding ability of prepared CDs with negatively charged bacteria, and thus its antibacterial abilities.
The Impact of Solvent Upon the Antibacterial Ability of CDs
Though the reaction solvent has been regarded as the crucial synthesis parameter for optical properties of prepared products, 30,31 the impact of solvent upon the antibacterial ability of prepared CDs still needs detailed research. Water and organic solvent FO are used as solvent to prepare QCS-CDs and QCS-FO-CDs. As shown in Figure S3a, the absorption peak of QCS-FO-CDs in UV-vis absorption spectrum reaches 271 nm, and emission wavelength reaches 449 nm at the excitation wavelength of 370 nm, shorter than the emission peak of QCS-CDs.
With S. aureus as target bacteria, QCS-FO-CDs exhibit almost no antibacterial ability against bacteria (MIC>1000 μg/mL), much weaker than QCS-CDs prepared in water. The enhancement of other functional groups in CD skeleton,
Characterization of QCS-EDA-CDs
Several characterization methods were adopted to investigate the properties of QCS-EDA-CDs. The emission wavelength of QCS-EDA-CDs increases from 439 nm to 468 nm as the corresponding excitation wavelength increases gradually from 340 nm to 390 nm (Figure 2A), showing its excitation-wavelength-dependent property. The morphology of the prepared QCS-EDA-CDs is presented by TEM (Figure 2B). The prepared QCS-EDA-CDs have been compared with other chitosan-derived nanoparticles and quaternized CDs reported in other references, in terms of preparation methods and antibacterial properties (Table 2). QCS-EDA-CDs show much better antibacterial behavior against S. aureus than chitosan nanoparticles (CS NPs), carvacrol-grafted chitosan nanoparticles (CSCA NPs) and eugenol-grafted chitosan nanoparticles (CSEU NPs), whose MICs against S. aureus are beyond 500 µg/mL. 35 Though the MIC of QCS-EDA-CDs is quite close to that of quaternized CDs, the QCS-CDs reported in this paper require a simpler and easier synthesis process.
Since Gram-negative bacteria have an outer membrane structure that is difficult to penetrate and protects the bacteria from antibiotics, infections caused by Gram-negative bacteria are harder to cure than those caused by Gram-positive bacteria. The above-mentioned references also reported the antibacterial ability of chitosan nanoparticles (CS NPs, CSCA NPs and CSEU NPs) against E. coli, 35 with their MICs at 0.5-1, 0.5-1 and 0.25-0.5 mg/mL, respectively. QCS-EDA-CDs exhibit similar inhibitory ability to those reported nanoparticles.
The structural difference between these two types of bacteria might be the reason for such a difference in the exhibited antibacterial abilities of QCS-CDs. It has been found that Gram-positive bacteria show greater sensitivity towards lipophilic molecules than Gram-negative bacteria. The cell walls of Gram-positive bacteria (such as S. aureus) and Gram-negative bacteria (such as E. coli) differ in chemical composition. 38 S. aureus and related bacteria have a porous cell wall because of the thick layer of cross-linked peptidoglycan distributed on the plasma membrane, which may promote the interaction of QCS-EDA-CDs with S. aureus and MRSA, and thus encourage the antibacterial behavior of the CDs. 39 However, the cell wall of E. coli contains an outer membrane and an intermittent peptidoglycan network, hindering the interaction between the CDs and E. coli and making the bacteria exhibit stronger resistance against the inhibitory behavior of the CDs.
Inhibitory Abilities of QCS-EDA-CDs Against Biofilm
The inhibitory ability of the prepared QCS-EDA-CDs against bacterial biofilm was tested with the pathogen S. aureus as the target bacterium. Compared with the control group, QCS-EDA-CDs can significantly decrease the biofilm amount of S. aureus (Figure 3), and the inhibition rate against the biofilm is positively related to the concentration of the antibacterial agent. When
Nowadays, a number of antibacterial nanoparticles with biofilm-inhibiting activity have been developed. 16,[40][41][42][43][44][45] However, they often have some defects, such as an excessively high antibacterial MIC, or inhibiting biofilm only at concentrations much higher than the MIC within a short period of time, which is not conducive to the application of antibacterial materials in actual medical treatment. With reference to previous studies (Table 3), QCS-EDA-CDs have long-term biofilm-inhibition activity, and their MBIC and MIC are not much different. This shows the superiority of QCS-EDA-CDs in inhibiting biofilms.
Recent studies have reported the inhibitory activity of chitosan towards the formation of biofilms by a variety of bacteria and fungi. 45 Chitosan can inhibit the formation of biofilm and lead to the dispersion of pre-formed biofilm. The confocal laser scanning microscope (CLSM) images in Figure 4 further show the impact of QCS-EDA-CDs upon the formation of the S. aureus biofilm. At a low concentration of QCS-EDA-CDs (250 μg/mL) (Figure 4), the biofilm shows a compact structure and the internal bacterial cells are well clustered, whereas at a high concentration of QCS-EDA-CDs the density of the biofilm decreases significantly and bacterial debris can be clearly observed. The images prove the inhibitory effects on the growth and reproduction of the biofilm, as well as on the mature biofilm.
To assess more directly the impact of QCS-EDA-CDs upon the mature biofilm of S. aureus, CLSM was also used to observe the PI/DAPI-labelled biofilms of S. aureus treated with QCS-EDA-CDs at different concentrations. DAPI and PI are common staining agents for live and dead cells: wall/membrane-damaged bacteria are stained red, and wall/membrane-intact bacterial cells are stained blue. As shown in Figure 5A-C, the untreated biofilm of S. aureus exhibits blue fluorescence without any red fluorescence. In contrast, after treatment with QCS-EDA-CDs, the biofilm shows strong red fluorescence, illustrating the death of a large number of bacteria. Meanwhile, the biofilm treated with two concentrations of QCS-EDA-CDs for the same duration of time (Figure 5) shows a gradual increase in red fluorescence, and under the light field the dense biofilm structure can be observed being destroyed. The results show that the high concentration of QCS-
Antibacterial Mechanism of QCS-EDA-CDs
The antibacterial mechanism of QCS-EDA-CDs was explored with S. aureus as the model bacterium. As shown in the SEM images (Figure 6A), the mature bacterial biofilm shows an aggregation of intact cells connected and coated with EPS, and the bacterial cells are plump. After being treated with QCS-EDA-CDs for four hours, the biofilm shows an obvious decrease in its density and in the number of bacteria (Figure 6B). The bacterial cells are wrinkled, and their surfaces are seriously damaged and collapsed. The wrinkled and collapsed shape of the bacteria proves the disruption of membrane integrity induced by QCS-EDA-CDs, while in the control group the bacteria under normal culture conditions retain a spherical structure. The result shows that QCS-EDA-CDs kill bacteria mainly through rupturing their cell membranes.
The prepared QCS-EDA-CDs have been proven to possess excellent antibacterial capacity towards Gram-positive and Gram-negative bacteria. Considering the crucial role of the surface-chemical properties of QCS-EDA-CDs in their interaction with bacteria, 46 it is concluded that QCS-EDA-CDs realize their antibacterial activity through destroying the cell membrane and decomposing EPS. The small size and the hydrophobic organic chains of QCS-EDA-CDs make them easy to enter the substrate layer of the biofilm, and the abundant quaternary ammonium cations on the surface of the CDs interact with the biofilm, destroying the matrix structure of the biofilm and finally destroying the bacterial cells (Scheme 2).
Photodynamic therapy (PDT) is a conventional therapy against bacteria. In the presence of oxygen, a photosensitizer transfers the absorbed photon energy to nearby oxygen molecules and generates reactive oxygen species (ROS) such as 1O2. 47 SOSG interacts with 1O2 to generate SOSG endoperoxides (SOSG-EP) and releases intense green fluorescence with a maximum at 540 nm. 48 A daylight lamp could be used as the light source. As shown in Figure S5, in contrast to the pure SOSG (2.5 μM) group, the solution containing QCS-EDA-CDs exhibits obviously stronger fluorescence intensity, showing the formation of 1O2. The pure QCS-EDA-CDs do not exhibit any fluorescence at 540 nm after 30 min of illumination, illustrating that the change in fluorescence intensity originates from the combination of SOSG and reactive oxygen species.
Circular dichroism spectroscopy can be considered a meaningful approach to detect the secondary structure for DNA and proteins in the solution. 49 It was used to examine the impact of QCS-EDA-CDs on DNA. As shown in Figure 7A, in comparison with the control group, the decline of QCS-EDA-CDs group's peak suggests that QCS-EDA-CDs loosen DNA's double helix structure. Because of the nanoscale size, QCS-EDA-CDs penetrate into the bacterial cell upon penetrating the cell membrane, and loosen the naked DNA structure of bacteria, thus hindering the proliferation of bacteria. Total protein of S. aureus is retrieved using the bacterial total protein extraction kit, and circular dichroism spectroscopy is applied in the study on the interaction of QCS-EDA-CDs with the total protein of S. aureus. Without the presence of QCS-EDA-CDs, there exists a broad absorption peak at about 220 nm as the result of the n→π* transition in the α-helical structure ( Figure 7B). 50 After the incubation with QCS-EDA-CDs, the weakened CD signal proves the reduced α-helical structure of protein because of the interaction between CDs and protein, leading to the change in hydrogen bond network and secondary structures of proteins, and consequently the damages of bacteria activities. Upon protein incubation with QCS-EDA-CDs, three new protein absorption peaks are found at 202 and 203 nm, suggesting the presence of QCS-EDA-CDs influences protein structural characteristics. The experimental results verify the antibacterial activity of QCS-EDA-CDs is through changing the protein and DNA structure for bacteria.
Alternatively, as shown in Scheme 2, QCS-EDA-CDs damage cell membranes, intracellular protein and DNA, eventually resulting in bacterial death to exert antimicrobial effects. Admittedly, for better understanding the specific targets and detailed mechanism of QCS-EDA-CDs for bacteria, other related experiments and technologies should be also verified, such as the investigation on the anti-quorum sensing activity of CDs as fungal crude extracts, 51 to adjust the biofilm formation.
Biocompatibility Assay of QCS-EDA-CDs
Cytotoxicity is regarded as one of the important factors that determine the safety of biomedical materials, and the cytotoxicity and blood compatibility of QCS-EDA-CDs were therefore examined. LO2 cells were used as the target cells. Figure 8A shows the proliferation rates of cells cultured at different doses of QCS-EDA-CDs for 48 h. The MTT assay reveals that the average cell viability exceeds 97% at a QCS-EDA-CDs concentration of 200 µg/mL. This finding supports QCS-EDA-CDs as ideal candidates for use in cellular imaging for diagnostic purposes.
Moreover, an in vitro hemolysis assay was conducted on defibrinated human blood to evaluate the biocompatibility of QCS-EDA-CDs and QCS towards red blood cells (RBCs) in biological solution. Spectrophotometric analyses of the supernatants and photographs of the treated RBCs were gathered after exposing the RBCs to QCS-EDA-CDs and QCS at different concentrations for 3 h (Figure 8B). Notably, the supernatant of the RBC suspension remains clear even at a measured concentration of 500 µg/mL. When the sample concentration exceeds 500 μg/mL, the hemolysis rates of QCS-EDA-CDs and QCS are less than 0.08% and 1.06%, respectively. A hemolysis rate below 5% is considered safe in biomedical applications. This result indicates that QCS-EDA-CDs do not lead to the rupture and hemolysis of red blood cells under physiological conditions and show excellent blood compatibility.
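The hemolysis percentages quoted above are conventionally computed from the supernatant absorbance measured against a negative control (PBS) and a positive control (deionized water or Triton X-100); the commonly used formula, assumed here since the paper's exact expression is not given, is

$$\text{Hemolysis}\,(\%) = \frac{A_{\text{sample}} - A_{\text{neg}}}{A_{\text{pos}} - A_{\text{neg}}} \times 100,$$

so a rate below 5% corresponds to a supernatant absorbance barely above the PBS background.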
Conclusion
In summary, with CS and its derivatives as raw materials, a variety of blue-fluorescent CDs were prepared through a simple and fast one-step hydrothermal route to study how the carbon source affects the optical properties and antibacterial activities of CDs. By carefully selecting the raw material type and synthesis parameters, QCS-EDA-CDs with high antibacterial ability were prepared. Numerous characterization methods were adopted to examine the antibacterial mechanism and the process by which QCS-EDA-CDs act against bacteria. The experiments show that the abundant quaternary ammonium groups on QCS-EDA-CDs make the CDs strongly positively charged, leading to easy binding to the negative charges on the surface of bacterial cells. QCS-EDA-CDs can destroy proteins and the secondary structure of DNA, disorder normal cellular functions, and finally kill bacterial cells. Meanwhile, QCS-EDA-CDs have been proven to generate singlet oxygen under a daylight lamp and can thus achieve antibacterial effects through multiple mechanisms. The prepared CDs can also inhibit the growth of S. aureus biofilm. Owing to their excitation-wavelength-dependent emission, QCS-EDA-CDs have been applied to the imaging of bacterial cells and biofilms. Cytotoxicity and hemolysis tests also demonstrate their excellent biocompatibility. These results show the potential of QCS-EDA-CDs for novel fluorescence labeling, biomedical imaging, and the visual treatment of bacterial infections. | 2022-03-08T16:58:40.532Z | 2022-03-01T00:00:00.000 | {
"year": 2022,
"sha1": "2550f3edd254881f61725113903564b76d734f4e",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=78798",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5ea03d7fb568df64145d69fb0b8389c2a7477d3b",
"s2fieldsofstudy": [
"Materials Science",
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237279403 | pes2o/s2orc | v3-fos-license | A case of pulmonary sclerosing pneumocytoma diagnosed preoperatively using transbronchial cryobiopsy
Background The preoperative diagnosis of pulmonary sclerosing pneumocytoma (PSP) is complicated since PSP has several histological structural patterns within the same neoplasm; hence, it is sometimes pathologically misdiagnosed as adenocarcinoma or carcinoid. In recent years, with the prevalence of transbronchial cryobiopsy (TBLC), we are able to obtain larger specimens than previously. However, to date, there have been no reports describing PSP diagnosed using TBLC. Case report A 43-year-old man was referred to our hospital for an abnormal lesion in the left lung discovered on routine health examination. A computed tomography scan of the chest revealed a 14-mm heterogeneous round nodule with surrounding ground-glass opacity in the left lower lobe. The tumor size increased to 18 mm in three weeks, and he developed bloody sputum. TBLC was performed using radial endobronchial ultrasonography and fluoroscopy. An occlusion balloon and prophylactic epinephrine were used to prevent severe bleeding. Histologically, epithelioid cells with solid proliferation, various papillary lesions, and hemosiderin-laden histiocytes were observed. Immunohistochemical staining revealed that the histiocytes were positive for thyroid transcription factor-1 and vimentin, and the type II pneumocyte-like cells were positive for cytokeratin 7. The tumor was preoperatively diagnosed as a PSP; the patient underwent left basal segmentectomy, and the final diagnosis of PSP was confirmed. Conclusion We report the first case of PSP preoperatively diagnosed using TBLC. Therefore, cryobiopsy could be beneficial in the preoperative diagnosis of PSP.
Introduction
Pulmonary sclerosing pneumocytoma (PSP) is a relatively rare lung tumor and is mainly seen in women in the age group of 40-70 years [1,2]. It is typically a benign tumor; however, lymph node recurrence, pleural and bone metastases, and malignant transformation can occur [3][4][5][6][7][8]. This tumor is also known for its difficult preoperative diagnosis since it possesses four typical histological structural patterns in the same neoplasm: papillary, sclerotic, solid, and hemorrhagic [9]. Because of this histological complexity, PSP is often misdiagnosed as adenocarcinoma or carcinoid [10][11][12]. In recent years, with the prevalence of transbronchial cryobiopsy (TBLC), we are able to obtain larger specimens than previously. To the best of our knowledge, there have been no reports describing the diagnosis of PSP using TBLC.
Case report
A 43-year-old man was referred to our hospital for an abnormal chest shadow observed during a routine checkup. He had a history of smoking 10 cigarettes per day for seven years. He was asymptomatic, and no abnormal findings were observed on physical examination and laboratory investigations. Chest radiography revealed a nodule in the left lower lung field (Fig. 1A), while a computed tomography (CT) scan of the chest revealed a 14-mm heterogeneous round nodule with surrounding ground-glass opacity in the left lower lobe (Fig. 1B and C). After a three-week follow-up, the patient developed bloody sputum, and the size of the tumor increased to 18 mm. Bronchoscopy was performed using a flexible bronchoscope under local anesthesia with 2% lidocaine as a bolus to the bronchi, following an intravenous injection of midazolam and pethidine hydrochloride. Bronchoscopic images revealed a blood clot in the trachea (Fig. 1D). The lesion in the left lower lobe was approached with the aid of radial endobronchial ultrasonography (R-EBUS), but the lesion was adjacent to the probe (Fig. 1E). Subsequent needle puncture was performed under fluoroscopic guidance in order to guide the R-EBUS probe to the target lesion and also to check the bleeding tendency of the lesion. The probe was eccentrically oriented to the lesion (Fig. 1F), and there was minimal bleeding. We placed an occlusion balloon in the left B8 bronchus and administered prophylactic epinephrine before the biopsy to prevent severe bleeding. Cryobiopsy was then performed twice for 6 seconds each using a 1.9-mm-diameter cryoprobe under fluoroscopic guidance. Bleeding was well-managed, and there were no complications. Histologically, epithelioid cells with solid proliferation and a low nuclear-to-cytoplasmic ratio, and various papillary lesions covered by type II pneumocyte-like cells (Fig. 2A) were observed. Hemosiderin-laden histiocytes (Fig. 2B) were present in the superficial layer; immunohistochemical staining revealed that the histiocytes were positive for thyroid transcription factor-1 (TTF-1) (Fig. 2C) and vimentin, and the type II pneumocyte-like cells were positive for cytokeratin 7 (Fig. 2D). The Ki-67 labeling index was estimated to be approximately 5-10% in the hot spot. A diagnosis of PSP was formulated, and the patient was referred to the department of Respiratory Surgery. He underwent left basal segmentectomy of segments 8, 9, and 10, and the preoperative diagnosis was confirmed based on the surgical specimen.
Discussion
PSP is typically a benign tumor [1], which is derived from the primitive respiratory epithelium of the pulmonary alveolus, principally in type II alveolar cells [9]. Patients with PSP are usually asymptomatic and are detected incidentally during a routine checkup, while bloody sputum occurs in 8.6% of the cases [2]. PSP is known for its low preoperative diagnosis rate, and surgery is required for an accurate diagnosis and treatment of PSP. Accurate preoperative diagnosis of PSP is critical because limited resection is indicated for patients with PSP [13,14].
PSP is radiologically often described as a single solitary nodule or mass with smooth margins and with obvious enhancement on the CT scan [2,9,15]. The lesion is usually smaller than 30 mm in diameter and is often peripherally located; 44.7% of the tumors are juxtapleural or juxtafissural [2,16]. It is sometimes misdiagnosed as lung cancer, pulmonary carcinoid, pulmonary hamartoma, tuberculoma, bronchial cyst, or inflammatory nodule [17][18][19]. 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) is often useful in differentiating benign and malignant tumors; however, PSP shows various FDG accumulations [20]. It has been reported that the size of a round and oval PSP with well-defined borders was correlated with FDG accumulation [21]. In addition, symptomatic patients with PSP showed a higher maximum standardized uptake value than the asymptomatic group [21]. Actually, it is difficult to distinguish lung cancer from PSP using FDG-PET.
Bronchoscopic diagnosis is sometimes difficult when the lesion is small and peripherally located, as in the present case. The American College of Chest Physicians guidelines reported in 2013 that the sensitivity of bronchoscopy for diagnosing peripheral lung lesions <2 cm was 34% [22]. In these cases, the lesion is endoscopically invisible and moves with respiration. We used an intravenous injection of midazolam and pethidine hydrochloride to reduce the respiratory movement as much as possible. To further improve the diagnostic yield, we used R-EBUS and virtual bronchoscopy navigation (VBN) software, SYNAPSE VINCENT® (Fujifilm, Tokyo, Japan). A meta-analysis reported that the diagnostic yield for 1067 peripheral pulmonary lesions ≤ 2 cm was 60.5% when R-EBUS was used [23]. A review showed that the diagnostic yield for lung lesions ≤ 2 cm was 67.4% using VBN [24]. Even if a sample is collected, pathological diagnosis of PSP is often challenging if an inadequate amount of tissue is obtained. PSP is composed of four major histologic patterns: papillary, sclerotic, solid, and hemorrhagic [9]. When the papillary component is predominant, it is often pathologically misdiagnosed as adenocarcinoma [10]. When the solid component is predominant, it could be misdiagnosed as carcinoid [11]. These reports indicate that obtaining larger specimens is crucial; therefore, diagnosis using traditional methods such as percutaneous needle biopsy, transbronchial forceps biopsy, or transbronchial aspiration biopsy is difficult [25,26].
To ensure an accurate diagnosis, obtaining large samples is essential. However, the transbronchial approach is often difficult since PSP is derived from pulmonary epithelial cells and is less likely to be exposed to the bronchial lumen. Cryobiopsy allows larger specimens to be obtained with less disturbance compared to forceps biopsy [27]. Unlike forceps biopsy, cryobiopsy enables tissue to be obtained by the probe in a 360° manner. However, the diagnostic yield of TBLC varies depending on the location of the lesion relative to the R-EBUS probe. A previous study reported that when the lesion was adjacently, eccentrically, and concentrically oriented, the diagnostic yields of TBLC were 66.7%, 80.0%, and 85.7%, respectively [28]. In our case, the probe was initially adjacent to the lesion. The affected bronchial wall was then punctured with a needle under fluoroscopic guidance in order to place the guide sheath and the R-EBUS probe inside the tumor. A previous study of transbronchial needle aspiration through a guide sheath with endobronchial ultrasonography showed that this procedure can be performed without an excessive risk of pneumothorax or bleeding [29]. Finally, the R-EBUS probe was eccentrically oriented to the lesion. Cryoprobes are stiff and often stick to cartilage. Using fluoroscopy, we adjusted the position of the cryoprobe to match the position of the ultrasonic probe. The sizes of our two samples were 3.5 mm × 4.5 mm and 3.5 mm × 3.5 mm, respectively. The specimens were of adequate size and quality for making an accurate diagnosis even though the lesion was adjacent to the bronchial wall. Immunohistochemical staining is useful; however, no specific immunohistochemical markers have been identified thus far. PSP is often positive for TTF-1, epithelial membrane antigen, and cytokeratin 7 [30]. The mean Ki-67 labeling index of PSP is reportedly lower than that of adenocarcinoma [27]. Intraoperative frozen sections are also beneficial; however, the rate of accurate diagnosis is relatively low (44.1%) [31,32]. To maximize the diagnostic yield, it is important to collect a sample of sufficient size for immunohistochemical staining, and TBLC is thus a useful method.
In our case, there were no bleeding complications. Histologically, PSP has some hemorrhagic features, but life-threatening bleeding is rarely reported with transbronchial lung biopsy or transbronchial needle aspiration. There is a report of a transbronchial biopsy that failed due to bleeding; however, the PSP was large (13.5 cm in diameter) in that case [33]. In addition, in our case, the amount of bloody sputum was scant, and the patient did not take any antiplatelets or anticoagulants. Thus, we did not consider our patient to be at particularly high risk of bleeding during bronchoscopic procedures [34]. However, the risk of moderate-to-severe bleeding is higher in TBLC than in transbronchial lung biopsy [35]. An occlusion balloon and prophylactic epinephrine were used to prevent severe bleeding [34]. A retrospective multicenter study showed that the frequency of moderate-to-severe bleeding was reduced by using an occlusion balloon with TBLC (1.8% vs 35.7%; adjusted odds ratio, 0.02) [36]. Furthermore, we confirmed that the lesion did not have an excessive bleeding tendency when needle puncture was performed prior to the TBLC. Using these techniques, the TBLC was safely conducted.
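For context on the quoted odds ratio, the snippet below shows how a crude (unadjusted) odds ratio would be computed from the two reported bleeding frequencies; the study's adjusted value (0.02) additionally controls for covariates, so this back-of-the-envelope figure is only illustrative.

```python
def odds(p: float) -> float:
    """Convert a proportion into odds."""
    return p / (1.0 - p)

# Reported moderate-to-severe bleeding frequencies with/without the balloon [36]:
p_balloon, p_no_balloon = 0.018, 0.357
crude_or = odds(p_balloon) / odds(p_no_balloon)
print(f"crude odds ratio = {crude_or:.3f}")  # ~0.03; the adjusted OR in [36] was 0.02
```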
Cryobiopsy is considered beneficial in the preoperative diagnosis of PSP. Nonetheless, caution should be exercised in the interpretation of our report. In our case, the location of the PSP lesion adjacent to the bronchus and using needle puncture may have facilitated its diagnosis using TBLC. This implies that TBLC may be useful in cases where the target lesion is in close proximity to the bronchus. Therefore, further multicenter prospective studies are warranted to evaluate the usefulness of preoperative TBLC in diagnosing PSP.
Conclusions
We report the first case of PSP diagnosed preoperatively using TBLC. Cryobiopsy is considered beneficial in the preoperative diagnosis of PSP.
Funding
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Ethics approval and consent to participate
Not applicable.
Declaration of competing interest
None.
Acknowledgments
We would like to thank Editage (www.editage.com) for English language editing. | 2021-08-25T05:25:21.022Z | 2021-08-11T00:00:00.000 | {
"year": 2021,
"sha1": "81b235ee9577b5d25d91213215bc5d29dd1385b3",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.rmcr.2021.101494",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "81b235ee9577b5d25d91213215bc5d29dd1385b3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119649938 | pes2o/s2orc | v3-fos-license | An explicit volume formula for the link $7_3^2 (\alpha, \alpha)$ cone-manifolds
We calculate the volume of the $7_3^2$ link cone-manifolds using the Schläfli formula. As an application, we give the volume of the cyclic coverings over the link.
Introduction
Let us denote the link complement of $7^2_3$ in Rolfsen's link table by $X$. Note that it is a hyperbolic link. Hence, by the Mostow-Prasad rigidity theorem, $X$ has a unique hyperbolic structure. Let $\rho_\infty$ be the holonomy representation from $\pi_1(X)$ to $\mathrm{PSL}(2,\mathbb{C})$ and denote $\rho_\infty(\pi_1(X))$ by $\Gamma$, a Kleinian group. $X$ is a $(\mathrm{PSL}(2,\mathbb{C}), \mathbb{H}^3)$-manifold and can be identified with $\mathbb{H}^3/\Gamma$. Thurston's orbifold theorem guarantees that an orbifold $X(\alpha) = X(\alpha,\alpha)$, with underlying space $S^3$ and with the link $7^2_3$ as the singular locus of cone-angle $\alpha = 2\pi/k$ for some nonzero integer $k$, can be identified with $\mathbb{H}^3/\Gamma'$ for some $\Gamma' \subset \mathrm{PSL}(2,\mathbb{C})$; the hyperbolic structure of $X$ is deformed to the hyperbolic structure of $X(\alpha)$. For the intermediate angles whose multiples are not $2\pi$ and not bigger than $\pi$, Kojima [10] showed that the hyperbolic structure of $X(\alpha)$ can be obtained uniquely by deforming nearby orbifold structures. Note that there exists an angle $\alpha_0 \in [\frac{2\pi}{3}, \pi)$ for the link $7^2_3$ such that $X(\alpha)$ is hyperbolic for $\alpha \in (0, \alpha_0)$, Euclidean for $\alpha = \alpha_0$, and spherical for $\alpha \in (\alpha_0, \pi]$ [19,8,10,20]. For further background on cone-manifolds, the reader can consult [1,7]. Even though there are wide discussions of orbifolds, comparatively little is known about cone-manifolds. Explicit volume formulae for hyperbolic cone-manifolds of knots and links are known only in a few cases. The volume formulae for hyperbolic cone-manifolds of the knot $4_1$ [8,10,11,15], the knot $5_2$ [13], the link $5^2_1$ [16], the link $6^2_2$ [17], and the link $6^2_3$ [2] have been computed. In [9] a method of calculating the volumes of two-bridge knot cone-manifolds was introduced, but without explicit formulae. In [7,6], explicit volume formulae of cone-manifolds for the hyperbolic twist knots and for the knots with Conway notation $C(2n,3)$ are computed. Similar methods are used for computing Chern-Simons invariants of orbifolds for the twist knots and $C(2n,3)$ knots in [5,4].
The main purpose of the paper is to find an explicit and efficient volume formula for hyperbolic cone-manifolds of the link $7^2_3$. The following theorem gives the volume formula for $X(\alpha)$. Theorem 1.1. Let $X(\alpha)$, $0 \le \alpha < \alpha_0$, be the hyperbolic cone-manifold with underlying space $S^3$ and with singular set the link $7^2_3$ of cone-angle $\alpha$; $X(0)$ denotes $X$. Then the volume of $X(\alpha)$ is given by the following formula, where for $A = \cot\frac{\alpha}{2}$, $V$ (with $\operatorname{Re}(V) \le 0$ and $\operatorname{Im}(V) \ge 0$ the largest) is a zero of the Riley-Mednykh polynomial $P = P(V, A)$ for the link $7^2_3$ given below.
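As orientation for the Schläfli-formula computation invoked in Section 5, the standard variational formula for hyperbolic cone-manifolds (a well-known identity, stated here as background rather than quoted from this paper) reads, for the two-component link $7^2_3$ with both cone-angles equal to $\alpha$,

$$
\frac{d}{d\alpha}\,\mathrm{Vol}\bigl(X(\alpha)\bigr) \;=\; -\frac{1}{2}\left(\ell_\alpha^{(1)} + \ell_\alpha^{(2)}\right),
$$

where $\ell_\alpha^{(i)}$ denotes the real length of the $i$-th component of the singular locus at cone-angle $\alpha$; integrating the singular-locus lengths from $\alpha$ up to the Euclidean angle $\alpha_0$, at which the hyperbolic volume degenerates to zero, recovers the volume.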
The following corollary gives the hyperbolic volume of the $k$-fold strictly-cyclic covering [12,18] over the link $7^2_3$, denoted $M_k(X)$, for $k \ge 3$. Corollary 1.2. The volume of $M_k(X)$ is given by the following formula, where for $A = \cot\frac{\alpha}{2}$, $V$ (with $\operatorname{Re}(V) \le 0$ and $\operatorname{Im}(V) \ge 0$ the largest) is a zero of the Riley-Mednykh polynomial $P = P(V, A)$ for the link $7^2_3$. In Section 2, we present the fundamental group $\pi_1(X)$ of $X$ with slope $9/16$. In Section 3, we give the defining equation of the representation variety of $\pi_1(X)$. In Section 4, we compute the longitude of the link $7^2_3$ using the Pythagorean theorem. In Section 5, we give the proof of Theorem 1.1 using the Schläfli formula.
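Since the coefficients of the Riley-Mednykh polynomial $P(V, A)$ are not reproduced above, the sketch below only illustrates the root-selection rule stated in Theorem 1.1 — among the zeros with $\operatorname{Re}(V) \le 0$ and $\operatorname{Im}(V) \ge 0$, take the one with the largest imaginary part — for a hypothetical coefficient function `rm_coeffs(A)`; the polynomial itself must be taken from the paper.

```python
import numpy as np

def select_root(coeffs):
    """Pick the zero V with Re(V) <= 0 and Im(V) >= 0 whose imaginary part
    is largest, as prescribed in Theorem 1.1."""
    roots = np.roots(coeffs)
    candidates = [v for v in roots if v.real <= 0 and v.imag >= 0]
    return max(candidates, key=lambda v: v.imag)

def rm_coeffs(A):
    """Hypothetical placeholder: coefficients (highest degree first) of the
    Riley-Mednykh polynomial P(V, A) for the link 7^2_3, which is not
    reproduced in this copy of the paper."""
    raise NotImplementedError("insert the polynomial from the paper")

alpha = 2 * np.pi / 5            # example cone-angle in (0, alpha_0)
A = 1 / np.tan(alpha / 2)        # A = cot(alpha/2)
# V = select_root(rm_coeffs(A))  # V = cosh(rho) then enters the volume formula
```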
Link $7^2_3$

The link $7^2_3$ is presented in Figure 1. It is the same as $W_3$ from [2]. The slope of this link is $7/16$. The link with slope $9/16$ is the mirror of the link $7^2_3$. Since the volume of the link with slope $7/16$ is the same as the volume of the link with slope $9/16$, in the rest of the paper the link with slope $9/16$ is used.
The following presentation of the fundamental group of $X$ is stated in [2] with slope $7/16$.
$(\mathrm{PSL}(2,\mathbb{C}), \mathbb{H}^3)$ structure of $X(\alpha)$
Let $R = \mathrm{Hom}(\pi_1(X), \mathrm{SL}(2,\mathbb{C}))$. Given a set of generators $s, t$ of the fundamental group $\pi_1(X)$, we define a set $R(\pi_1(X)) \subset \mathrm{SL}(2,\mathbb{C})^2 \subset \mathbb{C}^8$ to be the set of all points $(h(s), h(t))$, where $h$ is a representation of $\pi_1(X)$ into $\mathrm{SL}(2,\mathbb{C})$. Since the defining relation of $\pi_1(X)$ gives the defining equation of $R(\pi_1(X))$ [21], $R(\pi_1(X))$ is an affine algebraic set in $\mathbb{C}^8$. $R(\pi_1(X))$ is well-defined up to the isomorphisms which arise from changing the set of generators. We say elements in $R$ which differ by conjugation in $\mathrm{SL}(2,\mathbb{C})$ are equivalent. A point on the variety gives the $(\mathrm{PSL}(2,\mathbb{C}), \mathbb{H}^3)$ structure of $X(\alpha)$. Let $h(s)$ and $h(t)$ be parametrized by the cone-angle $\alpha$ and a complex parameter $\rho$. Then $h$ becomes a representation if and only if $A = \cot\frac{\alpha}{2}$ and $V = \cosh\rho$ satisfy a polynomial equation [21,14]. We call the defining polynomial of the algebraic set $\{(V, A)\}$ the Riley-Mednykh polynomial for the link $7^2_3$. Throughout the paper, $h$ can sometimes be any representation and sometimes the unique hyperbolic representation.
Given the fundamental group of $X$, let $S = h(s)$ and $T = h(t)$. Then the trace of $S$ and the trace of $T$ are both $2\cos\frac{\alpha}{2}$. Lemma 3.1. Suppose $n \in \mathrm{SL}(2,\mathbb{C})$ satisfies $nS = S^{-1}n$, $nT = T^{-1}n$, and $n^2 = -I$. Proof.
From the structure of the algebraic set $R(\pi_1(X))$ with coordinates $h(s)$ and $h(t)$, we have the defining equation of $R(\pi_1(X))$. The following theorem is stated in [2, Proposition 4] with slope $7/16$.
$h$ is a representation of $\pi_1(X)$ if $V$ is a root of the Riley-Mednykh polynomial $P = P(V, A)$ given below.
We can find two $n$'s in $\mathrm{SL}(2,\mathbb{C})$ which satisfy $nS = S^{-1}n$ and $n^2 = -I$ by direct computation. The existence and uniqueness of the isometry (the involution) represented by $n$ are shown in [3, p. 46]. Since the two $n$'s give the same element in $\mathrm{PSL}(2,\mathbb{C})$, we use one of them. Hence, we may assume a fixed choice of $n$. Recall that $P$ is the defining polynomial of the algebraic set $\{(V, A)\}$ and the defining polynomial of $R(\pi_1(X))$ corresponding to our choice of $h(s)$ and $h(t)$. By direct computation, $P$ is a factor of $\operatorname{tr}(SWn) = -4i \sinh\rho \,(2V^2 + A^4 + 2A^2 - 1)\,P$. As in [2], $P$ cannot be $\sinh\rho$ or have only real roots. Similarly, $P$ cannot have only purely imaginary roots. The $P$ in the theorem is the only factor of $\operatorname{tr}(SWn)$ which is different from $\sinh\rho$ and has roots that are neither real nor purely imaginary. This $P$ is the Riley-Mednykh polynomial.
Longitude
Let $l_s = ws$ and $l_t = (t^{-1}[t,s]^2[t,s^{-1}]^2)t$. Then $l_s$ and $l_t$ are the longitudes which are null-homologous in $X$. Let $L_S = h(l_s)$ and $L_T = h(l_t)$.
Proof. The first statement follows by a direct computation; the second statement can be obtained in a similar way.
Definition. The complex length of the longitude $l$ ($l_s$ or $l_t$) of the link $7^2_3$ is the complex number $\gamma_\alpha$ modulo $4\pi\mathbb{Z}$ satisfying $\operatorname{tr}(h(l)) = 2\cosh\frac{\gamma_\alpha}{2}$.
Note that $l_\alpha = |\operatorname{Re}(\gamma_\alpha)|$ is the real length of the longitude of the cone-manifold $X(\alpha)$. We consider the following normalized line matrices of $T$ (resp. $L_T$), which share the fixed points with $T$ (resp. $L_T$),
which give the orientations of the axes of $T$ and $L_T$. Now we are ready to prove the following theorem, which yields Theorem 4.3. Recall that $\gamma_\alpha$ modulo $4\pi\mathbb{Z}$ is the complex length of the longitude $l_s$ or $l_t$ of $X(\alpha)$. The following theorem is a particular case of Proposition 5 from [2]. Proof.
where the first equality comes from [3, p. 68], the sixth equality comes from the Cayley-Hamilton theorem, and the seventh equality comes from Lemma 4.1. Therefore, we have the claimed identity. The Pythagorean Theorem 4.2 gives the following theorem, which relates the eigenvalues of $h(l)$ to $V = \cosh\rho$ for $A = \cot\frac{\alpha}{2}$.
Theorem 4.3. Recall that $l$ is the longitude. By conjugating if necessary, we may assume $h(l)$ is upper triangular. Let $L = h(l)_{11}$ and let $A = \cot\frac{\alpha}{2}$. Then the following formulae show that there is a one-to-one correspondence between the eigenvalues of $h(l)$ and $V = \cosh\rho$. Proof. By Theorem 4.2, if we solve the above equation for $L$, we obtain the stated formulae. | 2019-04-12T04:00:48.766Z | 2016-07-27T00:00:00.000 | {
"year": 2016,
"sha1": "81ace060f78f45c1c5c217a0b8eb7ab05f4d2843",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "81ace060f78f45c1c5c217a0b8eb7ab05f4d2843",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
27028665 | pes2o/s2orc | v3-fos-license | Phosphorylation of SAV1 by mammalian ste20-like kinase promotes cell death
The mammalian ste20-like kinase (MST) pathway is important in the regulation of apoptosis and the cell cycle and emerges as a novel tumor suppressor pathway. MST-induced phosphorylation of Salvador homolog 1 (SAV1), which is a scaffold protein, has not been evaluated in detail. We performed a mass spectrometric analysis of the SAV1 protein that was co-expressed with MST2. Phosphorylation was detected at Thr-26, Ser-27, Ser-36 and Ser-269. Although single or double mutations had little effect, the mutation of all four residues in SAV1 to Ala (SAV1-4A) had inhibitory effects on the MST pathway. MST2-mediated induction of SAV1-4A protein levels, SAV1-4A interaction with MST2 and the self-dimerization of SAV1-4A were weaker compared to those of wild-type SAV1. SAV1-4A inhibited MST2- and K-RasG12V-induced cell death of MCF7 cells. These results suggest that MST-mediated phosphorylation of four residues within SAV1 may be important in the induction of cell death by the MST pathway. [BMB reports 2011; 44(9): 584-589]
INTRODUCTION
The mammalian ste20 kinase (MST) pathway, which is known as the Hippo pathway in Drosophila, is a potent regulator of organ size, and deregulation of this pathway leads to tumorigenesis (1). The MST pathway negatively regulates proliferation and promotes cell death (1). The MST pathway is composed of a Ser/Thr protein kinase MST1/2, the scaffolding protein Salvador homolog 1 (SAV1 or WW45) and a Ser/Thr protein kinase Large tumor suppressor (LATS), which are homologs of the Drosophila proteins Hippo, Salvador and Warts, respectively.
There are two mammalian MST genes, MST1 and MST2, which are almost identical in their kinase domains and exhibit a high degree of homology throughout the proteins (2). Although MST1 is known to activate apoptosis in cell cultures (3,4), MST1-knockout mice showed only a mild phenotype in T cell physiology (5,6). However, the MST1/2 double knockout is embryonic lethal, suggesting a functional redundancy of MST1 and MST2 (7).
Studies in Drosophila and mammalian systems have reported that SAV1 recruits LATS to MST to regulate MST-mediated phosphorylation of LATS (15,16) and that SAV1 is required for the correct cellular localization and function of MST (17). SAV1 has domains that permit protein-protein interaction, including 2 WW domains and a coiled-coil motif in its C-terminus, which suggests that SAV1 functions as a scaffold in a multimeric complex. MST and SAV1 interact through this coiled-coil domain, which is called the SARAH domain (18). LATS binds to the WW domain of SAV1 (19). Disruption of SAV1 in mice results in embryonic lethality with epithelial hyperplasia that is accompanied by defects in terminal differentiation in various organs (17). SAV1 has been reported to be phosphorylated by MST (20), and several phosphorylation residues of SAV1 were reported using a large-scale proteomic approach (21,22). However, the exact MST phosphorylation sites within SAV1 and the effects of MST phosphorylation on SAV1 function in the MST pathway have not been previously studied.
In the current report, we identified four Ser/Thr residues in SAV1 that were phosphorylated by MST2, and using a SAV1 mutant in which all these residues were mutated to Ala, we showed that the phosphorylation of these residues within SAV1 is required for the induction of cell death by the MST pathway.
Identification of MST2 phosphorylation sites within SAV1
MST has been reported to phosphorylate SAV1; however, this phosphorylation has not been previously investigated in detail (20). MST2 increased the level of SAV1 protein and the interaction between MST2 and SAV1 (Fig. 1A), which was consistent with a previous report (20). A kinase-defective mutant of MST2 (MST2-KD) did not display these increases as efficiently as wild-type MST. These results suggest that the MST2-mediated phosphorylation of SAV1 may be partially required for the effect of MST2 on SAV1.
To identify the exact residues within SAV1 that are phosphorylated by MST2, we over-expressed human SAV1 containing a 6XHis-tag and MST2 in HEK 293 cells and purified SAV1 using a Ni²⁺ affinity column. The purified SAV1 protein was analyzed using ion-trap mass spectrometry. The results show that Thr-26, Ser-27, Ser-36 and Ser-269 in SAV1 are phosphorylated by MST2 (Fig. 1B). Phosphorylation at Thr-26 and Ser-27 within SAV1 has been reported in a large-scale proteomic analysis; however, the other sites are novel (21,22). Thr-26, Ser-27 and Ser-36 are located very close to one another, and a phospho-peptide with phosphorylated Thr-26 and Ser-27 was identified. These results indicate that Thr-26 and Ser-27 may be simultaneously phosphorylated by MST2. Ser-269 is very close to the end of the second WW domain in SAV1, and therefore, the phosphorylation at Ser-269 may have some effect on the interaction of SAV1 with other proteins that have the PPXY motif.
Single or double mutations of the phosphorylated residues into Ala or Glu had little effect on MST2-induced increases in SAV1 protein levels (Fig. 1D). Therefore, a SAV1 mutant, SAV1-4A, was created by mutating all four residues to Ala to completely inhibit phosphorylation. MST2-mediated increases in SAV1-4A protein expression were significantly lower compared to MST2-mediated increases in wild-type SAV1 protein expression. In addition, MST2 bound to SAV1-4A more weakly than to wild-type SAV1 (Fig. 1E). These data suggest that the phosphorylation at these four residues may have additive effects. These results clearly show that the MST2-mediated phosphorylation of SAV1 is essential for SAV1 function.
Phosphorylation of SAV1 is required for MST2-induced increases in protein level and self-dimerization of SAV1
To evaluate the effects of SAV1 phosphorylation, we mutated the four residues of interest into Glu to create a SAV1 mutant, SAV1-4E, that mimics the phosphorylated state of SAV1. As shown above, MST2 slightly increased SAV1-4A protein levels, but not as efficiently as MST2-induced increases in wild-type SAV1 protein. Interestingly, MST2-induced increases in SAV1-4E protein were significantly higher than those of wild-type SAV1 protein (Fig. 2A).
SAV1 protein was reported to dimerize; however, the function of this dimerization has not been investigated (20). The dimerization of wild-type SAV1 was strongly increased upon co-expression of MST2 compared to that of SAV1-4A (Fig. 2B). The dimerization of SAV1-4E was slightly stronger than that of wild-type SAV1. In the absence of MST2, SAV1-4E expression was very low and the dimerization of SAV1-4E was not observed (Fig. 2B). These results suggest that phosphorylation alone is not enough to increase protein levels and induce dimerization of SAV1 and that the catalytic activity and physical interaction of MST2 with SAV1 are essential. This result is consistent with previous reports suggesting that the physical interaction rather than catalytic activity is required for the action of MST (20) or other protein kinases such as PDK1 and MEK1 (23,24). Our results also show that the action of MST2 on SAV1 requires both phosphorylation and their direct interaction.
Phosphorylation of SAV1 is required for MST2-induced cell death of MCF-7 cells
It is well known that the MST pathway promotes cell death. Therefore, we postulated that MST-induced phosphorylation of SAV1 may be required for the induction of cell death by the MST pathway. We measured the cell viability of MCF-7 cells, which were transiently co-transfected with SAV1-WT or -4A and MST2-WT or -KD. The catalytic activity of MST2 was critical in promoting the cell death of MCF-7 cells, especially in cooperation with SAV1 (Fig. 3A). MST2-stimulated cell death was almost completely inhibited by the co-expression of SAV1-4A (Fig. 3B). These results clearly show that phosphorylation of SAV1 at the four residues of interest is essential for the induction of cell death by the MST pathway.
Phosphorylation of SAV1 is required for K-RasG12V-induced cell death
MST was reported to mediate the pro-apoptotic activity of active Ras via binding to Nore1 or Rassf1 (25,26). To investigate the role of SAV1 phosphorylation in Ras-induced cell death, we analyzed the effect of an active Ras mutant, K-RasG12V, on the cell death of MCF-7 cells. K-RasG12V augmented the MST2-induced increase in wild-type SAV1 protein (Fig. 4A). However, this effect was much weaker for SAV1-4A (Fig. 4A), suggesting that the phosphorylation of SAV1 by Ras-stimulated MST may be required for the stabilization of SAV1 protein.
The co-expression of K-RasG12V resulted in a significant increase in the cell death of MCF-7 cells that was induced by MST2 and SAV1 (Fig. 4B). The co-expression of SAV1-4A with K-RasG12V significantly inhibited this increase in cell death (Fig. 4B), indicating that MST2-induced phosphorylation of SAV1 is required for K-RasG12V-mediated cell death. Our results suggest that MST2-mediated phosphorylation of SAV1 has dual functions: to stabilize SAV1 protein and to increase the interaction of SAV1 with MST2 or with SAV1 itself. These results, together with a previous report showing that the interaction of SAV1 and MST increased their protein levels (20), suggest that the phosphorylation-induced increase of SAV1 protein may be a secondary effect of the increased interaction of SAV1 with MST2. However, SAV1 phosphorylation may directly inhibit ubiquitin ligases and proteasome-mediated degradation. The mechanism regulating the protein levels of members of the MST pathway requires further investigation.

Fig. 3. Phosphorylation of SAV1 is required for MST2-induced cell death in MCF-7 cells. (A) The catalytic activity of MST2 was required for the stimulation of cell death in MCF-7 cells. MCF-7 cells were co-transfected with SAV1 and wild-type MST2 or a catalytically inactive MST2 mutant (KD). After 72 h, cell viability was detected using the trypan blue exclusion assay. (B) The phosphorylation of SAV1 was required for MST2-mediated cell death in MCF-7 cells. MCF-7 cells were co-transfected with MST2 and wild-type SAV1 or the SAV1 mutant (4A). After 72 h, cell viability was detected using the trypan blue exclusion assay.

Fig. 4. Phosphorylation of SAV1 is required for K-RasG12V-induced cell death. (A) The phosphorylation of SAV1 was required for K-RasG12V-induced increases in SAV1 protein levels. HEK 293 cells were co-transfected with HA-K-RasG12V, MST2 and Flag-SAV1-WT or -4A. After 48 h, protein expression was detected by immunoblotting. (B) The phosphorylation of SAV1 was required for MST2- and K-RasG12V-mediated cell death in MCF-7 cells. MCF-7 cells were co-transfected with HA-K-RasG12V, MST2 and Flag-SAV1-WT or -4A. After 72 h, cell viability was detected using the trypan blue exclusion assay.
A previous report showed that SAV1 is phosphorylated by MST1 or MST2 and that the stabilization of SAV1 is independent of its phosphorylation (20). However, a careful re-examination of their data revealed that MST2-KD, but not MST1-KD, failed to increase SAV1 protein levels as efficiently as wild-type MST2, which is consistent with our results concerning MST2. These data suggest that MST2-mediated phosphorylation of SAV1 may function differently from MST1-mediated phosphorylation of SAV1. SAV1 can interact with several target proteins other than members of the MST pathway (such as MST and LATS). Therefore, the dimerization or multimerization of SAV1 may bring MST or LATS into contact with their downstream substrate proteins, which are bound to SAV1. We showed that SAV1 phosphorylation is required for MST2-induced increases in SAV1 dimerization (Fig. 2B). Therefore, SAV1 phosphorylation increases SAV1 dimerization, which may facilitate the signal transduction of the MST pathway. A single SAV1 protein may interact with MST and a downstream protein simultaneously. The phosphorylation of SAV1 increased the binding affinity of SAV1 for MST2 (Fig. 1E). The affinity of MST-bound SAV1 for a downstream target protein may be increased by phosphorylation to facilitate the signal transduction of the MST pathway.
In the current report, we identified four Ser/Thr residues within SAV1 that were phosphorylated by MST2, and showed that these phosphorylations are required for the activation of SAV1 and the induction of cell death by the MST pathway.
Cell culture and transfection
HEK 293 cells and MCF-7 cells were maintained in Dulbecco's modified Eagle's medium (Welgene, Korea) supplemented with 10% fetal bovine serum (Invitrogen, USA) and 100 units/ml of penicillin-streptomycin (Sigma, USA) at 37 °C in a humidified atmosphere with 5% CO₂. Transient transfections were performed using the Lipofectamine Plus reagent (Invitrogen, USA) or Welfect reagent (Welgene, Korea).
Identification of phosphorylation sites by mass spectrometry
The 6XHis-SAV1 and MST2 or MST2-KD constructs were over-expressed in HEK 293 cells, and the 6XHis-SAV1 proteins were purified using a Ni²⁺ column. The purified protein was separated by 10% SDS-PAGE, and the corresponding SAV1 band was in-gel digested using trypsin and analyzed using tandem mass spectrometry (MS/MS) with an LTQ linear ion trap mass spectrometer (with the help of Dr. Edward P. Feener at the Joslin Diabetes Center, Boston, USA). The assignment of MS/MS data was performed using SEQUEST software (Thermo Electron). Resultant matches were entered and compiled into a MySQL database, and proteomic computational analyses were performed using a Hypertext Preprocessor (PHP)-based program.
Immunoprecipitation and western blot analysis
The cells were lysed with cold lysis buffer [50 mM Tris-HCl (pH 7.4), 120 mM NaCl, 1% NP-40, 12 mM β-glycerophosphate, 10 mM NaF, 0.5 mM PMSF, 5 μg/ml leupeptin, 5 μg/ml aprotinin, 1 μg/ml pepstatin, and 100 μM Na3VO4]. Cell lysates were incubated with antibodies at 4 °C for 2 h, and complexes were subsequently retrieved with protein G-Sepharose beads (Amersham, UK). The immunoprecipitates were resolved by SDS-PAGE and transferred to a polyvinylidene difluoride membrane (Millipore, US). The membrane was immunoblotted with the indicated primary antibodies and visualized by ECL (Elpis Biotech, Korea).

Cell death assay

MCF-7 cells were transiently transfected with plasmids. After 72 h, the cells were harvested and viability was determined using the trypan blue exclusion method.
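As a minimal illustration of how trypan blue exclusion counts translate into the viability percentages reported in the Results, the sketch below assumes hypothetical hemocytometer counts; it is not code or data from this study.

```python
def viability_percent(unstained: int, stained: int) -> float:
    """Trypan blue exclusion: viable (dye-excluding, unstained) cells
    as a percentage of all counted cells."""
    total = unstained + stained
    return 100.0 * unstained / total if total else 0.0

# Hypothetical counts for one transfection condition:
print(f"{viability_percent(unstained=412, stained=95):.1f}% viable")
```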
Fig. 1. Identification of MST phosphorylation sites within SAV1. (A) The catalytic activity of MST2 was required for the stimulation of SAV1 expression and MST2-SAV1 interaction. HEK 293 cells were transfected with Myc-SAV1 and HA-tagged wild-type (WT) or inactive (KD) MST2. The interaction between SAV1 and MST2 was detected by immunoprecipitation with an anti-Myc antibody followed by immunoblotting with an anti-HA antibody. (B) Phospho-peptides were identified by mass spectrometry. T or S in bold characters followed by an asterisk indicates phosphorylated residues. (C) A schematic diagram of human SAV1 protein. Two WW domains (WW) and a coiled-coil domain (CC) are indicated. (D) Single or double mutation of the phosphorylation sites within SAV1 had little effect on its protein level. HEK 293 cells were transfected with HA-MST2 and various Myc-tagged SAV1 mutants as indicated. The expression of SAV1 was detected by immunoblotting with an anti-Myc antibody. (E) The phosphorylation of SAV1 was required for MST-mediated stimulation of SAV1 expression and MST2-SAV1 interaction. HEK 293 cells were co-transfected with HA-MST2 and Myc-tagged wild-type (WT) or mutant (4A) SAV1 and analyzed as described in (A).
Fig. 2. Phosphorylation of SAV1 is required for MST2-induced increases in SAV1 protein level and self-dimerization of SAV1. (A) Phosphorylation of SAV1 was required for MST2-induced increases in SAV1 expression. HEK 293 cells were transfected with HA-MST2 and various amounts (μg) of Myc-tagged SAV1 mutants. The expression profiles of SAV1 and MST2 were detected by immunoblotting. (B) The phosphorylation of SAV1 was required for the MST2-mediated self-dimerization of SAV1. HEK 293 cells were co-transfected with HA-MST2, Flag-SAV1 and Myc-tagged wild-type SAV1 (WT) or mutant SAV1 (4A or 4E). The dimerization of SAV1 was detected by immunoprecipitation with an anti-Flag antibody followed by immunoblotting with an anti-Myc antibody. | 2018-04-03T00:51:42.066Z | 2011-09-01T00:00:00.000 | {
"year": 2011,
"sha1": "c5c36066e40e76d006cd649cc823012e658a71b7",
"oa_license": "CCBYNC",
"oa_url": "http://society.kisti.re.kr/sv/SV_svpsbs03V.do?cn1=JAKO201128762648429&method=download",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "c5c36066e40e76d006cd649cc823012e658a71b7",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
115979282 | pes2o/s2orc | v3-fos-license | Tribological properties of textured stator and PTFE-based material in travelling wave ultrasonic motors
This study fabricated textures on the stator surface of a traveling wave ultrasonic motor (USM) using laser and investigated the tribological behavior of a polytetrafluoroethylene (PTFE) composite friction material and stator. Initially, the effect of textures with different densities was tested. As the results suggested, the generation of large transfer films of PTFE composite was prevented by laser surface texturing, and adhesive wear reduced notably despite the insignificant decrease in load capacity and efficiency. Next, the 100-h test was performed to further study the effects of texture. Worn surface and wear debris were observed to discuss wear mechanisms. After 100 h, the form of wear debris changed into particles. The wear mechanisms of friction material sliding against the textured stator were small size fatigue and slight abrasive wear. The wear height of friction material decreased from 3.8 μm to 1.1 μm. This research provides a method to reduce the wear of friction materials used in travelling wave USMs.
Introduction
The ultrasonic motor (USM) is a type of piezoelectric actuator that drives a rotor by the frictional force at the interface between the rotor and stator [1][2][3]. When compared with an electromagnetic motor, the USM has the advantages of compact structure, low speed, high torque, quick response, and high power density. It is widely applied in aerospace, micro-electromechanical systems, and optical precision engineering [4][5][6][7][8]. However, owing to the driving principle, the wear of the stator and rotor is inevitable, which strongly affects the service life and performance of USMs [9,10]. These tribological issues in USMs have attracted substantial attention, and many interesting results have been achieved.
Polytetrafluoroethylene (PTFE), a plastic material, exhibits excellent anti-wear property and low frictional coefficients. Even at cryogenic temperatures, PTFE possesses good wear resistance [11]. Because low values of friction coefficient and wear rate are required in travelling wave USMs, PTFE filled with additives is usually used as friction material for travelling wave USMs. Fan et al. [12] investigated the wear properties of a PTFE material used in USMs with different contents of filled potassium titanate whiskers (PTWs), and a preferable content was found. Ding et al. [13] proposed polyvinylidene fluoride composites as friction material for USMs, and the anti-irradiation property of the material was studied. Wang et al. [14] investigated the impact of fillers and counter-face topography on the wear behavior of PTFE polymers for USMs. Song et al. [15] investigated the tribological performance of a filled PTFE-based friction material for a USM under different temperature and vacuum degrees, and found that the adhesive wear was prone to take place under high vacuum degrees. Li et al. [16,17] investigated the wear properties of a PTFE composite used in USMs and found that the wear mechanism of rotor friction material is very different from that of stator friction material owing to the differences in their contact mechanisms.
Although, the studies mentioned above have presented an in-sight of the tribological problems in the actuators, only a few practical ways have been proposed to improve the tribological properties of friction materials applied in USMs.
Recently, laser surface texturing has emerged as a potential new method to improve the tribological properties of mechanical components. Laser surface texturing is used to improve the tribological properties of coatings, ceramics, and metallic materials [18][19][20][21]. The effects of laser surface texturing on the thickness of lubricant film were also investigated [22][23][24]. Gropper et al. [25] reviewed the key findings in texture design and modelling techniques, which provides an important contribution to the research of texture. Laser surface texturing was also used to research the relationship between the tactile friction and perceptual attributes [26,27].
The beneficial and detrimental effects of surface texture certainly depend on the type of tribo-pair. The same surface texture may exhibit opposite effects under different conditions. Vlădescu et al. [28,29] researched the flow behavior of a lubricant in a reciprocating contact simulating a piston ring-cylinder liner pair. They concluded that an appropriate choice of surface texture pattern is capable of reducing not only piston-cylinder liner friction but also automotive oil consumption, and that the pocket spacing on piston liners should be varied as a function of the reciprocating sliding speed. Furthermore, it was found that pockets tend to increase fluid entrainment and reduce asperity contact, but pockets at reversal tend to increase friction dramatically [30]. Braun et al. [31] investigated the tribological behavior of steel sliding pairs with dimple diameters ranging from 15 μm to 800 μm in a mixed lubrication regime in a pin-on-disk experiment. The results showed that the dimple diameters leading to the highest friction reduction significantly depend on the oil temperature.
The tribo-pairs of rotor and stator in USMs are special by virtue of the ultrasonic vibrations. How to design the texture on a stator surface to obtain better tribological properties is still unclear, and clarifying this is the purpose of this study. Dimple textures were fabricated on the stator surface of a travelling wave USM using a laser. To study the effects of laser surface texturing on the tribological performance of USMs, the speed and efficiency characteristics were tested, and the worn surfaces and wear debris were observed. This research also introduces a wear reduction method for travelling wave USMs.
Tribo-pairs and surface texturing
A travelling wave USM (USM60-2, Xi'an Chuanglian Ultrasound Technology Co., Ltd) was applied in this study. The tribo-pairs of stator and rotor are shown in Fig. 1. The stator and friction material were made of bronze and PTFE-based composite, respectively. The Shore hardness of the PTFE-based friction material was nearly 75, as tested using a Shore hardness tester. Dimples on the stator surface were fabricated using Nd: YAG laser. The electric current was set as 220 A, pulse width as 0.2 ms, and frequency as 2 Hz. The textures are shown in Fig. 2. Two types of textures were fabricated on the stator: a one-line dimple texture (texture-1L) ( Fig. 2(a)) and three-line dimple texture (texture-3L) ( Fig. 2(b)). The texture area densities of the one-line dimple and three-line dimple textures were approximately 7.95% and 23.85%, respectively.
After investigating the characteristics of the stator employing texture-1L, the stator was cleaned using an ultrasonic cleaner, and then texture-3L was fabricated. The topography of a dimple is shown in Fig. 2(c) and its profile is shown in Fig. 2(d). The height and width of the dimple were nearly 180 μm and 340 μm, respectively.
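As a rough check on the quoted densities, the sketch below computes texture area density as (number of dimples × single-dimple area) / (annular contact-band area). The dimple diameter comes from Fig. 2(d), but the dimple count and band radii are illustrative assumptions, since the exact stator contact geometry is not restated here.

```python
import math

def texture_density(n_dimples: int, dimple_diam_mm: float,
                    band_r_outer_mm: float, band_r_inner_mm: float) -> float:
    """Texture area density (%): total dimple area over annular band area."""
    dimple_area = math.pi * (dimple_diam_mm / 2) ** 2
    band_area = math.pi * (band_r_outer_mm ** 2 - band_r_inner_mm ** 2)
    return 100.0 * n_dimples * dimple_area / band_area

# Dimple width ~0.34 mm (Fig. 2(d)); count and radii below are hypothetical,
# chosen so that a one-line texture lands near the reported ~8%:
print(f"{texture_density(170, 0.34, 27.6, 26.4):.1f}% area density")
```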
Experimental setup
An experimental setup, capable of controlling the preload between the stator and rotor accurately, was used to test the performance of the USM. The experimental setup is shown in Fig. 3. The principle and structure of the experimental setup have been introduced in our previous research [16]. The experimental setup primarily consisted of dovetail rails, a micrometer head, pressure sensor, speed-torque sensor, and magnetic brake. The speed and torque of the USM can be measured under different preloads using this experimental setup.
Performance of USM
The preload applied on the stator and rotor was set to 250 N, and the frequency of the drive voltage was set to 39.6 kHz. Next, the speed-torque and efficiency-torque characteristics were tested, as shown in Fig. 4. The speed-torque characteristics of the textured stators were lower than those of the non-textured stator. An apparent drop in speed (17.09 rpm) occurred when a torque of 0.35 N·m was applied to the stator with texture-3L, as shown in Fig. 4(a). The efficiency-torque characteristics of the motor also decreased with increasing texture density. The maximum efficiencies of the non-textured stator and the stator with texture-3L differed by 4.81%. However, the stall torques of the stators with different textures were similar. When the torque was small, the efficiency of the stator with texture-3L was less than that of the stator with texture-1L; when the torque increased, the efficiency of the stator with texture-3L became higher than that of the stator with texture-1L.
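To make the efficiency comparison concrete, the snippet below converts a (speed, torque) operating point into mechanical output power and efficiency; the operating point and the electrical input power used here are hypothetical values chosen only to land near the efficiency range reported above, since input power is not given in this section.

```python
import math

def output_power_w(speed_rpm: float, torque_nm: float) -> float:
    """Mechanical output power P = T * omega, with omega in rad/s."""
    return torque_nm * speed_rpm * 2 * math.pi / 60

# Hypothetical operating point and input power (not measured values):
p_out = output_power_w(speed_rpm=90.0, torque_nm=0.35)
p_in = 16.0  # assumed electrical input power, W
print(f"P_out = {p_out:.2f} W, efficiency = {100 * p_out / p_in:.1f}%")
```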
Worn surfaces
The worn surfaces of the non-textured, texture-1L, and texture-3L stators are shown in Fig. 5. Large pieces of transfer film were found on the surface of the non-textured stator (Figs. 5(a) and 5(d)). A small amount of debris was found on the external edge of the wear scar on the stator with texture-1L (Fig. 5(b)), and only a few debris particles fell into the dimples (Fig. 5(e)). Less debris was found on the external edge of the wear scar on the stator with texture-3L (Fig. 5(c)), and some debris fell into the dimples (Fig. 5(f)). The results suggested that the texture fabricated on the stator surface could greatly decrease the transfer of friction material, which, consequently, could increase the wear of friction material.
100-h test
To further study the effects of texture, the stator with no texture and the stator with texture-3L, each sliding against friction material, were tested for 100 h. During the 100-h period, the wear debris and the surface topography of the stators were observed every 5 h from 0-40 h and every 20 h from 40-100 h. The preload applied on the stator and rotor was 250 N, the frequency of the drive voltage was 39.6 kHz, and no torque was applied to the motor except when the speed-torque characteristics were tested.
Speed and torque
Over the 100-h test, the speed of the motor was measured at a sampling frequency of 0.5 Hz.
The speed curves of the stators with no texture and with texture-3L are shown in Fig. 6. The preload was set to 250 N. In the beginning, the speed of the stator with texture-3L was approximately 10 rpm less than that of the non-textured stator. The speed of the stator with texture-3L increased marginally with time, and the difference decreased to approximately 3 rpm after 100 h. Moreover, fluctuations in the speed of the textured stator were smaller than those of the non-textured stator, which suggested that the texture on the stator surface increased the stability of the USM.
The speed-torque and efficiency-torque characteristics were tested after 5 h and 100 h, as shown in Fig. 7. After 5 h, the difference between the two speed curves decreased with increasing torque, as shown in Fig. 7(a). After 100 h, the two speed curves appeared very close, as shown in Fig. 7(b). The difference in efficiencies decreased with time. After 5 h, the maximum efficiency of the non-textured stator was 22.9% at a torque of 0.5 N·m, whereas that of the textured stator decreased to 18.2% at a torque of 0.65 N·m. The difference in maximum efficiencies was about 4.7% after 5 h, whereas the difference decreased to 3.7% after 100 h (Figs. 7(a) and 7(b)).
Evolution of surface topography
During the 100-h test, the surface topography of the stator was observed under a digital microscope (KH-8700, QUESTAR Co., Ltd.), as shown in Fig. 8. After 5 h and 20 h, large pieces of transfer film were found on the surface of the non-textured stator (Figs. 8(a) and 8(b)). Even 100 h later, some transfer films were still found, as shown in Fig. 8(c). A large transfer film could not be generated on the textured stator surface. After 5 h, only a little particle-like debris existed outside the wear scar, as shown in Fig. 8(d). After 20 h, the debris became scarcer (Fig. 8(e)), and a little fog-like debris was found in the dimples after 100 h (Fig. 8(f)). The evolution of the surface topography shows that the texture fabricated on the stator surface remarkably reduced the transfer film of friction material.
After the 100-h test, the worn surface of the friction material was observed using SEM, as shown in Fig. 9. Many cracks were observed on the friction material that slid against the non-textured stator, as shown in Fig. 9(a). At 3,000× magnification, the cracks and the debris around them were seen clearly (Fig. 9(b)). Cracks on the friction material that slid against the textured stator decreased immensely, as shown in Fig. 9(c). Although some cracks were found on the friction material, the surface was relatively flat, and no obvious debris existed around the cracks (Fig. 9(d)). According to these results, the surface and subsurface of the friction material were not seriously damaged when the friction material slid against the textured stator. Accordingly, large pieces of adhered material were avoided on the surface of the textured stator.
The thickness of the rotor was measured before and after the 100-h test using a digital micrometer with an accuracy of ±0.002 mm. Eight equally spaced positions on the rotor were selected to measure the thickness, as shown in Fig. 10. Rotor-1 slid against the non-textured stator, and rotor-2 slid against the textured stator. The average wear heights of the friction materials bonded on rotor-1 and rotor-2 were 3.8 μm and 1.1 μm, respectively. The wear decreased significantly owing to the texture fabricated on the stator. The negative values of wear may be attributed to plastic deformation.
Evolution of wear debris
The wear debris was collected and observed using SEM after being coated with platinum, as shown in Fig. 11. The debris generated after 5 h from the friction material sliding against the non-textured stator consisted of multilayer agglomerate sheets, as shown in Fig. 11(a). After 100 h of the experiment, the size of the debris decreased, and its form changed to particle-like (Fig. 11(b)). The debris generated from the friction material sliding against the textured stator was smaller than that from the friction material sliding against the non-textured stator, as shown in Fig. 11(c). The size of the debris collected from the stator surface (Fig. 8(f)) after 100 h was below several micrometers. Since the wear debris generated after 100 h from the friction material sliding against the textured stator was very scarce, an image at higher magnification (×3,000) was used to study the details, as shown in Fig. 11(d).
The debris of friction material shown in Fig. 11(b) was analyzed using energy-dispersive spectroscopy (EDS). The observed map spectrum is shown in Fig. 12. The contents of the fluorine (F), carbon (C), and copper (Cu) elements are given below the figures. Fluorine accounted for 61.1 at% and 62.5 wt%, and carbon accounted for 34.7 at% and 23.8 wt%, which suggested that most of the debris was generated from the friction material. Copper, which came from the stator only, accounted for 3.6 at% and 12.3 wt%, which suggested that there was slight wear of the bronze stator.
Effect of surface texture on performances of USM
Two types of surface textures were fabricated on the stator surface, and the speed-torque and efficiency-torque characteristics were tested and compared with those of the non-textured stator. It can be seen from Fig. 4 that the performance of the USM decreased to a small extent when the textures were fabricated on the stator surface. However, during the 100-h test, the no-load speed of the USM with the textured stator increased with time, and its speed was very close to that of the motor with the non-textured stator after 100 h (Fig. 6). After the test, the speed-torque characteristics of the textured stator and the non-textured stator were also very close, except when the torque exceeded 0.8 N·m (Fig. 7(c)). The difference in efficiency-torque characteristics decreased; however, a difference of 3.7% in maximum efficiencies still remained. These effects are primarily attributed to the friction reduction effect of the surface texture. It is known that USMs are driven by frictional force, and a reduction in frictional force decreases their speed and efficiency.
The friction coefficients of the friction material sliding against the textured and non-textured stators were measured without vibrations using the preload-controlled USM test device (Fig. 3). The magnetic brake was replaced by a direct current (DC) motor. The normal load was set to 200 N and the speed was set to approximately 30 rpm in the test. The friction coefficients of the friction material sliding against the stators with no texture, texture-1L, and texture-3L were 0.180, 0.179, and 0.175, respectively. This result confirmed that the textures fabricated on the stator surface reduced the friction.
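The friction coefficients quoted above can be related to the measured quantities through the usual annular-contact model; the sketch below assumes uniform contact pressure and hypothetical contact radii, since the exact contact-band geometry of the USM60-2 stator is not given here.

```python
def friction_coefficient(torque_nm: float, normal_load_n: float,
                         r_outer_m: float, r_inner_m: float) -> float:
    """mu = T / (F * r_eff) for a uniformly loaded annular contact, with
    r_eff = (2/3) * (r_o^3 - r_i^3) / (r_o^2 - r_i^2)."""
    r_eff = (2.0 / 3.0) * (r_outer_m**3 - r_inner_m**3) / (r_outer_m**2 - r_inner_m**2)
    return torque_nm / (normal_load_n * r_eff)

# 200 N normal load as in the test; torque and radii below are hypothetical,
# chosen so the result lands near the reported mu ~ 0.18:
print(f"mu = {friction_coefficient(0.97, 200.0, 0.0276, 0.0264):.3f}")
```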
Effect of surface texture on wear characteristics
The texture can significantly reduce the adhesive wear of the friction material, as suggested by the results shown in Fig. 5. The changes in the debris indicate changes in the wear mechanisms. The main wear mechanisms of PTFE-based materials are adhesive wear and abrasive wear under non-vibration friction conditions [32,33]. Because of the alternating contact of the stator and rotor, the main wear mechanisms of PTFE-based friction materials used in USMs are adhesive wear and fatigue wear [16]. In this study, the adhesive wear was prevented by the textures, and the main wear mechanism of the friction material changed to small-size fatigue and slight abrasive wear. The contact and wear mechanism of the friction material is shown in Fig. 13. Fatigue cracks occurred easily on the surface and subsurface of the material owing to the ultrasonic vibrations, while the internal bonding force of the material slowed crack growth. Small pieces of material were peeled off from the surface of the friction material under the influence of the adhesive force.
Furthermore, the adhesive force can accelerate crack growth. As a result, large pieces of transferred material were found on the surface of the stator (Figs. 5(a) and 5(d), and Figs. 8(a), 8(b), and 8(c)), and the wear debris appeared as multilayer agglomerate sheets (Figs. 11(a) and 11(b)). There were numerous cracks on the friction material, and the material around the cracks was warped (Figs. 9(a) and 9(b)). The adhesive force between the stator surface and the friction material decreased after dimples were fabricated on the stator surface. Fatigue cracks still occurred; however, crack propagation was limited by the texture, as shown in Fig. 13(b). No large sheets of debris were found on the textured surface (Figs. 5 and 8). At the end of the 100-h test, only small wear particles were observed on the surface (Fig. 11(d)). Only a few cracks were observed on the friction material, and the material around the cracks was comparatively smooth (Figs. 9(c) and 9(d)). The wear mechanism of the friction material sliding against the textured stator changed to small-scale fatigue and slight abrasive wear, which explains the decrease in the wear of the friction material from 3.8 μm to 1.1 μm in the 100-h test (Fig. 10).
The wear problems of PTFE composites applied in USMs may differ greatly from those of the material applied under normal conditions. One of the differences is that large pieces of transfer film should not be generated, because a sufficient amount of friction force is needed to drive the rotor. In this study, a large piece of transfer film was formed during the run-in stage when the smooth stator was used, as shown in Fig. 8. The films were peeled off and replenished, which resulted in a high wear rate of the friction material. When the textured stator was employed, the presence of a transfer film decreased remarkably and was replaced by small particles, as shown in Figs. 5 and 8. The underlying cause of this result was that the adhesive force between the stator and the PTFE composite decreased, owing to the laser surface texturing, to a value less than the cohesive strength of the material. In addition, the ultrasonic vibrations reduced the transfer-film tenacity, resulting in the removal of the transfer film in both the run-in and steady stages. Thus, when transfer-film generation was prevented by laser surface texturing, the wear rate of the PTFE composite decreased.
In the steady stage, the PTFE composite fatigued under the alternating stress, and the texture then reduced the size of the particles peeled off from the composite by weakening the adhesive force. To conclude, laser surface texturing can reduce the wear of PTFE composite applied to travelling wave USMs in both the run-in and steady stages, as experimentally demonstrated in this study.
Conclusion
In this paper, the effects of laser surface texturing on the tribological properties of the textured stator and the PTFE-based composite in a USM were studied. The main findings are summarized as follows: (1) The speed and efficiency of the USM decreased when the laser texture was fabricated on the stator surface, and the values decreased further with increasing texture density. The reason is the friction-reduction effect of the surface texture. However, after 100 h, the no-load speeds of the textured stator and non-textured stator became very close, and the difference in their maximum efficiencies decreased from 4.7% to 3.7%.
(2) In the run-in wear stage, the generation of a large transfer film of the PTFE composite was prevented by laser surface texturing, and the adhesive wear was markedly reduced. Large pieces of adhered material were not found on the surface of the textured stator, and the form of the wear debris changed to particle-like.
(3) In the steady wear stage, the laser surface texturing reduced the size of the particles peeled off from the composite by weakening the adhesive force. The wear mechanisms of the friction material were small-scale fatigue and slight abrasive wear. The wear height of the friction material sliding against the non-textured stator was 3.8 μm, which decreased to 1.1 μm after the texture was fabricated; that is, the wear height decreased by approximately 71% (a quick arithmetic check of this figure is sketched below).
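A quick arithmetic check of the reported reduction, assuming only the two wear heights given above:

```python
# Verify the ~71% wear-height reduction from the two reported values.
non_textured_um = 3.8
textured_um = 1.1
reduction = 100.0 * (non_textured_um - textured_um) / non_textured_um
print(f"Wear height reduction: {reduction:.0f}%")  # -> 71%
```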
The generation of large pieces of PTFE composite transfer film was prevented using laser surface texturing, and adhesive wear was reduced notably despite the insignificant decrease in load capacity and efficiency, so that the service life of the motor could be extended. This study introduces a wear-reduction method for travelling wave USMs. We believe that the insignificant decrease in load capacity and efficiency does not affect the applications of USMs. Our next study seeks to optimize the design of a texture that can reduce wear and increase motor performance.
"year": 2020,
"sha1": "bd5c90c0c1630a57f15cf28b712dc2ec130704d8",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40544-018-0253-3.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "1f17714aa6c0cd3b7cc00b292f678923eaf41085",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
Development of a Prediction Score for Evaluation of Extubation Readiness in Neurosurgical Patients with Mechanical Ventilation
Background: There is no widely accepted consensus on weaning and extubation protocols for neurosurgical patients, leading to heterogeneity in clinical practice and high rates of delayed extubation and extubation failure-related health complications. Methods: In this single-center prospective observational diagnostic study, mechanically ventilated neurosurgical patients with extubation attempts were consecutively enrolled for 1 yr. Responsible physicians were surveyed for the reasons for delayed extubation, and the Swallowing, Tongue protrusion, Airway protection reflected by spontaneous and suctioning cough, and Glasgow Coma Scale Evaluation (STAGE) score was developed to predict extubation success for neurosurgical patients already meeting other general extubation criteria. Results: A total of 3,171 patients were screened consecutively, and 226 patients were enrolled in this study. The rates of delayed extubation and extubation failure were 25% (57 of 226) and 19% (43 of 226), respectively. The most common reasons for extubation delay were weak airway-protecting function and poor consciousness. The area under the receiver operating characteristics curve of the total STAGE score associated with extubation success was 0.72 (95% CI, 0.64 to 0.79). Guided by the highest Youden index, the cutoff point for the STAGE score was set at 6, with 59% (95% CI, 51 to 66%) sensitivity, 74% (95% CI, 59 to 86%) specificity, 90% (95% CI, 84 to 95%) positive predictive value, and 30% (95% CI, 21 to 39%) negative predictive value. At STAGE scores of 9 or higher, the model exhibited 100% (95% CI, 90 to 100%) specificity and 100% (95% CI, 72 to 100%) positive predictive value for predicting extubation success. Conclusions: After a survey of the reasons for delayed extubation, the STAGE scoring system was developed to better predict the extubation success rate. This scoring system has promising potential in predicting extubation readiness and may help clinicians avoid delayed extubation and failed extubation-related health complications in neurosurgical patients.
[2,3] The patient's response to early initiation of weaning, followed by successful extubation, indicates effective mechanical ventilation and better outcomes [5-7]. Therefore, an efficient clinical strategy is urgently warranted to predict the right timing for effective weaning and the subsequent extubation steps, avoiding unnecessary use of mechanical ventilation and minimizing the health risks associated with failed weaning and repeated extubation or intubation.
Clinical practice guidelines for general critically ill patients recommend a well-defined, albeit imperfect, protocolized weaning and extubation procedure, including evaluation of weaning readiness, spontaneous breathing trial assessment, extubation, and consideration of prophylactic noninvasive ventilation or high-flow nasal oxygen [8,9]. However, the lack of strong clinical evidence has been a roadblock in establishing a globally accepted weaning procedure, including the extubation step [10], which often leads to heterogeneity in the clinical outcomes of mechanical ventilation, with an increasing rate of extubation failure in neurosurgical patients [11,13,14]. However, the decision to extubate usually relies on the clinical judgment of the responsible attending physician [2,15,16]. Consequently, the rate of delayed extubation is relatively higher in neurosurgical patients than in general critically ill patients [15,17]. Although several grading models with solid methods and reasonable results have been proposed [13,18,19], a predictive scoring system focusing particularly on the level of consciousness and airway-protecting function, with simplicity to perform at the bedside, is still lacking. Therefore, further research on developing strategies for effective extubation in this subset of the patient population is strongly required.
In this study, we surveyed physicians' opinions and reasons for their decision on delayed extubation in a cohort of prospectively enrolled mechanically ventilated neurosurgical patients. In addition to routine extubation practice at the physician's discretion, we developed a diagnostic scoring system for the assessment of extubation readiness, termed the Swallowing, Tongue protrusion, Airway protection reflected by spontaneous and suctioning cough, and Glasgow Coma Scale Evaluation (STAGE) score, and we evaluated the usefulness of this scoring process in predicting extubation success in neurosurgical patients.
Study Population and Ethics
Every admission was consecutively screened at two ICUs with 70-bed capacity at Beijing Tiantan Hospital, Capital Medical University (Beijing, China) from November 1, 2020, to October 31, 2021. The study protocol (KY2022-063-02) was approved by the Institutional Ethics Committee. Written informed consent was obtained from either the patients or their legal representatives. Neurosurgical patients who were subjected to invasive mechanical ventilation longer than 24 h were eligible. Neurosurgical patients were defined a priori as patients with cerebral tumor, head trauma, or subarachnoid or intracerebral hemorrhage undergoing craniotomy. Patients with extubation attempts were finally included. Excluded subjects had: (1) age less than 18 years, (2) pregnancy, (3) existing spinal cord injury, (4) extubation in association with the withdrawal of life-sustaining therapy, or (5) no extubation attempt during their ICU stay (reasons documented). There was no repetition in the patient inclusion process. This study followed the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Statement for observational studies [20].
Routine Practice of Weaning and Extubation
In our center, routine practice proceeds as a four-step daily weaning and extubating process in collaboration with the clinicians and respiratory therapists.

Step I: Checking Readiness for Initiation of the Weaning Process. The following aspects were evaluated by clinicians: (1) improvement of the cause of intubation; (2) no signs of intracranial hypertension or brain swelling; (3) hemodynamic stability; (4) positive end-expiratory pressure of 5 cm H2O or less; (5) partial pressure of arterial oxygen/fraction of inspired oxygen (PaO2/FiO2) of 200 mmHg or more; and (6) no planned surgery under general anesthesia within the ensuing 72 h. If all aspects were satisfied, the patient was moved to step II for spontaneous breathing trial assessment (a minimal checklist sketch of these criteria follows).
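For illustration only, the six step-I criteria can be expressed as a boolean checklist. The field names and example values below are assumptions for the sketch, not part of the study protocol:

```python
# A minimal sketch of the step-I weaning-readiness checklist described above.
from dataclasses import dataclass

@dataclass
class PatientState:
    cause_of_intubation_improved: bool
    intracranial_hypertension_or_swelling: bool
    hemodynamically_stable: bool
    peep_cmH2O: float
    pao2_fio2_mmHg: float
    surgery_planned_within_72h: bool

def ready_for_weaning(p: PatientState) -> bool:
    """Return True only if all six step-I criteria are satisfied."""
    return (p.cause_of_intubation_improved
            and not p.intracranial_hypertension_or_swelling
            and p.hemodynamically_stable
            and p.peep_cmH2O <= 5
            and p.pao2_fio2_mmHg >= 200
            and not p.surgery_planned_within_72h)

print(ready_for_weaning(PatientState(True, False, True, 5, 260, False)))  # True
```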
Step II: Spontaneous Breathing Trial. The spontaneous breathing trial was conducted once daily by respiratory therapists using low-level pressure support ventilation (pressure support of 8 cm H2O or less, positive end-expiratory pressure of 5 cm H2O or less) or a T-tube for 30 to 120 min. Failure of the spontaneous breathing trial might include: (1) respiratory rate (RR) of greater than 35 breaths/min for more than 5 min; (2) oxygen saturation measured by pulse oximetry (SpO2) of less than 90%; (3) heart rate (HR) of greater than 140 beats/min or a sustained change in HR of more than 20%; (4) systolic blood pressure greater than 180 mmHg or less than 90 mmHg; and (5) signs of anxiety, agitation, or diaphoresis [8,9]. If the spontaneous breathing trial was passed, the patient was moved to step III for extubation. Patients unqualified in step I or II were re-evaluated from step I on the next day.
Step III: Extubation Decision. For neurosurgical patients, extubation was performed only after reaching a consensus between ICU physicians and neurosurgeons. Otherwise, the patient was re-evaluated the next day from step I.
Step IV: Postextubation Assessment. After extubation, oxygen was delivered through Venturi masks. The flow of gas was adjusted to maintain SpO2 greater than or equal to 92%. If any sign of respiratory distress occurred, noninvasive respiratory support would be used under the supervision of the responsible physicians. Reintubation was needed when: (1) RR was greater than 35 breaths/min for more than 5 min; (2) SpO2 was less than 90%; (3) HR was greater than 140 beats/min, or there was a sustained change in HR greater than 20%; (4) PaO2 was less than 80 mmHg with a FiO2 of 50% or more; (5) PaCO2 was greater than 45 mmHg, or there was a change in PaCO2 of 20% or more after extubation, with a pH of less than 7.33; or (6) the patient exhibited signs of respiratory muscle fatigue or increased work of breathing [8].
Data Collection and Outcome Measures
Demographics and baseline data were collected for all enrolled patients. Physiologic parameters were recorded on the first day of mechanical ventilation and from the first successful spontaneous breathing trial to extubation, including vital signs, total and subscores on the Glasgow Coma Scale (the verbal component was deemed as 1), ventilator modes and parameters, 24-h fluid input and output, blood gas analysis, and the use of sedatives and analgesics. In addition, recorded clinical outcome parameters were as follows: number of extubation failures, tracheostomy rate, duration of mechanical ventilation, length of stay in ICU and hospital, nosocomial pneumonia [21-23], mortality rate, and costs. Details of data collection are shown in Supplemental Digital Content 1: Supplemental Text 1 (https://links.lww.com/ALN/D259). All patients were routinely followed up until their hospital discharge, death, or 60 days postenrollment, whichever occurred first.
Extubation Decision Survey
For each of the spontaneous breathing trial-qualified patients, a survey on the extubation decision from the respective ICU physicians and neurosurgeons was conducted by one of the investigators who was not involved in the decision-making process. The reasons for delayed extubation were documented daily as long as the patient was not extubated after a successful spontaneous breathing trial. The questionnaire for the survey of reasons for the delay can be found in Supplemental Digital Content 2: Supplemental Text 2 (https://links.lww.com/ALN/D260).
Derivation of the Model Items through the Nominal Group Technique
First, we employed the nominal group technique to carefully select model items. Subsequently, utilizing the data gathered, we assigned a value to each of these selected items through the application of the multiple logistic regression method.
Before the study, a nominal group of 17 experts from 13 provinces covering high-, middle-, and low-income regions was organized by the National Center for Healthcare Quality Management in Neurocritical Care. This group has been devoted to neurocritical care quality control since 2018 and has developed key performance indicators for neurocritical care quality control. The nominal group consisted of six neurointensivists, four general intensivists, three neurosurgeons, one neurologist, two respiratory therapists, and one neurocritical care nurse, all with working experience exceeding 15 yr.
Relevant items of consciousness and airway-protecting function were listed after the literature review and clinical consultation. After five rounds of online face-to-face nominal group meetings and one round of comments and iterative review [24], four airway-protecting function assessments consisting of swallowing, tongue protrusion, and spontaneous and suctioning cough, as well as one consciousness assessment (motor response in the Glasgow Coma Scale), reached a consensus. The nominal group methods and consensus results are presented in Supplemental Digital Content 3: Supplemental Text 3 (https://links.lww.com/ALN/D261). The assessment criteria were as follows:
(1) Swallowing
• Strong: No visible accumulation of saliva for either conscious or comatose patients
• Poor: Visible accumulation of saliva for either conscious or comatose patients
(2) Tongue protrusion
• Strong: Can protrude outside the mouth
• Poor: Cannot protrude outside the mouth
(3) Spontaneous cough
• Strong: Vigorous
• Poor: Weak or none
(4) Suctioning cough
• Strong: Vigorous
• Poor: Weak or none
(5) Motor response in Glasgow Coma Scale
• Strong: Can obey commands or localize the pain
• Poor: Withdrawal, flexion, extension, or no response to pain

After the first successful spontaneous breathing trial, assessments of the five items were performed daily by two respiratory therapists, according to the criteria shown above, and the results were recorded without informing clinicians, to avoid interfering with clinical practice. In clinical practice, the extubation decision was made based on the four-step assessment by the physicians in charge, who were blinded to the study. After the data collection phase, the STAGE model was developed by assigning specific scores to the five selected items, which were determined through the multiple logistic regression analysis.
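As a hedged illustration of how such a model can be applied at the bedside, the sketch below sums per-item weights for items rated "strong". The integer weights are placeholders, not the published Table 4 values; only the cutoffs of 6 and 9 discussed later in the text are taken from the study:

```python
# Hypothetical sketch of computing a STAGE total from the five dichotomized items.
# The per-item weights are placeholders: the actual integers come from the odds
# ratios in Table 4, which are not reproduced here.

EXAMPLE_WEIGHTS = {  # placeholder integers, NOT the published Table 4 values
    "swallowing": 2, "tongue_protrusion": 2,
    "spontaneous_cough": 2, "suctioning_cough": 2, "gcs_motor": 3,
}

def stage_score(items: dict) -> int:
    """Sum weights over items rated 'strong' (True); 'poor' (False) scores 0."""
    return sum(EXAMPLE_WEIGHTS[name] for name, strong in items.items() if strong)

assessment = {"swallowing": True, "tongue_protrusion": False,
              "spontaneous_cough": True, "suctioning_cough": True, "gcs_motor": True}
score = stage_score(assessment)
print(score, "extubation favored" if score >= 6 else "consider delay")  # 9 extubation favored
```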
Definitions of Delayed Extubation and Extubation Failure
Timely extubation was defined as extubating within 24 h of the first successful spontaneous breathing trial; otherwise, the extubation was recognized as delayed [15,25]. Extubation failure was defined as reintubation within 72 h of a failed extubation [25]; otherwise, it was deemed an extubation success [2,15].
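These two definitions reduce to simple timestamp comparisons; a minimal sketch (with illustrative field names) follows:

```python
# A minimal sketch of the two outcome definitions given above.
from datetime import datetime, timedelta
from typing import Optional

def is_delayed(first_successful_sbt: datetime, extubation: datetime) -> bool:
    """Delayed if extubation occurs more than 24 h after the first successful SBT."""
    return extubation - first_successful_sbt > timedelta(hours=24)

def is_failure(extubation: datetime, reintubation: Optional[datetime]) -> bool:
    """Failure if reintubation occurs within 72 h of extubation; otherwise success."""
    return reintubation is not None and reintubation - extubation <= timedelta(hours=72)

sbt = datetime(2021, 3, 1, 9, 0)
ext = datetime(2021, 3, 3, 10, 0)
print(is_delayed(sbt, ext), is_failure(ext, None))  # True False
```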
Statistical Analysis
Parametric comparisons were performed between the extubation success and failure groups. Continuous variables were tested for normal distribution and presented as mean and SD or median (25th, 75th percentiles), as appropriate. Comparisons of continuous variables were performed by Student's t test in the case of a normal distribution or by the Mann-Whitney U test for a non-normal distribution. Categorical variables were presented as numbers and percentages and analyzed by the chi-square or Fisher's exact test. The missing data in our study were minute ventilation, tidal volume (VT), and rapid shallow breathing index for the 141 patients using the T-tube on the extubation day. Each of the three parameters had 141 missing data points. The missing data rate was 62% (141 of 226) of all the enrollments. All three parameters had 62% (113 of 183) and 65% (28 of 43) missing data rates in the extubation success and failure groups, respectively. Considering the equal distribution of missing data, we chose not to replace them.
A multiple logistic regression model, including the five items derived through the nominal group technique, was constructed to predict extubation success. Results are presented with an odds ratio and 95% CI. The score for each item was given to the nearest integer based on the weighting of the odds ratio values, thus creating the STAGE model [13,18]. The ability of the score to predict extubation success was evaluated by receiver operating characteristic curve analysis by calculating the area under the receiver operating characteristics curve (AUC). The calibration curve was developed to assess calibration ability [26]. Youden's index, sensitivity, specificity, and positive and negative predictive values were calculated at different cutoff points of the receiver operating characteristic curve. We used the bootstrap method to perform internal validation of the score. From the original data set, multiple samples were randomly drawn with replacement 1,000 times. The AUC of the score was corrected to avoid overoptimism [27]. Furthermore, we performed two sensitivity analyses. First, we sequentially removed one, two, three, and four items from the model and repeated the multiple logistic regression analyses. Then, we compared the AUC for each modification with that of the original model. Second, predictive values were compared between the STAGE score and the Glasgow Coma Scale alone for all the patients included. We also divided patients into subgroups to test the prediction potential of our scoring system based on the extubation timing (timely or delayed) and the corresponding motor response in the Glasgow Coma Scale (5 or greater, or less than 5).
Data management and analyses were conducted using Stata v15.0 software (StataCorp, USA) and R 4.1.2 (R Foundation for Statistical Computing, Austria) with the val.prob package. A two-sided P value of less than 0.05 was considered statistically significant. A data analysis and statistical plan was written and filed with a private entity before the data were accessed.
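For readers who prefer a concrete picture of this pipeline, the sketch below reproduces the same steps (ROC AUC, Youden-index cutoff, bootstrap resampling with replacement) in Python on simulated data. The study itself used Stata and R; nothing below is study data:

```python
# Illustrative sketch: ROC AUC, Youden-index cutoff, and bootstrap internal
# validation on simulated (not study) data.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)               # 1 = extubation success (simulated)
score = y * 2 + rng.normal(0, 2, 200)     # simulated STAGE-like score

auc = roc_auc_score(y, score)
fpr, tpr, thr = roc_curve(y, score)
j = tpr - fpr                             # Youden's index at each threshold
best = np.argmax(j)
print(f"AUC={auc:.2f} cutoff={thr[best]:.2f} sens={tpr[best]:.2f} spec={1 - fpr[best]:.2f}")

# Bootstrap internal validation: resample with replacement 1,000 times.
boot_aucs = []
for _ in range(1000):
    idx = rng.integers(0, len(y), len(y))
    if len(np.unique(y[idx])) == 2:       # AUC needs both classes present
        boot_aucs.append(roc_auc_score(y[idx], score[idx]))
print(f"bootstrap AUC mean={np.mean(boot_aucs):.2f}, "
      f"95% CI=({np.percentile(boot_aucs, 2.5):.2f}, {np.percentile(boot_aucs, 97.5):.2f})")
```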
Results
A total of 3,171 patients were screened consecutively during 1 year, from November 2020 to October 2021. Among those, 334 adult neurosurgical patients with mechanical ventilation for more than 24 h were eligible. According to the inclusion criteria, we excluded 2,945 patients, including 108 patients who met the readiness criteria of the weaning process but never had extubation attempts during the ICU stay (fig. 1). Ultimately, 226 neurosurgical patients undergoing extubation were included in the final analysis. The baseline characteristics of the enrolled patients are summarized in table 1.
Delayed extubation occurred in 25.2% (57 of 226) of enrollments, for a total of 160 days. The median (25th, 75th percentiles) time of delay was 2 (1 to 4) days, ranging from 1 to 8 days (fig. 2). A total of 20 ICU physicians and 30 neurosurgeons were surveyed. Of the 160 days of delayed extubation, 131 (82%) days were delayed solely by the ICU physicians, 13 (8%) by neurosurgeons, and 16 (10%) by both sides. According to the questionnaire of the extubation decision survey, the top two reasons provided by each side were the same: weak airway-protecting function and poor consciousness (fig. 3). No significant difference was found in the rate of extubation failure between the timely and delayed extubation groups (17.8% vs. 22.8%, P = 0.437).
Compared with the extubation success group, the failure group presented a higher prevalence of past cerebrovascular diseases, a lower Glasgow Coma Scale, and a higher Acute Physiology and Chronic Health Evaluation (APACHE) II score at ICU admission, as well as a longer duration of mechanical ventilation before the initiation of the weaning process and the first successful spontaneous breathing trial (table 1). On the extubation day, higher RR and rapid shallow breathing index, lower Glasgow Coma Scale and motor response in the Glasgow Coma Scale, and poorer airway protection functions were observed in the extubation failure group (table 2). In addition, these patients exhibited a higher rate of tracheostomy and pneumonia, a longer duration on mechanical ventilation, an extended ICU or hospital length of stay, and increased healthcare costs (table 3).
Dichotomized swallowing, tongue protrusion, spontaneous cough, suctioning cough, and motor response in the Glasgow Coma Scale were included in the multiple logistic regression analysis. A scoring system with weighting according to odds ratios is shown in table 4. The AUC for the total STAGE score to predict extubation success was 0.72 (95% CI, 0.64 to 0.79; fig. 5). After internal validation using the bootstrap, the AUC was 0.71 (95% CI, 0.64 to 0.79). The receiver operating characteristic curve of the STAGE score after internal validation, with the cutoff point, sensitivity, specificity, and positive and negative predictive values, is available in Supplemental Digital Content 4: Supplemental Figure 1 (https://links.lww.com/ALN/D262). The calibration curve demonstrated a high goodness of fit between predicted probability and observed proportion (Supplemental Digital Content 5: Supplemental Figure 2, https://links.lww.com/ALN/D263).
In the sensitivity analyses, all AUC values in the reconstructed models with the sequential removal of one, two, three, and four items were lower than that of the original model including all five items of the STAGE score (Supplemental Digital Content 7: Supplemental Table 2, https://links.lww.com/ALN/D265). In addition, the STAGE score showed better predictive value in the subgroup with high motor response in the Glasgow Coma Scale (AUC = 0.76 in patients with motor response in the Glasgow Coma Scale of 5 or higher; P = 0.020). No significant differences were found between the timely and delayed extubation groups (fig. 7, A and B). Of all the patients included, the STAGE score showed superior predictive value compared with the Glasgow Coma Scale alone (AUC = 0.72 for the STAGE score and 0.58 for the Glasgow Coma Scale; P = 0.024). The AUC of the STAGE score for patients with a motor response in the Glasgow Coma Scale of less than 5 was 0.56 (95% CI, 0.23 to 0.89).
Discussion
In this diagnostic study, neurosurgical patients undergoing the weaning process and extubation attempts were investigated. The major findings included the following: (1) decisions of delayed extubation were made in one-fourth of patients, mainly by the ICU physicians; (2) the STAGE scoring system, combining the airway-protecting function and consciousness assessment results, was developed to predict the success rate of extubation; and (3) when the cutoff point of the STAGE score was set at 6, it could predict extubation success and exclude extubation failure with acceptable overall value, while a STAGE score of 9 or higher might have a better probability of predicting extubation success.
Limited data on weaning and extubation of neurosurgical patients lead to high rates of decisions to delay. Here, we defined delayed extubation with a time window of 24 h after a successful spontaneous breathing trial, for the reason
that extubation assessment and decision-making were performed every 24 h in our routine practice, as well as based on the studies of McCredie et al. [15] and Taran et al. [25]. The rate of delayed extubation in our cohort (25%) was comparable to that in other brain-injured patients (27 to 30%) but higher than that of patients admitted to general ICUs [25]. Although the rate of delayed extubation was higher in this cohort, the reasons for nonextubating decisions by the physicians were poorly reported. Our questionnaire survey revealed that consideration of the airway-protecting function and consciousness were the major determinants in extubation decision-making. Compared with general ICU patients, due to severe brain injury or some special lesions in the brainstem, neurosurgical patients present with much more complicated conditions of consciousness and airway clearance capacity. Even with a successful spontaneous breathing trial, there are possibilities of extubation failure, which is consistent with the recommendations [10]. Interestingly, we found that delayed extubation events were mostly decided by the ICU physicians (82%) and not the neurosurgeons (fig. 3). Neurosurgeons tend to prioritize neurologic function related to the surgical aspect, whereas ICU physicians might also consider other factors such as the high possibility of airway obstruction, potential decreased respiratory function or consciousness in respiratory or intracranial infection, and the workloads during holidays, as shown in figure 3.
Eleven patients with successful spontaneous breathing trials underwent direct tracheostomy for different reasons (fig. 1). Some were due to persistently low consciousness or weak airway function, while some patients and families chose direct tracheostomy instead of waiting for extubation, to shorten the length of stay in the ICU or hospital and expedite the transfer of patients to local primary institutions, where patients could receive care from their family members and benefit from the lower cost of living in another city. We are not sure of their actual extubation outcomes had they been extubated, and we excluded them from our study [15,19]. The extubation failure rate (19%) of the final enrollments was in the range of 10 to 38% reported by previous studies [3,14,16,19,28]. However, whether the delay would affect extubation failure remains to be discussed.

Table 2 notes: The continuous data are presented as means ± SD for normal distribution and median (25th, 75th percentiles) for non-normal distribution. *On the extubation day, there were 141 patients using the T-tube; therefore, minute ventilation, VT, and rapid shallow breathing index each had 141 missing data points. The missing data rate was 62% (141 of 226) of all the enrollments. All three parameters had equal missing data distribution, with 62% (113 of 183) and 65% (28 of 43) missing data rates in the extubation success and failure groups, respectively. FiO2, fractional inspired oxygen tension; HR, heart rate; MAP, mean arterial pressure; PEEP, positive end-expiratory pressure; RR, respiratory rate; VT, tidal volume.

Coplin et al. [17]
found that extubation failure rates did not vary between the timely and delayed extubation groups, which was consistent with our findings. Since prolonging the duration of mechanical ventilation alone, even after meeting the extubation criteria, might not benefit neurosurgical patients, identifying an objective evaluation strategy is of great importance. Due to the scarcity of evidence and the variation in extubation assessment practices for neurosurgical patients, extubation decisions have often been subjectively determined by physicians. In our study, we sought to address this issue by employing the nominal group technique to minimize deviations in item selection. This approach allowed us to incorporate the perspectives of a broader community in a democratic and transparent manner [24,29]. Moreover, the statistical significance of all five selected items between the extubation success and failure groups, as demonstrated by chi-square tests, enhances the credibility of our selection process. We proceeded to assign scores to each item based on the results of the multiple logistic regression analysis, resulting in the formation of our final model. The STAGE score exhibited favorable values for both the AUC and the calibration curve, indicating acceptable prediction and calibration capabilities.
According to the highest Youden index, the cutoff point of the STAGE score was set at 6 to predict rates of extubation success and failure. For patients with STAGE scores greater than or equal to 6 who already meet the other extubation readiness criteria, clinicians could extubate in a timely manner to reduce the chance of any unnecessary delay. Notably, a STAGE score greater than or equal to 9 revealed promising potential in predicting extubation success (100% specificity and positive predictive value), indicating immediate readiness for extubation. However, an increase in the wait time for the score to reach 9 might be associated with more delayed extubation. Therefore, the extubation decision should be carefully weighed against the advantage of a higher extubation success rate and the disadvantage of prolonged mechanical ventilation time.
To date, several evaluation batteries have been investigated to successfully predict extubation readiness. In the Extubation strategies in Neuro-Intensive care unit patients and associations with Outcomes (ENIO) study [19], the AUCs for predicting extubation success were both 0.79 in two models including 20 and 7 variables, respectively. It was the largest multicenter study with external validation, and the results were highly reliable for extrapolation [19]. However, this model was somewhat difficult to perform at the bedside. In addition, some simplified models have been developed. The model of Godet et al. [13], including cough, deglutition, gag reflex, and Coma Recovery Scale-Revised visual, had an AUC of 0.82. They considered cough as one parameter, whether spontaneous and/or during suctioning [13]. In our clinical practice, we have observed that extubation success rates differed between patients presenting with either one or both. Since these two reflexes have different brain circuits and motor commands in the central nervous system [30], the two items were separated in our study. The Visual pursuit, Swallowing, Age, Glasgow for Extubation (VISAGE) score by Asehnoune et al. [18], which had an AUC of 0.75, consisted of age, visual pursuit, swallowing, and the Glasgow Coma Scale. Different from these models, considering that some parameters with clinical significance might be missing
without reaching "statistical significance," the five items of the STAGE score were selected based on experts' opinions rather than statistical results alone. Whether the Glasgow Coma Scale alone could affect the extubation outcome remains controversial. Although several studies suggest that a low Glasgow Coma Scale may be an independent risk factor for extubation failure [15,18,28,31,32], some studies report different opinions. Coplin et al. [17] report that 80% (39 of 49) of enrolled subjects with Glasgow Coma Scale scores of 8 or lower and 91% (10 of 11) with Glasgow Coma Scale scores of 4 or lower underwent extubation successfully. The study of Manno et al. [33] demonstrates that for comatose patients with adequate airway function and a successful spontaneous breathing trial, it might be safe to extubate early, which is in agreement with our results. In this study, 38 comatose patients had STAGE scores greater than or equal to 6. Although lacking spontaneous cough and tongue protrusion, these patients had strong swallowing and suctioning cough to clear secretions, with an acceptable rate (33 of 38, 87%) of extubation success.
Types of brain injury might affect weaning and extubation outcomes due to impaired airway protection or consciousness level. In our study, heterogeneity of extubation failure rates could be seen among patients with head trauma (29%), cerebral tumor (20%), and subarachnoid or intracerebral hemorrhage (15%). Further research is required to clarify the relationship between different types of brain injury and extubation outcomes.
Our study also suffers from several limitations. First, this is a single-center study with internal validation; further studies with external validation are needed in the future. Second, the items of the STAGE score were obtained through the nominal group technique. All five items showed statistically significant differences in group comparisons, enhancing the credibility of our selection process, and our model demonstrated acceptable prediction and calibration values. While acknowledging the potential for selection bias in our study, we believe these findings might offer a new clue for developing models using the nominal group technique, and we look forward to future research that evaluates the feasibility of this method in diagnostic predictive studies. Third, delayed extubation was a subjective decision. Different clinicians might determine timely or delayed extubation for different reasons based on their personal clinical experience. Since the rates of delayed extubation and extubation failure were consistent with previous studies, we assumed that our results might be convincing. Fourth, sample size estimation was not conducted during the study design, and patients were enrolled consecutively for 1 year. After finishing our study, we read the methods proposed by Riley et al. [34] and calculated the sample size, which is shown in Supplemental Digital Content 8: Supplemental Text 4 (https://links.lww.com/ALN/D266). Fifth, in our daily practice, we perform cuff leak tests only for patients at higher risk of upper airway obstruction according to the guideline [10]. Therefore, we did not include these results in our study.
Conclusions
The rates of delayed extubation and extubation failure are high in critical neurosurgical patients. Based on the most common reasons for extubation delay, we developed the STAGE scoring system by combining the assessment points of airway-protecting function and level of consciousness. The STAGE score could further guide the decision on a patient's extubation readiness and may help clinicians minimize extubation failure-associated long-term adverse consequences.
Fig. 2. Delayed extubation days from the first successful spontaneous breathing trial to extubation.
Fig. 3. Reasons for the delay provided by the intensive care unit (ICU) physician and neurosurgeon groups.
Fig. 5. The receiver operating characteristic curve of the Swallowing, Tongue protrusion, Airway protection reflected by spontaneous and suctioning cough, and Glasgow Coma Scale Evaluation (STAGE) score to predict extubation success rate. The area under the receiver operating characteristics curve is 0.72 (95% CI, 0.64 to 0.79).
Fig. 6. Extubation success rates in different Swallowing, Tongue protrusion, Airway protection reflected by spontaneous and suctioning cough, and Glasgow Coma Scale Evaluation (STAGE) score groups.
Fig. 7. Comparison of areas under the receiver operating characteristics curve (AUCs) for the Swallowing, Tongue protrusion, Airway protection reflected by spontaneous and suctioning cough, and Glasgow Coma Scale Evaluation (STAGE) score in different subgroups. (A) Comparison of AUCs for the STAGE score between the timely and delayed extubation groups (P = 0.720). (B) Comparison of AUCs for the STAGE score between patients with motor responses in the Glasgow Coma Scale less than 5 and those with motor responses in the Glasgow Coma Scale 5 or greater (P = 0.020).
Table 1. Baseline Characteristics of Extubation Success and Failure Groups. The continuous data are presented as means ± SD for normal distribution and median (25th, 75th percentiles) for non-normal distribution. ICU, intensive care unit; APACHE, Acute Physiology and Chronic Health Evaluation score.
Table 2. Comparison of Parameters on Extubation Day between Extubation Success and Failure Groups.
Table 3. Comparison of Clinical Outcomes between Extubation Success and Failure Groups.
Table 4. Results of Multiple Logistic Regression Analysis Associated with Extubation Success. STAGE, Swallowing, Tongue protrusion, Airway protection reflected by spontaneous and suctioning cough, and Glasgow Coma Scale Evaluation score.
"year": 2023,
"sha1": "c72a7c63871788d8c4f8d668281af915b5dbe439",
"oa_license": "CCBYNCND",
"oa_url": "https://pubs.asahq.org/anesthesiology/article-pdf/doi/10.1097/ALN.0000000000004721/690557/aln.0000000000004721.pdf",
"oa_status": "HYBRID",
"pdf_src": "WoltersKluwer",
"pdf_hash": "bbdd4473575dd39c685ff06ca9965746569adda1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Pentapeptide-Zinc Chelate from Sweet Almond Expeller Amandin Hydrolysates: Structural and Physicochemical Characteristics, Stability and Zinc Transport Ability In Vitro
To promote the application of almond expellers, sweet almond expeller globulin (amandin) was extracted for the preparation of bioactive peptides. After dual enzymatic hydrolysis, Sephadex G-15 gel isolation, reverse-phase high-performance liquid chromatography purification and ESI-MS/MS analysis, two novel peptides Val-Asp-Leu-Val-Ala-Glu-Val-Pro-Arg-Gly-Leu (1164.45 Da) and Leu-Asp-Arg-Leu-Glu (644.77 Da) were identified in sweet almond expeller amandin hydrolysates. Leu-Asp-Arg-Leu-Glu (LDRLE) of excellent zinc-chelating capacity (24.73 mg/g) was selected for preparation of peptide-zinc chelate. Structural analysis revealed that zinc ions were mainly bonded to amino group and carboxyl group of LDRLE. Potential toxicity and some physicochemical properties of LDRLE and Val-Asp-Leu-Val-Ala-Glu-Val-Pro-Arg-Gly-Leu (VDLVAEVPRGL) were predicted in silico. The results demonstrated that both LDRLE and VDLVAEVPRGL were not toxic. Additionally, zinc solubility of LDRLE-zinc chelate was much higher than that of zinc sulphate and zinc gluconate at pH 6.0–10.0 and against gastrointestinal digestion at 37 °C (p < 0.05). However, incubation at 100 °C for 20–60 min significantly reduced zinc-solubility of LDRLE-zinc chelate. Moreover, the chelate showed higher zinc transport ability in vitro than zinc sulphate and zinc gluconate (p < 0.05). Therefore, peptides isolated from sweet almond expeller amandin have potential applications as ingredient of zinc supplements.
Introduction
About one third of the global population, especially pregnant women and children, is zinc deficient [1]. Zinc deficiency can cause anorexia, an immunocompromised state, growth retardation and cognitive impairment, especially in infants and children [2]. In recent years, food protein-derived peptide-zinc chelate has received attention because it is more easily absorbed and more stable against gastrointestinal digestion and interference by other nutrients. Moreover, peptide-zinc chelate has no side effects in comparison with other inorganic or organic zinc supplements [3]. Furthermore, some peptide-zinc chelates showed healthcare benefits for humans, such as a hypolipidemic effect, resistance to radiation and a hypoglycemic effect [4]. It is thought that, regarding bioavailability in vivo, the zinc fortification effect of peptide-zinc chelate is related to absorption mechanisms, safety, physicochemical properties, and stability under different conditions and against the gastrointestinal barrier [5,6]. Some gastrointestinal proteases can degrade the amino acid sequence of bioactive peptides and cause collapse of the peptide-zinc chelate structure [7]. Moreover, some food-processing technologies such as thermal treatment, acid or alkali treatment, and reaction with sugar, salt or other nutrients can reduce the zinc solubility and absorption rate of peptide-zinc chelate [8]. Additionally, potential allergenicity or toxicity means that a peptide-zinc chelate is not fit for use in the food or pharmaceutical industries [9]. Apart from these, physicochemical properties, especially hydrophilicity and hydrophobicity, significantly affect the zinc-chelating ability of peptides and the applications of peptide-zinc chelate in various food systems [10]. Although an increasing number of studies have focused on the preparation, absorption mechanisms and in vivo bioavailability of food-derived peptide-zinc chelates, few data have referred to the physicochemical properties, safety and stability of peptide-zinc chelate.
Sweet almond (Prunus Amygdalus dulcis) expeller is the main byproduct of sweet almond oil manufacture and contains a high level of protein (around 37 g/100 g) [11]. The annual production of sweet almond is about 1.7 million tons worldwide [12]. Moreover, the yield of sweet almond expeller has increased with increasing worldwide almond production and demand for almond oil. Almond protein has a relatively desirable amino acid composition and good emulsifying, foaming and gelling properties [13][14][15]. In addition, some bioactive peptides such as antioxidant, hypoglycemic and hypotensive peptides have been identified in almond protein hydrolysates [16][17][18]. A previous study revealed that globulin (amandin) is the predominant fraction in almond protein (accounting for 65-70 g/100 g). However, bitter almond amandin has allergenic potential [19]. A preliminary experiment in this study found that sweet almond amandin possessed good zinc-chelating ability (16.32 mg/g). The main reason is that sweet almond amandin is a typical hexamer (360 kDa) consisting of six subunits linked by disulfide bonds [20]. Moreover, amandin is rich in acidic amino acids (Glu and Asp, 27.37 and 9.28 g/100 g, respectively) and sulfur amino acids (Met and Cys, 2.67 g/100 g) [18]. The disulfide bond, the acidic amino acids (Asp and Glu, both of which contain a γ-carboxyl group) and the sulfur amino acids (containing sulfhydryl groups) are all ideal chelating sites for metal ions [21]. Therefore, the objectives of the current study were to (i) identify peptides of excellent zinc-chelating capacity from sweet almond expeller amandin hydrolysates (SAEAH) and predict, in silico, the safety and physicochemical properties of the identified peptides; (ii) study the optimum preparation conditions, structural characteristics and zinc transport capacity of a sweet almond expeller peptide-zinc chelate; and (iii) investigate the stability of the peptide-zinc chelate against different food-processing conditions and gastrointestinal digestion.
Isolation and Purification of Peptides of Excellent Zinc-Chelating Ability
After digestion with dual enzymes (Flavourzyme:Alcalase = 1:2), the degree of hydrolysis of sweet almond expeller amandin was 32.67% ± 4.56%, which was consistent with the result of Li et al. [22]. The zinc-chelating ability of SAEAH was 14.32 ± 1.32 mg/g (Figure 1). After ultrafiltration with a 0.45 µm membrane and Sephadex G-15 gel purification, six major subfractions were isolated from SAEAH. SAEAH-4 showed the highest zinc-chelating ability among these subfractions, so it was pooled, freeze-dried and analysed using reverse-phase high-performance liquid chromatography. The RP-HPLC profile of SAEAH-4 is shown in Figure 2. Four major subfractions were isolated from SAEAH-4. Since SAEAH-4-B exhibited better zinc-chelating ability than SAEAH-4-A, SAEAH-4-C and SAEAH-4-D (p < 0.05), SAEAH-4-B was pooled, freeze-dried and used for analysis of the peptide sequences.
Characteristics of Peptide Sequence
According to the mass spectrometry data, the peptides Leu-Asp-Arg-Leu-Glu and Val-Asp-Leu-Val-Ala-Glu-Val-Pro-Arg-Gly-Leu were identified in SAEAH-4-B. The molecular weights of these peptides are shown in Table 1. The primary and secondary ESI-MS/MS spectra of the peptide LDRLE are shown in Figure 3. LDRLE showed excellent zinc-chelating ability (24.73 mg/g), while VDLVAEVPRGL exhibited very poor zinc-chelating ability (4.33 mg/g). Therefore, LDRLE was selected for preparation of the peptide-zinc chelate.
Studies on the structure-chelating ability relationship of oligopeptides revealed that zinc ions can effectively bond to groups of relatively strong negative polarity in peptides, such as the γ-carboxyl group (-COOH) and the ε-amino group (-NH2) [21]. Moreover, the free sulfhydryl of Cys residues and the guanidino group of Arg residues in peptides are also main chelating sites for zinc ions [23]. In the case of LDRLE, the Asp, Glu and Arg residues were mainly responsible for its excellent zinc-chelating capacity. Although VDLVAEVPRGL contained Asp and Glu residues, its larger molecular weight (1164.45 Da) reduced its zinc-chelating ability. Previous studies found that short-chain peptides always showed higher zinc-chelating ability than peptides of larger molecular weight [2,24]. Moreover, large peptides cannot pass through the gastrointestinal barrier to exert activity in vivo [10].

Table 1 notes: EDTA and glutathione were used as positive controls for zinc-chelating capacity and antioxidant activity, respectively. Peptide sequences were checked against the National Center for Biotechnology Information (NCBI). Physicochemical characteristics and potential toxicity were predicted separately using the AHTPDB database (http://crdd.osdd.net/raghava/ahtpdb/, accessed on 21 April 2022) and the ToxinPred database (www.imtech.res.in/raghava/toxinpred/, accessed on 5 May 2022); the toxicity prediction was "Non-Toxin" for all listed peptides. ND: not measured. Different lowercase letters in the same line denote significant difference (p < 0.05).
Physicochemical Characteristics and Toxicity Analysis of SAEAH Peptides
As shown in Table 1, the hydrophilicity of LDRLE (1.08) was much higher than that of VDLVAEVPRGL (p < 0.05), mainly attributed to its high content of hydrophilic amino acids (60%, Table 1). Moreover, the high hydrophilicity was responsible for the excellent zinc-chelating capacity of LDRLE (24.73 mg/g, Table 1). This is because polar amino acids such as Glu, Asp and Arg in peptides can chelate zinc ions through ionic bonds or negative-charge attraction [25]. The isoelectric point (pI) of LDRLE was 4.38, mainly attributed to its content of Asp (pI of 2.97) and Glu (pI of 3.22). At the isoelectric point, the net charge on the peptide surface is zero and peptides tend to clump together, thereby reducing the electrostatic attraction between peptides and zinc ions and resulting in a decrease in the adsorption of zinc ions on peptides [10]. Moreover, the high amphiphilicity of LDRLE (0.74) means that it has potential applications in emulsion food systems [7]. In addition, the in silico prediction demonstrated that LDRLE and VDLVAEVPRGL are not toxic peptides. However, more in vitro and in vivo studies regarding the safety of these peptides are needed.
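As a rough cross-check of these descriptors, the sketch below computes molecular weight, isoelectric point and the GRAVY hydropathy index for both peptides with Biopython's ProtParam module. Note that Table 1 was derived from the AHTPDB prediction tools, and GRAVY is a hydrophobicity scale rather than the hydrophilicity score quoted above, so the values are comparable only qualitatively:

```python
# Illustrative sketch: peptide descriptors via Biopython's ProtParam.
# Values may differ slightly from Table 1, which used the AHTPDB tools.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

for seq in ("LDRLE", "VDLVAEVPRGL"):
    pep = ProteinAnalysis(seq)
    print(f"{seq}: MW={pep.molecular_weight():.2f} Da, "
          f"pI={pep.isoelectric_point():.2f}, GRAVY={pep.gravy():.2f}")
```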
Scanning Analysis with Ultraviolet Wavelength
As shown in Figure 4, the ultraviolet absorption peaks of LDRLE and ZnSO4 were at 200 and 195 nm, respectively. The ultraviolet absorption peak of LDRLE moved from 200 nm to 220 nm after zinc chelation. This redshift of the absorption peak demonstrated a combination between zinc ions and LDRLE [2]. Chelation with metal ions such as zinc or ferrous ions can cause electronic transitions in peptide molecules or changes in some chromophoric groups, thereby resulting in red or blue shifts of the ultraviolet absorption peak of peptides [25]. A similar trend was noted by Sun et al. [6].
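Locating such a peak shift programmatically is straightforward; the sketch below does so on synthetic Gaussian spectra centered at the wavelengths reported above (the spectra themselves are illustrative, not measured data):

```python
# Illustrative sketch: locating a UV absorption maximum and reporting the
# shift after chelation, on synthetic spectra (not measured data).
import numpy as np

wavelengths = np.arange(190, 400, 1.0)  # nm

def gaussian(center: float, width: float = 12.0) -> np.ndarray:
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

ldrle = gaussian(200.0)    # free peptide, peak at 200 nm (from the text)
chelate = gaussian(220.0)  # chelate, peak at 220 nm (from the text)

peak_free = wavelengths[np.argmax(ldrle)]
peak_chel = wavelengths[np.argmax(chelate)]
print(f"peak shift: {peak_free:.0f} nm -> {peak_chel:.0f} nm "
      f"(redshift of {peak_chel - peak_free:.0f} nm)")
```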
FT-IR Analysis
FT-IR spectra of pure LDRLE and the LDRLE-zinc chelate are shown in Figure 5. Significant differences were found in the FT-IR spectrum of the LDRLE-zinc chelate in comparison with that of pure LDRLE. After zinc chelation, the absorption peak of LDRLE at 3497 cm−1 (indicative of the stretching of -N-H) moved to 3474 cm−1 [26]. In addition, a new absorption peak appeared at 2438 cm−1 (reflecting the vibration of the -C-N- bond) after zinc chelation. These results suggested that zinc ions had chelated with the amide bond of LDRLE [4]. Moreover, absorption peaks at 2943 cm−1 and 1614 cm−1 in the spectrum of LDRLE (both corresponding to the stretching of -C=O) shifted to 2939 and 1618 cm−1, respectively. In addition, new peaks appeared at 1815 and 864 cm−1 (representing the deformation vibrations of -C-O- and -COO-, respectively) after zinc chelation, demonstrating that the zinc ions had bonded to the carboxyl group of LDRLE [26]. In general, zinc ions mainly bonded to the amino group and carboxyl group of LDRLE.
Microstructure
The microstructure of LDRLE was irregular and rough, with numerous loose fragments accumulated on the surface (Figure 6a). In comparison, the microstructure of the LDRLE-zinc chelate was more compact, with aggregated particles on the surface (Figure 6b), suggesting that zinc chelation promoted the aggregation of LDRLE. It has been demonstrated that intermolecular aggregation may occur between some active groups of proteins in aqueous solutions. These active groups, such as the carboxyl group, sulfhydryl group and amino group, can bond with each other through hydrogen bonds [27]. A previous study found that zinc chelation can promote this aggregation [3]. After zinc chelation, the hydrophilic groups of peptides participating in the chelation with zinc ions are hidden inside the peptide chains. This trend is beneficial for the intermolecular aggregation of peptides [27].
Effects of Thermal Treatment on Zinc-Solubility
Zinc solubility is one of the important factors affecting the bioavailability of zinc ions in the body [28]. The zinc-solubility profile of LDRLE-zinc chelate during heating at 100 °C is shown in Figure 7. In general, LDRLE-zinc chelate showed poorer zinc-solubility stability than zinc sulphate and zinc gluconate. Within the heating time of 10-60 min, the zinc solubility of zinc sulphate and zinc gluconate was not reduced significantly (p > 0.05). In contrast, the zinc solubility of LDRLE-zinc chelate markedly decreased after incubation at 100 °C for 20-60 min (p < 0.05). This result contradicted the results of previous studies [3,6]. Although chelation with peptides can protect zinc ions against precipitation caused by thermal treatment [3], peptide sequences may be degraded under prolonged heating at high temperatures and the structure of the peptide-zinc chelate may collapse, resulting in a reduction in zinc solubility.

Figure 7. Zinc-solubility stability of LDRLE-zinc chelate, zinc sulphate and zinc gluconate during incubation at 100 °C for 10-60 min. Different lowercase letters (a-d) above the bars or near the lines indicate significant differences (p < 0.05).
Effect of Various pH Values
As shown in Figure 8, LDRLE-zinc chelate showed higher zinc solubility under acidic conditions (pH 2.0-6.0) than under alkaline conditions (pH 8.0-10.0) (p < 0.05). A similar trend was also observed for zinc sulphate and zinc gluconate. As pH increases, an insoluble zinc salt can form when OH− generated in aqueous solution reacts with zinc ions [6]. LDRLE-zinc chelate exhibited a higher zinc solubility than zinc sulphate at pH 6.0-10.0 (p < 0.05), demonstrating that zinc ions bonded to peptides can be effectively protected from precipitation under alkaline conditions. Moreover, these results indicate that chelation with LDRLE can improve the stability of zinc ions when they are transported from the stomach (pH 2.0) to the intestine (pH 7.0) [29].
Effect of Gastrointestinal Digestion
Gastrointestinal digestion is an obstacle that zinc fortifiers need to overcome to exhibit physiological functions in vivo [25]. As shown in Figure 9, zinc solubility decreased significantly after LDRLE-zinc chelate passed from the simulated gastric digestion stage (0-90 min) into the simulated intestinal digestion stage (91-240 min) (p < 0.05). The same trend was observed for zinc gluconate and zinc sulphate. When zinc ions enter the intestinal tract from the stomach, a part of the Zn2+ can be converted to insoluble zinc salts with increasing pH value [24]. In the case of LDRLE-zinc chelate, the peptide sequence (LDRLE) may be degraded during gastrointestinal digestion [7], which can weaken the interactions between peptides and zinc ions, leading to a lower zinc solubility. However, further research regarding changes in LDRLE structure during gastrointestinal digestion is needed. More importantly, LDRLE-zinc chelate showed much higher zinc solubility than zinc gluconate and zinc sulphate at the intestinal digestion stage (p < 0.05), suggesting that chelation with LDRLE improved the stability of zinc ions against gastrointestinal digestion. A similar trend was noted in previous studies [6,26].
Zinc Transportation across Caco-2 Cells
The results of the current study demonstrated that ZnSO4, zinc gluconate and LDRLE-zinc chelate had no significant (p > 0.05) cytotoxicity toward Caco-2 cells. As shown in Figure 10, LDRLE-zinc chelate showed a higher zinc transport amount than zinc sulphate from an incubation time of 60 min (p < 0.05), while the chelate showed a higher transport amount than zinc gluconate at 120 min (p < 0.05), suggesting that LDRLE-zinc chelate can improve zinc transportation across the intestinal membrane [27]. One reason is that LDRLE-zinc chelate had a higher zinc solubility than zinc sulphate and zinc gluconate at pH 6.0-8.0 (Figure 8) and under gastrointestinal digestion (Figure 9). When zinc fortifiers move from the stomach to the intestine, the increased pH value and gastrointestinal digestion both reduce the zinc solubility of the zinc fortifiers, resulting in poor zinc transportation [28]. Chelation with LDRLE improved the solubility of zinc ions at pH 6.0-8.0 and under gastrointestinal digestion, so LDRLE-zinc chelate showed a higher ability for zinc transport. However, much work remains to investigate the absorption and transportation mechanism of the chelate in vivo.
Figure 10. Zinc transport contents at the basolateral sides of Caco-2 cell monolayers of LDRLE-zinc chelate, zinc sulphate and zinc gluconate. Different lowercase letters (a-d) above the bars indicate significant differences (p < 0.05).
Preparation of Sweet Almond Expeller Amandin Hydrolysates
Following the modified method of Souza et al. [30], sweet almond expeller was ground and passed through a 40-mesh sieve. The powder was degreased using n-hexane (1:15, m/v) two times. The obtained defatted powder (25 g) was dispersed into 20 mmol/L of Tris-HCl buffer (pH 8.0, 500 mL) and then stirred at 35 °C and 175 r/min for 120 min. Afterwards, the dispersion was filtered through filter paper and the filtrate was pooled and centrifuged at 9500× g for 25 min. The supernatant was pooled and adjusted to pH 3.5 with 0.1 mol/L of HCl or 0.1 mol/L of NaOH, and then incubated at 4 °C overnight. After centrifugation at 6000× g at 4 °C for 35 min, the precipitate was collected and dialyzed against deionized water at 4 °C for 8 h. Then, the dialysate was lyophilized and sweet almond expeller amandin (SAEA) was obtained.
The obtained SAEA (2 g) was dispersed in 20 mmol/L of Tris-HCl buffer (90 mL) and adjusted to pH 8.0 with 0.1 mol/L NaOH, and then Flavourzyme (0.05 g) and Alcalase (0.1 g) were added. The mixture was stirred in a water bath (120 r/min) at 50 °C for 125 min, and then heated in boiling water for 8 min to deactivate the enzymes. Afterwards, the reaction solution was centrifuged at 15,500× g and 4 °C for 16 min using a TDL-20 centrifuge. The supernatant was pooled and freeze-dried to obtain sweet almond expeller amandin hydrolysates (SAEAH). In addition, the trinitrobenzenesulfonic acid method was employed to determine the degree of hydrolysis [31].
Purification of SAEAH
SAEAH was dissolved in ultrapure water (2 mg/mL) and passed through a W-45 ultrafiltration membrane (0.45 µm, Daning Co., Dalian, China). The filtrate was purified using gel column chromatography (Φ1.2 × 80 cm) with Sephadex G-15 as the stationary phase and distilled water as the elution solution (2.6 mL/min). The monitored wavelength was 220 nm. Effluent fractions were collected at five-minute intervals, lyophilized (with an LGJ-10N freeze-dryer, Keya Instrument Co., Beijing, China) and subjected to determination of their zinc-chelating capacity. Subfractions of excellent zinc-chelating ability were isolated using reversed-phase high-performance liquid chromatography (RP-HPLC). The RP-HPLC isolation was conducted with a Zorbax analytical C18 column (4.6 × 250 mm, Agilent Technologies, Palo Alto, CA, USA). The elution solvent consisted of acetonitrile (mobile phase B) and trifluoroacetate (mobile phase A, 0.1%, v/v). From 0 to 30 min, the RP-HPLC isolation was performed with an increasing concentration of acetonitrile (from 5% to 35%, v/v), and then with a constant concentration of acetonitrile (35%, v/v) for 10 min. The monitored wavelength was 220 nm. Subfractions corresponding to the elution peaks were separately pooled and lyophilized, and then used for analysis of their zinc-chelating ability. Peptide sequences of the subfractions possessing excellent zinc-chelating ability were analysed.
Zinc-Chelating Ability Assay
Zinc-chelating ability was determined using the PAR colorimetric method [32]. Briefly, 250 µL of sample solution (dissolved in 0.1 mol/L of HEPES-KOH buffer) was mixed with 125 µL DTT (8 mmol/L), 125 µL zinc sulphate (ZnSO4, 250 µmol/L) and 2 mL ultrapure water. The mixed solution was stirred (180 r/min) at 37 °C for 10 min, and then 250 µL PAR (0.2 mmol/L, pH 7.5) was added. After incubation at 37 °C for 3 min, the absorbance at 500 nm was read. Zinc content was quantified by regression of the zinc sulphate standard curve (A = 0.0901 ln(C) + 0.1012; R2 = 0.9802; where A represents the absorbance at 500 nm, and C is the concentration of Zn2+, µg/mL). Zinc-chelating capacity was calculated as follows:

Zinc-chelating capacity (mg/g) = (Cc − Cs) × V × 65.38/(1000 × M)

where Cc (µmol/L) is the zinc concentration in the reaction solution without samples; Cs is the zinc concentration in the reaction solution after the chelating reaction (µmol/L); V is the volume of the reaction solution (L); M is the mass of the sample (g); and 65.38 is the molar mass of zinc (g/mol).
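To make the quantification concrete, a minimal Python sketch of the standard-curve inversion and the capacity formula as reconstructed above is given below. The function names and the numerical example (reaction volume, sample mass) are hypothetical and not taken from the original work, and the standard curve is assumed to use a natural logarithm.

```python
import math

def zn_conc_from_absorbance(a500: float) -> float:
    """Invert the reported standard curve A = 0.0901*ln(C) + 0.1012
    (assuming 'ln'); returns the Zn2+ concentration C in ug/mL."""
    return math.exp((a500 - 0.1012) / 0.0901)

def chelating_capacity_mg_per_g(cc_umol_per_l: float, cs_umol_per_l: float,
                                volume_l: float, mass_g: float) -> float:
    """Zinc-chelating capacity (mg/g) from the stated definitions:
    (Cc - Cs)*V gives chelated zinc in umol; *65.38 converts to ug;
    /1000 converts to mg; /M normalizes per gram of sample."""
    return (cc_umol_per_l - cs_umol_per_l) * volume_l * 65.38 / 1000.0 / mass_g

# Illustrative values only: total reaction volume ~2.75 mL, 10 mg sample
print(f"{chelating_capacity_mg_per_g(250.0, 100.0, 2.75e-3, 0.010):.2f} mg/g")
```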
Identification, Synthesis and Physicochemical Characteristics of Peptide Sequences
Identification of the SAEAH peptide sequences was conducted on a hybrid quadrupole orbitrap mass spectrometer (Q Exactive, Thermo Fisher, Bremen, Germany) using the same parameters as described by Xu et al. [33]. The obtained mass spectrometry data were analysed using Peak-Studio-7.5-De-Novo™ software (Bioinformatics Solutions, Inc., Waterloo, Canada). Moreover, the obtained amino acid sequences were verified against the National Center for Biotechnology Information database (Bethesda, MD, USA). Chemical synthesis of the peptide sequences was performed by Yaoshan Biological Tech. Co. (Shaoxing, China). In addition, physicochemical characterization of the obtained peptide sequences was conducted using the AHTPDB database (http://crdd.osdd.net/raghava/ahtpdb/, accessed on 21 April 2022) [34].
Preparation of Peptide-Zinc Chelate
Preparation of the peptide-zinc chelate was conducted following the description of Sun et al. [6]. Briefly, SAEAH peptide was mixed with 250 µmol/L ZnSO4·7H2O (25:1, m/m) and adjusted to pH 7.6. The reaction solution was stirred (135 r/min) at 27 °C for 50 min, and then centrifuged at 4500× g for 25 min. The supernatant was pooled and precipitated with anhydrous ethanol (1:4, v/v) at 4 °C for 30 min. After centrifugation at 12,000× g for 12 min, the precipitate was lyophilized and the SAEAH peptide-zinc chelate was obtained.
Fourier-Transform Infrared Spectroscopy (FT-IR)
Briefly, dry KBr (around 0.1 g) was blended thoroughly with SAEAH peptide-zinc chelate or purified SAEAH peptide (2 mg). The mixed powder was ground and pressed into a tablet of 1-2 mm, and then scanned using an FT-IR-850 spectrometer (Atomic Instruments, Suzhou, China). The scanning wavenumbers ranged from 4000 to 400 cm−1.
Surface Microstructure Analysis
After being sputter-coated with a 10 nm-thick layer of gold, the microstructure of SAEAH peptide-zinc chelate was investigated with a 7500F scanning electron microscope (JSM, Tokyo, Japan) [33]. Micrographs were taken with a scale bar of 1 µm, at a magnification of 5000× and an acceleration voltage of 10 kV.
Zinc Solubility at Different pH Values
Effects of different pH values (2.0-10.0) on zinc-solubility of SAEAH-zinc chelate were investigated using the same procedure as Xu et al. [33]. Zinc content of the chelate solution before and after each treatment was determined using the PAR method [24], and zinc solubility was calculated by Equation (2). Both zinc sulphate (100 µg/mL) and zinc gluconate (100 µg/mL) were used as comparisons.
Effect of the Gastrointestinal Digestion
The zinc solubility of SAEAH-zinc chelate during simulated gastrointestinal digestion was investigated according to the description of Xu et al. [33]. Briefly, SAEAH peptide-zinc chelate was first treated with simulated gastric fluid (containing pepsin (0.4 mg/mL) and NaCl (8.77 mg/mL)) at 37 °C and pH 2.0 for 90 min. Then the chelate was treated with simulated intestinal fluid (composed of 4.5 mg/mL of trypsin, 62.5 mg/mL of NaHCO3 and 30 mg/mL of pig bile salt) at 37 °C and pH 6.8 for 150 min. During the simulated gastrointestinal digestion, 1 mL of the reaction solution was withdrawn at 0, 10, 30, 60, 90, 120, 150, 180 and 240 min, respectively, and then incubated in boiling water for 6 min. After centrifugation at 4500× g for 25 min, the supernatant was pooled and the zinc content was determined. Based on this, zinc solubility was calculated using Equation (2). Both zinc sulphate (100 µg/mL) and zinc gluconate (100 µg/mL) were used as comparisons.
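A minimal sketch of the solubility bookkeeping at each sampling point is given below. Equation (2) itself is not reproduced in this excerpt, so the percentage form shown is an assumption; the stage labels simply follow the 0-90 min gastric / 91-240 min intestinal schedule described above.

```python
def zinc_solubility_percent(zn_soluble_ug: float, zn_total_ug: float) -> float:
    """Soluble zinc in the supernatant as a percentage of total zinc;
    assumed form of the paper's Equation (2), which is not shown here."""
    return 100.0 * zn_soluble_ug / zn_total_ug

# Sampling schedule (min) with digestion stage, per the protocol above
for t in [0, 10, 30, 60, 90, 120, 150, 180, 240]:
    stage = "gastric" if t <= 90 else "intestinal"
    print(f"t = {t:3d} min ({stage}): e.g. "
          f"{zinc_solubility_percent(82.0, 100.0):.1f}% soluble")  # dummy data
```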
Zinc Transport across Caco-2 Cells
As per the method of Wang et al. [2] with some modifications, Caco-2 cells were seeded on Transwell plates (1.5 × 10^5 cells/cm2) and cultured in DMEM containing a penicillin-streptomycin-neomycin mixture (1 mg/mL) and foetal bovine serum (20 mg/mL) at 37 °C in a humidified atmosphere of 5% CO2. The transepithelial electrical resistance (TEER) was determined using a Millicell-ERS-00002 system (Millipore Co., Burlington, MA, USA). A cell monolayer was considered established if the TEER was more than 400 Ω·cm2. Then 0.6 mL of HBSS buffer (without calcium and magnesium) was added to both the apical (AP) side and the basolateral (BL) side. After incubation at 37 °C for 30 min, the HBSS buffer was removed. Immediately, peptide-zinc chelate (300 µg/mL, dissolved in HBSS buffer) was added to the AP side (0.4 mL/well), whereas HBSS buffer was added to the BL side (0.6 mL/well). The cells were cultured at 37 °C for 120 min. At 30 min intervals, 50 µL of sample solution was withdrawn from the BL side for zinc content determination [32], and 50 µL of fresh HBSS was added immediately. Zinc transported was calculated as the amount of zinc at the BL side [2]. Zinc content was determined using the PAR colorimetric method [24]. Zinc gluconate and zinc sulphate (5 mmol/L) were used as comparisons, while the blank control was treated with HBSS buffer only. In addition, the effects of ZnSO4/zinc gluconate and the peptide-zinc chelate on cell viability were measured using the MTT method [38].
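Because 50 µL aliquots are withdrawn from the BL side and replaced with fresh HBSS every 30 min, the cumulative transported amount has to be corrected for the zinc removed at earlier samplings. A minimal Python sketch of that correction is shown below; the concentration values are hypothetical and the helper name is not from the original work.

```python
def cumulative_transport_ug(bl_concs_ug_per_ml: list[float],
                            bl_volume_ml: float = 0.6,
                            sample_ml: float = 0.05) -> list[float]:
    """Cumulative zinc at the basolateral (BL) side per timepoint, adding
    back the zinc removed in earlier 50 uL aliquots (replaced by HBSS)."""
    removed, out = 0.0, []
    for c in bl_concs_ug_per_ml:
        out.append(c * bl_volume_ml + removed)
        removed += c * sample_ml
    return out

# Hypothetical BL concentrations (ug/mL) at 30, 60, 90 and 120 min
print(cumulative_transport_ug([0.8, 1.5, 2.1, 2.6]))
```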
Data Analysis
All tests were carried out at least in triplicate (n ≥ 3). The significance of differences among data was analysed with one-way analysis of variance. Multiple comparisons were carried out using IBM SPSS Statistics software (Version 16, Chicago, IL, USA) with a significance level of p < 0.05.
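As an illustration of the analysis described, a one-way ANOVA over triplicate measurements can be run as in the sketch below; the data values are invented for demonstration and scipy is used here in place of SPSS.

```python
from scipy import stats

# Invented triplicate zinc-solubility readings (%) for three treatments
znso4     = [96.1, 95.4, 96.8]
gluconate = [94.2, 95.0, 93.7]
chelate   = [88.5, 87.9, 89.4]

f_stat, p_value = stats.f_oneway(znso4, gluconate, chelate)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # groups differ if p < 0.05
```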
Conclusions
Two novel peptides, VDLVAEVPRGL and LDRLE, were identified in sweet almond expeller amandin hydrolysates. LDRLE, with its excellent zinc-chelating capacity (24.73 mg/g), was selected to prepare a peptide-zinc chelate. Both the amino group and the carboxyl group of LDRLE were the main bonding sites for zinc ions. Moreover, chelation with LDRLE significantly improved zinc solubility relative to zinc sulphate at pH 6.0-10.0 and under gastrointestinal digestion (p < 0.05). In addition, LDRLE-zinc chelate showed a higher capacity to improve zinc transportation than zinc sulphate and zinc gluconate (p < 0.05). These results shed light on applications of peptides identified in sweet almond expeller amandin hydrolysates as ingredients of functional foods to improve zinc bioavailability.
Author Contributions: Conceptualization, methodology, investigation, writing-original draft preparation, funding acquisition, J.Z.; Data curation, writing-review and editing, validation, Z.Y. All authors have read and agreed to the published version of the manuscript. | 2022-11-19T16:14:30.525Z | 2022-11-01T00:00:00.000 | {
"year": 2022,
"sha1": "0be0313db29594ae75418aa40019b638cf688e52",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/27/22/7936/pdf?version=1668607673",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "91d17930ff5811aa14bb633eae5e9e4d36b31dc4",
"s2fieldsofstudy": [
"Biology",
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
247989088 | pes2o/s2orc | v3-fos-license | AKVANO®: A Novel Lipid Formulation System for Topical Drug Delivery—In Vitro Studies
A novel formulation technology called AKVANO® has been developed with the aim to provide a tuneable and versatile drug delivery system for topical administration. The vehicle is based on a water-free lipid formulation in which selected lipids, mainly phospholipids rich in phosphatidylcholine, are dissolved in a volatile solvent, such as ethanol. With the aim of describing the basic properties of the system, the following physicochemical methods were used: viscometry, dynamic light scattering, NMR diffusometry, and atomic force microscopy. AKVANO formulations are non-viscous, with virtually no or only very minute aggregates formed, and when applied to the skin, e.g., by spraying, a thin film consisting of lipid bilayer structures is formed. Standardized in vitro microbiological and irritation tests show that AKVANO formulations meet criteria for antibacterial, antifungal, and antiviral activities while indicating that they are non-irritant to the skin and eye. The ethanol content in AKVANO facilitates incorporation of many active pharmaceutical ingredients (>80 successfully tested) and the phospholipids seem to act as a solubilizer in the formulation. In vitro skin permeation experiments using Strat-M® membranes have shown that AKVANO formulations can be designed to alter the penetration of active ingredients by changing the lipid composition.
Introduction
Topically administered pharmaceutical and cosmetic products can be designed to exert their action locally, regionally, or systemically. Within each of these categories, the target site can be more or less defined. Products for local action could, for instance, target the skin surface, viable epidermis, dermis, or skin appendages such as hair follicles. The target could also be nerves or muscles, or the aim could be absorption into the circulatory system (transdermal delivery). Each of these delivery routes demands a well-designed formulation system, which also must take into consideration aspects such as the solubility and stability of the active ingredient.
This is the first publication describing a novel topical delivery system called AKVANO® (an abbreviation for "water-free"), which is aimed to be tuneable to target the desired site, while minimizing local irritation and providing a convenient mode of application for the user. The AKVANO technology is a formulation platform for topical delivery of various pharmaceutical ingredients but has also found use in consumer health care products.
Composition of AKVANO
The composition of AKVANO has been developed with the aim of integrating lipid soluble ingredients into the polar lipids while achieving a clear appearance and a low viscosity of the resulting cutaneous solution. The major portion of the total content of this solution (up to 80% or more) rapidly evaporates within minutes after application to skin, resulting in formation of a polar lipid film containing the active pharmaceutical ingredient. The principal pharmaceutical excipients in AKVANO are described below.
Lipids
The polar lipid component is normally based on phospholipids from soybean or other plant material. In addition, single chain lipids such as monoglycerides, isopropyl myristate (IPM), or other fatty acid alcohol esters can be added. The total concentration of lipids is normally in the range of 5% to 25% by weight.
Alcohol
The preferred alcohol is ethanol and, normally, an anhydrous quality is used to avoid unnecessary addition of water. A typical concentration of alcohol is from 20% up to as high as 95% by weight. Ethanol is an efficient preservative for AKVANO and, accordingly, no additional preservative needs to be added to the formulation. For non-pharmaceutical products, and depending on national legislation, ethanol may have to be denatured. A denaturing system based on different short-chain alcohols such as 2-propanol and tert-butanol is usually preferred.
Keratolytic Agents
In some applications, as is the case with other topical dosage forms, addition of keratolytic agents can be beneficial. Examples of keratolytic agents suitable to be used with AKVANO are α- and β-hydroxy acids, such as glycolic acid, lactic acid, malic acid, salicylic acid, and their salts. Another suitable keratolytic substance is urea.
Silicone Oil
Part of the volatile solvent system can consist of a volatile silicone oil such as a cyclomethicone, specifically decamethylcyclopentasiloxane (also denoted cyclomethicone D5). It is a fully methylated cyclic siloxane containing five repeating units of the formula (-(CH3)2SiO-). It is used in a concentration of up to 60% by weight. Despite a rather high boiling point, cyclomethicone D5 has a high volatility due to its low enthalpy of vaporization. This gives a 'dry' feeling when applied to the skin (no cooling).
Active Agents
As mentioned earlier, many active substances have been tested in AKVANO and some are listed in the patent applications. The successfully tested substances in our lab typically have a positive log Kow and a molecular weight below 1000 g/mol. Examples of drugs that have been tested are anti-psoriatic, anti-acne, anti-eczema, antimicrobial, anti-inflammatory, antifungal, and wound-healing agents, as well as local anaesthetics and non-steroidal anti-inflammatory drugs (NSAIDs).
Additional Components
Depending on the desired properties of the final formulation, other substances such as fragrances, essential oils, and thickeners can be added.
Preparation of AKVANO
In brief, a general procedure to prepare an AKVANO formulation can be described as follows. Phospholipids, with additional lipids, are weighed and mixed with part of the ethanol until a clear solution is obtained. In a separate vial, the active pharmaceutical or cosmetic ingredient(s) are weighed and dissolved in the remaining ethanol. The mixture is stirred using a magnetic stirrer or impeller until a clear solution is obtained and, finally, the active solution and, optionally, a silicone oil (such as cyclomethicone) are added to the lipid solution. Depending on compatibility, keratolytic agents and other optional ingredients can be added to either of the aforementioned solutions prior to mixing them. The procedure may be further adjusted depending on the requirements of the specific formulation, especially when the batch size is scaled up.
Physicochemical Characterization of AKVANO Formulations
These characterizations were carried out to obtain a better understanding of the extent of lipid aggregation and the influence of water on AKVANO formulations based on a solvent mixture of ethanol (EtOH) and cyclomethicone D5. The formulations were based on two different types of lipids, DMPC, which is known to form bilayer membrane structures, or DOPE, which is more prone to form reversed hexagonal structures, together with the corresponding formulations with added water (~1.5%). The formulations were characterized using viscometry, dynamic light scattering (DLS), and nuclear magnetic resonance (NMR) diffusometry.
Viscometry
Viscosity measurements were performed on formulations with and without the addition of water, as well as on a sample without lipids, using a Lovis 2000M Microviscometer (Anton Paar, Graz, Austria) at 25 °C. A thin capillary (1.59 mm in diameter) was filled with the sample liquid using a syringe. A small steel ball (1.5 mm in diameter) was then introduced, and the capillary was sealed off with a lid, making sure that no air bubbles were present. During measurements, the capillary was tilted at an angle of 80° and the time taken by the steel ball to descend through the capillary was measured with an accuracy of 0.05%. The capillary was then tilted to 80° in the opposite direction so that the ball falls back, and the time for the return was measured. Four consecutive measurements were made on each sample to ensure good accuracy.
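For rolling-ball instruments of this type, the dynamic viscosity is commonly obtained from the rolling time via eta = K·(rho_ball − rho_fluid)·t, where K is an instrument- and angle-specific calibration constant. The Python sketch below illustrates this relation; the densities and the value of K are illustrative assumptions, not instrument data.

```python
def rolling_ball_viscosity_mpas(time_s: float, rho_ball: float,
                                rho_fluid: float, k_cal: float) -> float:
    """Dynamic viscosity (mPa*s) from the rolling-ball relation
    eta = K * (rho_ball - rho_fluid) * t; densities in g/cm3."""
    return k_cal * (rho_ball - rho_fluid) * time_s

# Four consecutive runs, as in the protocol; all numbers illustrative
times = [4.02, 4.05, 4.03, 4.04]
eta = rolling_ball_viscosity_mpas(sum(times) / len(times),
                                  rho_ball=7.85, rho_fluid=0.85, k_cal=0.084)
print(f"eta ~ {eta:.2f} mPa*s")
```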
Dynamic Light Scattering (DLS)
DLS experiments were performed on a Malvern Zetasizer Nano ZS (Malvern Instruments, Malvern, UK) at 25 °C. The formulation samples were filtered through a Minisart SRP 25 PTFE membrane (0.45 µm) (Sartorius, Göttingen, Germany) before measurements to remove larger particles, such as dust. The samples were equilibrated for 5 min before the experiments and three measurements were performed on each sample. The viscosity was set to the value determined for the pure silicone oil/EtOH (74.7/25.3%) solvent mixture, i.e., 2.37 mPa·s, and a refractive index of 1.48 was used for the dispersed phase.
NMR Diffusometry
The experiments were performed on a Bruker AVII-200 spectrometer equipped with a Bruker DIFF-25 probe and a Bruker GREAT 1/40 gradient amplifier (Bruker, Billerica, MA, USA). The temperature control was calibrated using a thermocouple immersed in an NMR tube to measure the actual temperatures at the position of the sample. The self-diffusion coefficients were determined using the pulsed gradient stimulated echo method [5] with a pulsed-field gradient width of 1 ms, a diffusion time (∆) of 20 ms, and an acquisition time of 1 s. The gradient strength was linearly ramped in a range selected to obtain an appropriate decay of the spin echo in the respective experiments. For each spectrum, eight scans were recorded with a repetition delay of 1 s. Each experiment was preceded by four dummy scans.
Characterization of the Film Formed after Evaporation by Atomic Force Microscopy (AFM)
This study was carried out using an XE-100 (Park Instruments, Suwon, Korea) to investigate whether the lipids dissolved in silicone oil and ethanol form lipid bilayers when dried on a hydrophilic substrate. This was conducted by letting a small drop of formulation dry on a hydrophilic silica surface and then measuring the layer thickness with AFM. The formulations were first diluted 100 times in the ethanol (22.2%)/cyclomethicone (77.8%) solution to reduce the overall thickness of the coating formed, thus improving the chances of finding single bilayers deposited on the substrate. A drop of diluted formulation was applied to a clean silica substrate (boiled in acid and base just prior to use) inside a laminar air flow (LAF) bench, and the substrates were then leaned at a 45° angle, thereby creating a thickness gradient, with the thinnest coating at the upper part of the substrate. The substrates were left in the LAF bench to dry overnight. AFM images were produced by scanning the surfaces in intermittent contact mode in air (using a PPP-NCHR cantilever from Park Systems, Suwon, Korea) at scan speeds of 0.7-1 Hz, while recording the topography, amplitude, and phase signals. Images were evaluated in the XEI Park Systems software (Park Systems, Suwon, Korea), where profile lines were drawn in selected locations to measure the step height of the lipid bilayer structures.
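A step height can be estimated from such a profile line by separating the "substrate" and "island" height populations. The sketch below shows one simple way to do this on a synthetic profile; it is an illustration only and is not the algorithm used by the XEI software.

```python
import numpy as np

def step_height_nm(profile_nm: np.ndarray) -> float:
    """Split a profile into low (substrate) and high (island) height
    populations at the mid-range threshold; the difference of the
    population means estimates the step height."""
    threshold = 0.5 * (profile_nm.min() + profile_nm.max())
    low = profile_nm[profile_nm < threshold]
    high = profile_nm[profile_nm >= threshold]
    return float(high.mean() - low.mean())

rng = np.random.default_rng(0)
# Synthetic profile: substrate at ~0 nm, one bilayer island at ~4.5 nm
profile = np.concatenate([rng.normal(0.0, 0.2, 200),
                          rng.normal(4.5, 0.2, 150),
                          rng.normal(0.0, 0.2, 200)])
print(f"step height ~ {step_height_nm(profile):.1f} nm")
```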
Antibacterial Activity
The test was carried out by QACS Ltd. Laboratory (Athens, Greece) as per the European Standard test method EN 1500:2013 [6]. The method is specified for verifying hygienic hand rubs, where the test product, when rubbed onto artificially contaminated hands of volunteers, should reduce the release of transient flora. The live test organism (Escherichia coli K12 NCTC 10538) was applied and recovered to obtain a baseline count. The test product (PP) or reference product (RP) was then applied, and surviving test organisms were recovered in a sampling broth containing neutralizers to terminate the effect of any residual disinfectant. AKVANO skin disinfectant formulations were used as test products and 2-propanol, 60% in water (v/v), was used as the reference. The organisms were enumerated, counts were transposed to the log10 (log) system, and the differences between the numbers recovered after use of the AKVANO or reference formulations and the baseline counts were established and statistically analysed for significance. The larger the difference between the two counts, the more effective the product. Each volunteer repeated the procedure, first with the reference and then with the product to be evaluated, and then with the product first and the reference after. An AKVANO foot spray formulation was also tested for antibacterial activity by Lab-test laboratorium S.C. (Katowice, Poland) according to the European Standard [7]. In brief, the test method is dilution-neutralization with neutralizer D/E broth, under clean conditions (0.3 g/L bovine albumin), with a contact time of 30 s and a test temperature of 20.0 ± 0.6 °C, diluted in distilled water, against Pseudomonas aeruginosa (ATCC 15442), Staphylococcus aureus (ATCC 6538), Enterococcus hirae (ATCC 10541), and Escherichia coli K12 (NCTC 10538).
Antifungal Activity
The AKVANO formulation was tested for antifungal activity by Lab-test laboratorium S.C. (Katowice, Poland) according to the European Standard [8]. This European Standard specifies a test method and the minimum requirements for fungicidal activity of chemical disinfectant and antiseptic products. AKV014 was evaluated under clean conditions with interfering substance 0.3 g/L bovine albumin, at a contact time of 30 s and a test temperature of 20.0 ± 0.6 °C, diluted in distilled water. The formulation was tested at concentrations of 10-97% v/v. The incubation time was 48 h using the pour plate method at 29.5-30.5 °C. At the end of the contact time, an aliquot was taken and the fungicidal action against the microbial strain Candida albicans (ATCC 10231) in this portion was immediately neutralized or suppressed by a validated method (dilution-neutralization). The numbers of surviving fungi in each sample were determined and the reduction was calculated.
Antiviral Activity
The test was carried out by QACS Ltd. Laboratory (Athens, Greece) as per the European Standard test method [9]. The antiviral activity of AKVANO formulations (intended for use as skin disinfectants) was tested against three virus strains: Adenovirus type 5, Poliovirus type 1, and Murine norovirus. The AKVANO product, at a concentration of 97%, was added to a test suspension of titrated viruses in bovine serum albumin solution of 0.3 g/L (clean conditions). The mixtures were maintained at 20 °C for 60 s. At the end of the contact time, an aliquot was taken and the virucidal activity was suppressed by dilution in ice-cold maintenance medium. The dilutions were then inoculated onto cell monolayers in 96-well culture plates for the titration of the remaining viruses. The titres of the viruses, expressed as Tissue Culture Infectious Dose (TCID50) values after 5 days of incubation, were determined and expressed on a log scale. Reduction in virus infectivity was calculated from the differences of the log virus titres before (control) and after treatment with the AKVANO product. According to the EN 14476 standard, a product has antiviral activity when the reduction of the virus titre is at least four log units.
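The pass/fail criterion reduces to a difference of log10 TCID50 titres, as in the short Python sketch below; the titre values are hypothetical.

```python
import math

def log_reduction(titre_control: float, titre_treated: float) -> float:
    """Virucidal log reduction: log10(control titre) - log10(treated titre),
    both in TCID50/mL; EN 14476 requires at least 4 log units."""
    return math.log10(titre_control) - math.log10(titre_treated)

lr = log_reduction(1e7, 5e2)  # hypothetical titres before/after treatment
print(f"{lr:.1f} log units -> {'active' if lr >= 4 else 'not active'}")
```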
In Vitro Skin Irritation Test
This test was carried out by QACS Ltd. Laboratory (Athens, Greece) according to the Organisation for Economic Co-operation and Development (OECD) Guideline No. 439 [10] and using the protocol In Vitro EpiDerm™ Skin Irritation Test [11]. Skin irritation refers to the generation of reversible damage to the skin following exposure to the chemical to be evaluated for up to 4 h [10]. The test consisted of a topical exposure of an AKVANO formulation (intended for use as a skin disinfectant) to a reconstructed human epidermis (RhE) model followed by a cell viability test. Cell viability was measured by dehydrogenase conversion of MTT, present in cell mitochondria, into a blue formazan salt that was quantitatively measured photometrically after extraction from the tissue. The reduction of the average viability of three tissues exposed to the chemical, in comparison to the average viability of three negative controls (treated with water), was used to predict the skin irritation potential. The negative control used was DPBS without Ca2+ and Mg2+, and a 5% sodium dodecyl sulphate (SDS) solution was used as a positive control.
In Vitro Eye Irritation Test
This study was carried out by Research Institutes of Sweden AB (RISE, Gothenburg, Sweden) according to the OECD Guideline No. 492 [12] and using the protocol In Vitro EpiOcular Eye Irritation Test [13]. The eye irritation test is based on the use of a reconstructed cornea epithelial model, and the relevant materials were obtained from MatTek In Vitro Life Science Laboratories (Bratislava, Slovak Republic). The epithelial models are topically exposed to the product to be evaluated and, after recovery, the viability of the cells is measured via metabolic activity. Yellow water-soluble MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) is metabolically reduced in viable cells to a blue-violet insoluble formazan, and thus the number of viable cells correlates to the colour intensity determined by photometric measurements after dissolving the formazan in alcohol. For each treatment, the viability percentage relative to a negative control (cell culture water) is calculated. The positive control used was neat methyl acetate. Eye irritation is identified as the ability of the product to be evaluated to reduce the viability of the cells in the epithelial model system. Eye irritation potential of the evaluated product or formulation is predicted if the remaining relative cell viability is below 50% after exposure.
The AKVANO formulation, positive control and negative control were added to EpiOcular™ human cell construct models (MatTek In Vitro Life Science Laboratories, Bratislava, Slovak Republic) pre-treated with DPBS, Dulbecco's Phosphate Buffered Saline (Thermo Fisher Scientific, Waltham, MA, USA), for 30 min, whereafter the tissues were thoroughly washed, followed by a post-treatment immersion, and then allowed to recover for 2 h. After the recovery period, MTT solution was added to the tissues, which were incubated for an additional 3 h at 37 ± 1 °C in 5 ± 1% CO2. Following incubation, the MTT solution was removed, 2-propanol was added, and the plate with the models was shaken rapidly for at least 2 h. The solutions for the tissues were homogenized and transferred to a 96-well plate for absorbance measurement at 570 nm, followed by calculation of the viability of the tissues.
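The viability readout in both in vitro irritation tests is essentially a ratio of blank-corrected MTT absorbances, as sketched below; the absorbance values are invented and the 50% cut-off follows the prediction models cited above.

```python
def relative_viability_percent(a570_treated: list[float],
                               a570_neg_control: list[float],
                               a570_blank: float = 0.0) -> float:
    """Mean blank-corrected absorbance of treated tissues as a percentage
    of the negative-control mean; < 50% predicts irritation potential."""
    treated = sum(a570_treated) / len(a570_treated) - a570_blank
    control = sum(a570_neg_control) / len(a570_neg_control) - a570_blank
    return 100.0 * treated / control

v = relative_viability_percent([1.42, 1.38, 1.45], [1.51, 1.49, 1.54], 0.04)
print(f"viability = {v:.1f}% -> {'non-irritant' if v >= 50 else 'irritant'}")
```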
In Vitro Skin Permeation Studies
These experiments were performed using Strat-M® membranes [14] from Merck (Darmstadt, Germany) to study the permeation of ketoprofen, diclofenac diethylamine, and diclofenac sodium in different AKVANO formulations and in commercially available medicaments. The diffusion cell system [15,16] consisted of an eight-channel peristaltic pump, which delivered PBS buffer pH 7.4 to flow-through diffusion cells with a cross-section area of 0.5 cm2 placed on a stainless-steel platform kept at 37 °C. The receptor fluid was transported through the system in Teflon tubes (0.5 mm ID) at an approximate flow rate of 1.5 mL/h to an eight-channel fraction collector. Strat-M membranes were cut to an appropriate size and placed between the donor chamber and the receiving chamber. Approximately 5 mg of formulation was applied on top of the membranes. The opening of the donor chamber was left uncovered to allow evaporation of the volatile solvent. Receptor fluid was collected at the following time intervals: 0-2, 2-4, 4-6, 6-10, 10-14, 14-18, and 18-24 h. The concentration of active substance in the receptor fluid was analysed by RP-HPLC (Agilent Technologies Inc., Santa Clara, CA, USA) with UV detection at 240 nm for ketoprofen and 276 nm for the diclofenac salts. The separation of ketoprofen was carried out on a Symmetry C8 column (150 × 3.9 mm, particle size 5 µm) from Waters (Milford, MA, USA), and of the diclofenac salts on a ReproSil-Pur C8 column (250 × 4.6 mm, particle size 5 µm) from Dr Maisch HPLC GmbH (Ammerbuch, Germany). Ketoprofen was eluted at a flow of 1-2 mL/min with a gradient from 75% A and 25% B to 100% B over 13 min, where A is methanol/water 40:60 + 0.16% triethylamine + 0.16% acetic acid and B is methanol + 0.16% triethylamine + 0.16% acetic acid; the diclofenac salts were eluted at a flow rate of 1.0 mL/min with a gradient from 70% A and 30% B to 100% B over 15 min, where A is methanol/water 10:90 + 0.1% acetic acid and B is methanol + 0.1% acetic acid. The amount of active substance retained in the membranes was analysed after extraction with 1 mL of methanol overnight.
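In a flow-through setup, the amount permeated during each interval is the fraction concentration times the collected volume (flow rate × interval length), accumulated and normalized to the diffusion area. The sketch below applies this to the sampling schedule above; the fraction concentrations are hypothetical.

```python
INTERVALS_H = [(0, 2), (2, 4), (4, 6), (6, 10), (10, 14), (14, 18), (18, 24)]
FLOW_ML_PER_H = 1.5   # receptor flow rate, per the setup described
AREA_CM2 = 0.5        # diffusion cross-section area

def cumulative_permeation_ug_per_cm2(fraction_concs_ug_per_ml):
    """Cumulative permeated amount per cm2; each collected fraction
    contains conc * flow * interval-length of drug."""
    total, out = 0.0, []
    for (t0, t1), conc in zip(INTERVALS_H, fraction_concs_ug_per_ml):
        total += conc * FLOW_ML_PER_H * (t1 - t0)
        out.append(total / AREA_CM2)
    return out

# Hypothetical fraction concentrations (ug/mL) for the seven intervals
print(cumulative_permeation_ug_per_cm2([0.1, 0.4, 0.9, 1.6, 1.8, 1.5, 1.1]))
```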
Physico-Chemical Characterization of Different AKVANO Formulations
Characterization of model AKVANO compositions was used to understand the significance of the choice of phospholipids for obtaining the desired properties of both the bulk liquid itself and the organization after evaporation (see Section 3.2). AKVANO formulations (AKV001-AKV005) with different compositions (see Table 1) were used for the characterization experiments. The presence of lipid, either DMPC or DOPE, increased the formulations' viscosity by about 25% (Table 2), which could suggest that aggregates are formed. Furthermore, the addition of 1.5% (w/w) of water further increased the viscosity of the AKVANO formulations, which may indicate that water gives some enhancement of the aggregation. Table 3 presents the average values (from triplicates) of the hydrodynamic radii obtained from the DLS measurements. When evaluating the DLS data, it is important to keep in mind that the lipid concentration in the AKVANO formulations used is rather high. This ensures that if any aggregates giving rise to scattering are present, one can expect a good signal. The obtained size distributions are indeed well-defined, with a high repeatability for the triplicates, as seen by the low standard deviation. They are also monomodal with a low polydispersity index, which would suggest the presence of small aggregates (Figure 1). On the other hand, the presence of a large fraction of lipids in AKVANO formulations involves a significant risk of multiple scattering, which can give rise to overestimation of the aggregates' dimensions. In summary, the DLS measurements suggest that the apparent average size of aggregates formed by DOPE is larger than that of those formed by DMPC, and that the presence of water possibly gives rise to some aggregate growth in the DOPE formulation (Table 3). Regarding the average hydrodynamic radii presented in Table 3, it can be stated that they are of the same order of magnitude as would be expected for micelles, which are generally considered to have a radius corresponding to the maximum extended length of the aggregating amphiphile (which in this case would be around 3 nm). However, since the hydrodynamic radius is generally larger than the actual physical radius (due to solvation effects), the data measured for all formulations are probably on the small side for micelles, especially for the DMPC formulation, which exhibited values much smaller than what would be expected for micelles. In contrast, according to the literature, phospholipid/ethanol systems with a higher content of water form vesicular systems with much larger aggregate sizes (ranging from one to several orders of magnitude larger) [17,18].
NMR Diffusometry
A well-resolved diffusion coefficient for the lipid, as well as for the solvent components, e.g., ethanol and cyclomethicone D5, was obtained for all the formulation samples. The diffusion coefficient of the added water was, however, only identified and analysed in the sample with DMPC. The DOPE sample with water was notably different compared to the sample without water in terms of viscosity, DLS, and the diffusion coefficients of the other components. The absence of a distinct water signal in the DOPE sample with water is likely caused by the signal being buried in larger signals arising from the lipid. The possible presence of aggregates is best assessed by calculating the apparent hydrodynamic sizes corresponding to the values of D_lipid (Table 4) derived from the Stokes-Einstein relation [5]:

R_H = k_B T/(6πη D_lipid) (1)

where k_B is Boltzmann's constant, T the absolute temperature, and η the viscosity of the formulation. An important finding from these numbers is that if aggregates are formed, these are small, even smaller than indicated by the DLS data. As was mentioned above, there is a significant risk that, because of likely multiple scattering, the apparent sizes obtained from the DLS measurements are over-estimated. Since the apparent sizes obtained in the NMR experiments are more "direct", these can be expected to reflect the effective sizes of the diffusing entities more reliably. The apparent dimensions corresponding to D_lipid are significantly smaller than expected for micelle-like aggregates. The radius of a spherical micelle is typically close to the extended length of the monomer, which, for the lipids investigated here, is ~3 nm. In addition, the hydrodynamic radius is expected to be slightly larger than the physical radius, due to the influence of solvation. Thus, the lipid self-diffusion data clearly suggest that possible aggregates formed are smaller than typical micelles. It can be noted that, generally, if a molecule is present both as individually dissolved monomers and residing in well-defined aggregates (such as micelles), its observed diffusion coefficient (D_obs) under the condition of fast exchange between the two sites (which there is no reason to believe would not hold here) is the weighted average of the diffusion coefficients corresponding to the two sites (D_monomer and D_aggregate, respectively), according to Equation (2):

D_obs = p_monomer D_monomer + (1 − p_monomer) D_aggregate (2)

where p_monomer is the fraction of molecules present as individually dissolved molecules. Since D_aggregate is typically smaller than D_monomer, it has, in relative terms, a smaller influence on D_obs, and one could, in principle, still have large aggregates present in a fraction low enough that its influence on D_obs is minor. However, considering the high concentration of lipid in the investigated samples, it would be highly unlikely that, had there been a propensity for micelle formation, a major fraction of the lipid would not have resided in aggregates. As was mentioned above, D_water could only be determined for sample AKV003 (DMPC + H2O). It is found that, although the water molecule is smaller than the ethanol molecule, D_water is significantly lower than D_EtOH. This finding can be taken to suggest that there is some extent of preferential binding of water to the lipid. In this context, it can be noted that, at the water concentration used herein, there are only around six water molecules per lipid molecule.
It is possible that, in the presence of a larger fraction of water, the formation of more micelle-like aggregates may be induced. To obtain further understanding of the solution structure, it is valuable to take a closer look at the diffusion data for the solvent components. Table 5 presents the diffusion coefficients of ethanol and cyclomethicone D5 in samples with lipids, normalized to the corresponding values in the pure solvent mixture in the absence of lipids. By comparing the values of D/D0 in Table 5 to the viscosities presented in Table 2, one finds that the relative decrease in D in the presence of lipid is very close to the inverse of the corresponding relative increase in viscosity. This indicates that the reduction in diffusion rate is mainly a consequence of an increase in bulk viscosity; had a significant volume fraction of large aggregates been present in the samples, one would have expected an additional reduction in solvent diffusion due to obstruction effects caused by excluded volume. These findings give additional support to the notion that micelle-like aggregates are not formed in the samples. Furthermore, they suggest that there is no preferential binding of either of the solvent components to the lipid, since the diffusion coefficients for both solvents are similar within one sample. It is difficult to tell whether some types of smaller aggregates are formed in the samples and to get an idea of their character. Considering the size of the individual lipid molecules (as stated above, their extended length is ~3 nm), it is possible that there is no significant association of the lipid molecules and that they are individually dissolved in solution. It should be noted that the effective size of individually dissolved monomers is typically more strongly affected by solvation than that of molecules in aggregates, i.e., the apparent volume per lipid molecule may be notably larger for an individually dissolved molecule than for one residing in an aggregate. It is thus not possible to unambiguously separate solvated monomers from aggregates made up of a small number of lipid molecules. The NMR diffusion data do, in accordance with the DLS data, suggest that there is a stronger tendency for aggregation in the samples with DOPE than in those with DMPC.
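The two relations used above translate directly into code, as the sketch below illustrates for converting a measured lipid diffusion coefficient into an apparent hydrodynamic radius (Equation (1)) and for the fast-exchange average (Equation (2)); the numerical inputs are illustrative only.

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_radius_nm(d_m2_s: float, eta_pa_s: float,
                           temp_k: float = 298.15) -> float:
    """Apparent hydrodynamic radius from Stokes-Einstein,
    R_H = kB*T / (6*pi*eta*D), returned in nm."""
    return 1e9 * KB * temp_k / (6 * math.pi * eta_pa_s * d_m2_s)

def observed_diffusion(p_monomer: float, d_monomer: float,
                       d_aggregate: float) -> float:
    """Fast-exchange population average, Equation (2)."""
    return p_monomer * d_monomer + (1 - p_monomer) * d_aggregate

# Illustrative: D_lipid = 1e-10 m2/s in a 2.9 mPa*s formulation at 25 C
print(f"R_H ~ {hydrodynamic_radius_nm(1e-10, 2.9e-3):.2f} nm")
```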
Characterization of the Film Formed after Evaporation by Atomic Force Microscopy (AFM)
The film obtained after evaporation of formulation AKV006 (DMPC) showed a characteristic pattern of rounded shapes, such as flat patches or islands, protruding from the underlying substrate when imaged with AFM. These rounded shapes had a step height of 4-5 nm, or sometimes multiples of that height, indicating that they are islands of bilayers and that, occasionally, multiple bilayers were formed on the surface (Figure 2). The structures appeared very flat (as seen in the topography signal) and smooth (as seen in the amplitude signal, which is more sensitive to edges and smaller features). Judging from the topography and amplitude signals, the roughness appears to be very similar between the shapes and on top of them. On the other hand, the phase signal, which is sensitive to material properties such as stiffness and elasticity, clearly shows a contrast between the two locations (Figure 2c), thus indicating that the rounded shapes seen are lipid bilayers deposited on top of a clean silica substrate. This is even more obvious at the few locations where an additional smaller round-shaped lipid bilayer is deposited on top of a larger structure, since no contrast is evident when moving between the two bilayer levels, whereas the phase signal changes significantly once the bottom surface is reached (Figure 2). It can further be stated that no contrast could be seen in the phase signal when AFM scans were made at locations where a thicker coating was deposited on the substrate (Figure 3c) (further down on the substrate, where more material had accumulated). AFM images taken at the "thicker" end of the surface also show much thicker stacks of lipid bilayers, as seen in Figure 3 below, although even these thick assemblies clearly indicate a layered structure where the top layers are sometimes seen to be incomplete (see green profile line in Figure 3).
The AKV007 (DOPE) sample, on the other hand, looks completely different in the AFM images, where the DOPE lipids appear to form elongated worm-like structures when dried on a hydrophilic substrate, as seen in Figure 4. This is even more pronounced when AFM scans are made at locations with thicker coatings, where long worm-like fibres are seen ( Figure 5). There is no evidence of bilayer formation (as expected); instead, elongated structures possibly connected to the reversed hexagonal phase normally formed by DOPE [2] are seen. These results thus confirm that phosphatidylcholine rich lipid materials are more suitable to be used in AKVANO than materials with a high content of phosphatidylethanolamine.
Antibacterial Activity
For test formulations to conform to the standard method (EN 1500:2013), the mean log reduction factor obtained should not be inferior to that achieved by the specified reference product. The acceptance criteria laid out by European Standard EN 1500:2013 were fulfilled by the AKVANO formulations: no individual log reduction was less than 3.00, the means of the log prevalues for the AKVANO formulations (7.12) and for the reference product (7.07) were greater than 5, and the absolute difference of the mean differences was 0.12 (hence less than 2.00).
The performance of the AKVANO formulations in the test procedure proved to be equivalent to that of the reference product (RP). AKVANO formulations AKV012 and AKV013 showed log reductions of 3.9 and 4.3, respectively, while the log reduction of the RP was 3.8 and 3.6 on the respective testing occasions (Figure 6). The relatively higher bactericidal effect of AKV013 could be attributed to the citric acid content of the formulation. Accordingly, both versions of the AKVANO skin disinfection spray, tested at 100% concentration, applied for a total rubbing time of 60 s (2 × 30 s) and using a total quantity of 6 mL (2 × 3 mL dose) of product, conform to the requirements of EN 1500:2013.
An AKVANO foot spray formulation, AKV014, was also tested according to EN 13727+A1:2014-02 and proved to have met the efficacy requirements in reduction of viable counts (five log units) against Pseudomonas aeruginosa, Staphylococcus aureus, Enterococcus hirae, and Escherichia coli at 80% (v/v) and at 97% (v/v).
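For illustration only, the non-inferiority comparison described above can be expressed as a minimal sketch in code; the helper names are ours, the log reductions are the values reported above, and the full EN1500:2013 procedure prescribes a paired, one-sided statistical test rather than this plain mean comparison.

import math
from statistics import mean

def log_reduction(pre_cfu, post_cfu):
    # Log10 reduction factor from pre- and post-treatment viable counts.
    return math.log10(pre_cfu) - math.log10(post_cfu)

def en1500_noninferior(test_lr, ref_lr):
    # Simplified acceptance check: the mean log reduction of the test
    # product must not be inferior to that of the reference product.
    return mean(test_lr) >= mean(ref_lr)

# Log reductions reported above for the two testing occasions:
akvano = [3.9, 4.3]      # AKV012, AKV013
reference = [3.8, 3.6]   # reference product on the respective occasions
print(en1500_noninferior(akvano, reference))  # True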
Antifungal Activity
According to the test procedure EN 13624, the product should demonstrate at least a four-decimal log reduction to pass the acceptance criteria. The AKVANO formulation AKV014 was tested at concentrations of 10%, 80%, and 97% (v/v), and the reduction factor of viable counts (R) showed the formulation to be active (reduction > 4.48 log units) against Candida albicans ATCC 10231 at both the 80% (v/v) and 97% (v/v) concentrations. For antiviral activity, the product under test shall demonstrate at least a four-decimal log reduction in virus titre when tested in accordance with EN 14476+A1. Two AKVANO formulations demonstrated antiviral activity against the non-enveloped DNA adenovirus, the non-enveloped RNA poliovirus, and the non-enveloped RNA murine norovirus. According to the EN 14476 standard, products that have antiviral activity against these three virus strains are considered to be active against all other viruses. Considering the high ethanol concentration in the tested AKVANO formulations, the results are in line with earlier reports [19], although the additional benefit of citric acid is not obvious in the present study.
In Vitro Skin Irritation Test
Despite their potent antimicrobial activity, AKVANO formulations are perceived as mild and non-irritating. To confirm this, two skin products based on AKVANO and intended for skin care were tested according to the In Vitro EpiDerm™ Skin Irritation Test. According to the EU and GHS classification (R38/Category 2), an irritant is predicted if the mean relative tissue viability of three individual tissues exposed to the test substance is reduced below 50% of the mean viability of the negative controls. According to the results obtained from this test (Table 6), the viability of the reconstructed human epidermal model was >50%. This clearly shows that the AKVANO formulations (intended for use as skin disinfectants) are classified as Non-Irritant (NI). Table 6. Viability of the reconstructed human epidermal model after exposure to two AKVANO formulations and a positive control, relative to the negative control.
Formulation | Relative Viability (%) | SD of Viability
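As an illustrative sketch (the triplicate viability values below are hypothetical, not taken from Table 6), the EU/GHS prediction rule described above reduces to a simple threshold on mean relative viability:

def classify_irritation(tissue_viabilities_pct, threshold=50.0):
    # EU and GHS (R38/Category 2) prediction rule: an irritant is
    # predicted if the mean relative viability of three tissues falls
    # below 50% of the negative-control mean.
    mean_viability = sum(tissue_viabilities_pct) / len(tissue_viabilities_pct)
    return "Irritant (Category 2)" if mean_viability < threshold else "Non-Irritant (NI)"

# Hypothetical triplicate viabilities (% of negative control):
print(classify_irritation([92.4, 88.1, 95.0]))  # Non-Irritant (NI)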
In Vitro Eye Irritation Test
AKV011 is a prototype spray formulation intended to be used for treatment of plaque psoriasis, including affected areas on the scalp. Since spraying on the scalp implies a risk of exposing the eyes, the irritation potential of the formulation was tested.
The measured absorption values (blank-subtracted) for the duplicate aliquots of each tissue included in the test were used to calculate viabilities for each tissue and mean viabilities for the test item and the positive and negative controls, together with the classification of the evaluated formulation, AKV011 (Figure 8). If the viability is reduced to <50% of the negative control, the product is considered to have an irritating potential. Accordingly, AKV011 is considered not to have a potential for ocular irritation.
There was interference with the MTT testing, as determined by interference pre-testing of the test substance, so freeze-killed control tissues were used for AKV011 and the negative control. Since the optical density values of the freeze-killed control tissues for the AKV011 sample were the same as for the negative control, no correction was needed.
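The viability calculation and the killed-control correction described above can be sketched as follows; this is a simplified illustration with hypothetical optical density values, and the function names are ours rather than part of the test protocol.

def relative_viability_pct(od_test_duplicates, od_negative_duplicates):
    # Mean blank-subtracted OD of duplicate aliquots per tissue,
    # expressed as a percentage of the negative-control mean.
    mean_test = sum(od_test_duplicates) / len(od_test_duplicates)
    mean_neg = sum(od_negative_duplicates) / len(od_negative_duplicates)
    return 100.0 * mean_test / mean_neg

def killed_control_correction(od_raw, od_killed_test, od_killed_neg):
    # Correction for direct MTT reduction by the test substance: subtract
    # any extra signal seen in freeze-killed tissues exposed to the test
    # item. For AKV011 this difference was zero, so no correction applied.
    return od_raw - max(od_killed_test - od_killed_neg, 0.0)

# Hypothetical blank-subtracted OD values:
od_akv011 = [killed_control_correction(od, 0.08, 0.08) for od in (1.45, 1.52)]
od_negative = [1.60, 1.58]
print(round(relative_viability_pct(od_akv011, od_negative), 1))  # ~93.4, i.e. non-irritating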
In Vitro Permeation Experiments
Several AKVANO formulations containing the active pharmaceutical ingredients ketoprofen, diclofenac diethylamine, and diclofenac sodium were tested for permeation through Strat-M artificial membranes and compared to commercially available medicinal products.
The flux J (µg/h) was calculated according to Equation (3):

J_i = (C_i × V_i) / t_i    (3)

where i is the fraction number, C_i the concentration in µg/mL, V_i the volume of the fraction (mL), and t_i the time in hours during which the fraction was collected. The cumulative permeation Q was calculated according to Equation (4):

Q_n = [Σ_{i=1..n} (C_i × V_i)] / m_nom    (4)

where Q_n is the accumulated proportion of permeated active substance and m_nom is the nominal amount of substance in µg applied to the membrane at the start of the experiment. Three different types of AKVANO vehicles were used in the study. AKV009a and AKV010a contained only phospholipids, while AKV008, AKV009b, and AKV010b also contained intermediate levels of the single-chain lipids MCM and IPM. AKV009c and AKV010c contained high concentrations of MCM and IPM, whereas another single-chain lipid, oleic acid, was used in AKV009d at an intermediate level.
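As a minimal sketch under the definitions above (the fraction data are hypothetical and the function names ours), Equations (3) and (4) translate directly into code:

def flux_per_fraction(conc_ug_per_ml, vol_ml, dt_h):
    # Equation (3): flux J_i (ug/h) for one collected fraction.
    return conc_ug_per_ml * vol_ml / dt_h

def cumulative_permeation(fractions, m_nom_ug):
    # Equation (4): accumulated proportion Q_n of permeated substance.
    # `fractions` is a list of (conc_ug_per_ml, vol_ml) tuples and
    # m_nom_ug the nominal amount applied to the membrane.
    permeated_ug = sum(c * v for c, v in fractions)
    return permeated_ug / m_nom_ug

# Hypothetical example: three 1-hour fractions from a diffusion cell
fractions = [(12.0, 2.0), (18.5, 2.0), (22.0, 2.0)]  # (ug/mL, mL)
print([flux_per_fraction(c, v, 1.0) for c, v in fractions])  # [24.0, 37.0, 44.0]
print(cumulative_permeation(fractions, m_nom_ug=500.0))      # 0.21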
In the first set of experiments (Figure 9), a formulation of ketoprofen in AKVANO, AKV008, was compared to Orudis ® gel (2.5% ketoprofen, Sanofi AB, Stockholm, Sweden). The experiments demonstrated a much faster permeation profile for ketoprofen in AKV008 than for Orudis gel. Experiments also showed that a significant part of the initial content of ketoprofen in Orudis gel was retained on the membrane (47%), whereas for the AKV008 formulation, the retained amount was negligible (2.8%).
In a subsequent experiment, the permeation of diclofenac diethylamine in AKV009a-c was compared with Voltaren® gel (2.3% diclofenac diethylamine, GlaxoSmithKline Consumer Healthcare ApS, Hovedstaden, Denmark). The results show that the AKV009a formulation gives a comparatively slow release of diclofenac diethylamine, whereas the AKV009b and AKV009c formulations, which contain increasing amounts of MCM and IPM, give faster permeation (Figure 10). AKV009c shows an even faster permeation than Voltaren gel, though the difference is not statistically significant (Figure 10). For all four formulations, a portion of the initially applied diclofenac diethylamine was retained on the membrane (26% for AKV009a, 36% for AKV009b, 35% for AKV009c, and 26% for Voltaren gel), but the differences between formulations were not statistically significant. In another set of experiments, formulations of diclofenac sodium in AKVANO vehicles AKV010a-d were tested. The trend is similar to that for diclofenac diethylamine, though the permeation rate was generally slower (Figure 11). The amount retained on the membrane was higher for AKV010a (30%) than for AKV010b (14%), AKV010c (17%), and AKV010d (15%). The results from the experiments with diclofenac diethylamine and diclofenac sodium consistently show higher permeation through the Strat-M® membrane with increasing concentration of the single-chain lipids MCM, IPM, and oleic acid in the formulation. To sum up, the in vitro permeation data show that the AKVANO formulations can be designed either to enhance or to reduce the penetration of an incorporated active ingredient, simply by altering the lipid composition.
Conclusions
The presented novel drug delivery system for topical use, AKVANO, has been shown to possess advantageous features for formulation of pharmaceutical products as well as products for consumer health care and animal care. The properties can be tuned by changing the proportion between phospholipids and other lipids, such as single chain lipids. The volatile solvent system, based on ethanol or other short-chain alcohols, serves as an efficient solvent for lipids but also for a great number of active ingredients, and the phospholipids can also act as a solubilizer.
Investigations of the in vitro characteristics of AKVANO formulations, in terms of viscosity, aggregate size, diffusion coefficients, and physicochemical behaviour upon evaporation, show that the formulations are non-viscous, with virtually no or only very minute aggregates formed. When formulations based on phosphatidylcholine are applied to the skin, e.g., by spraying, a thin film consisting of lipid bilayer structures is formed. AKVANO formulations also meet the criteria for antibacterial, antifungal, and antiviral effects and, at the same time, can be classified as non-irritant to the skin and eye. The in vitro skin permeation experiments on artificial skin-mimicking membranes show that a relatively slow permeation of the active ingredient can be obtained if only phospholipids are used. With increasing concentration of single-chain lipids, such as medium chain monoglycerides and isopropyl myristate, the permeation can be increased significantly.
This first article about AKVANO formulations has thus presented the fundamental properties of the novel topical delivery system. With an understanding of the opportunities and limitations associated with AKVANO, it is possible to develop product prototypes with certain desired characteristics. Further development of pharmaceutical and consumer health products through AKVANO technology has led to additional non-clinical and clinical data which will be reported in future articles. | 2022-04-07T15:19:53.073Z | 2022-04-01T00:00:00.000 | {
"year": 2022,
"sha1": "e568d72d83e53b150bc3d7c5af8d9f42c07024d5",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1999-4923/14/4/794/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "87586ff4b7839d156db41617748172ec4b9bb9a3",
"s2fieldsofstudy": [
"Medicine",
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
219077008 | pes2o/s2orc | v3-fos-license | Relation between Concentrations of Lead, Cadmium and Mercury in Cord Blood and Prematurity in the Sidi Bel Abbes Region (West of Algeria)
Background: Exposure to heavy metals such as lead, cadmium and mercury during pregnancy carries a great risk to the mother as well as the fetus. Methods: Lead, cadmium and mercury were measured in umbilical cord blood samples from three groups of women (30 for lead, 30 for cadmium and 10 for mercury) at a maternity unit in the Sidi Bel Abbes region of Algeria between 2016 and 2017. The objective of this study was to measure the concentrations of lead (Pb), mercury (Hg) and cadmium (Cd) in umbilical cord blood and to evaluate the relationship between these levels and prematurity. Lead, cadmium and mercury levels were measured by atomic absorption spectrometry. Results: The study showed obvious variations in maternal characteristics. The results revealed several factors predisposing to prematurity. The mean concentrations of cord blood lead, cadmium and mercury were 18.97 μg/L, 0.26 μg/L, and 6.20 nmol/L, respectively. There was a highly significant direct correlation between cord lead concentrations and gestational age (r=0.43; P = 0.017), and we found that gestational age and birth weight inversely correlated with cord mercury concentration (r=0.44 and r=0.57, respectively). No correlation was observed between cord cadmium concentrations and gestational age. Conclusion: This study has shown that pregnant women in this region were exposed to high levels of heavy metals, which calls for intervention.
INTRODUCTION:
Pregnant women and their fetuses are susceptible to the effects of exposure to environmental toxicants, including lead, mercury and cadmium. Metals are ubiquitous in the environment, and exposure occurs through ingestion of food, water, soil, or dust; inhalation from air; and direct contact with consumer products 1. Exposure to heavy metals during pregnancy carries a great risk to the developing fetus 2. Metals are potential risk factors for small for gestational age (SGA) births, and are hypothesized to induce growth restriction through oxidative stress mediated pathways 3. Cigarette smoking is a source of cadmium exposure 9. Scientists suggest that cadmium may damage the placenta and reduce newborn birth weight 4,5. Mercury toxicity may cause learning disabilities and affects the reproductive system, producing effects such as infertility, miscarriage and prematurity 6,7. The aim of our study was to measure, in umbilical cord blood at delivery, the concentrations of lead (Pb), mercury (Hg) and cadmium (Cd), and to evaluate the relationship between these levels and prematurity.
PATIENTS AND METHODS:
A prospective study was conducted in a public maternity hospital in the Sidi Bel Abbes region (west of Algeria) over a period of one year, from December 2016 to October 2017. The ethical committee of our department approved the study. After signing a written informed consent, the patients were recruited to the study.
Patients with a gestational age over 36 weeks of amenorrhea were excluded from the study. Eventually, a total of 70 mother-newborn pairs were included in the study. All the patients completed questionnaires including information about age, ethnic origin, socio-economic level, level of education, BMI, history of prematurity and abortion, and smoking habits.
Five millilitres of umbilical cord blood were collected immediately after delivery. The samples were chilled at +4°C until analysis. We evaluated the impact of the different heavy metals analyzed on birth weight and gestational age using the Pearson correlation. Results are reported in tables and curves.
These statistical tests were considered significant if p < 0.05.
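As an illustrative sketch of the analysis described above (the data values below are hypothetical, not the study's measurements), the Pearson correlation with a significance threshold of p < 0.05 can be computed as:

from scipy.stats import pearsonr

# Hypothetical cord-blood lead levels (ug/L) and gestational ages (weeks)
lead_ug_l = [12.0, 25.3, 18.1, 30.2, 9.8, 22.5]
gest_age_weeks = [33.0, 35.5, 34.0, 36.0, 32.0, 35.0]

r, p_value = pearsonr(lead_ug_l, gest_age_weeks)
print(f"r = {r:.2f}, p = {p_value:.3f}")
print("significant" if p_value < 0.05 else "not significant")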
RESULTS:
A total of 70 deliveries were reported during the study period; lead and cadmium were each measured in 30 subjects and mercury in 10 subjects. The study showed obvious variations in maternal characteristics, socioeconomic status and obstetric/gynecological history. We defined three groups: a lead dosage group (30 subjects), a cadmium dosage group (30 subjects), and a mercury dosage group (10 subjects).
Relationship between concentration of metals and maternal characteristics
The average concentration of each metal (Pb, Cd and Hg) in cord blood was measured and correlated with maternal characteristics (Table 2).
Lead concentrations and maternal characteristics
The difference in lead concentrations between the different maternal age groups was statistically significant (P<0.001); the highest lead level (20.57±11.01) was found in the category over 35 years old (Table 2). A single subject under 20 years of age had a high lead level (73.3 µg/L); as an isolated observation, this result has no statistical significance. According to the statistical analysis, patients with low socioeconomic status had the highest lead levels (28.14±22.05) (Table 2). No other statistically significant relationship could be detected between lead and the rest of the maternal characteristics studied (Table 2).
Cadmium concentrations and maternal characteristics
Regarding the relationship between cadmium concentrations and maternal characteristics, our results did not reveal any statistically significant relation (Table 2).
Mercury concentrations and maternal characteristics
Our results showed a significant relationship between mercury concentrations and both birth weight and history of abortion (P = 0.012). No other statistically significant relationship could be detected between mercury and the rest of the maternal characteristics studied (Table 2).
Correlation between gestational age, birth weight and cord concentrations of lead, cadmium and mercury:
As shown in Figure 1, a highly significant direct correlation was found between cord lead concentrations and gestational age (r=0.55; P = 0.002). Furthermore, a clear correlation was found between the concentration of cadmium in the umbilical cord and birth weight (r= 0.25) (Figure 2, Table 3). Finally, we found that gestational age and birth weight inversely correlated with cord mercury concentration (r=0.44 and r=0.57, respectively) (Figure 3).
DISCUSSION
The mean concentration of lead in cord blood found in this study was 1.89 µg/dL. A report from South Africa gave a cord blood median lead concentration of 2.39 μg/dL 13. A Canadian study found a cord blood arithmetic mean lead concentration of 2.8 μg/dL, and another study in Saudi Arabia found 2.5 μg/dL 14. The mean cord blood lead was, however, higher than the values reported in Brazil (1.194 μg/dL) 15, Belgium (1.47 μg/dL) and Turkey (Eskisehir; 1.65 μg/dL) 16,17,18.
In our study, a highly significant direct correlation was found between cord lead concentrations and gestational age (P = 0.017). Multiple studies have found an association with SGA births 7,19,20,27.
In our study, the mean concentration of cadmium in cord blood was 0.26 μg/L, a value lower than that reported in Saudi Arabia (GM = 0.78 μg/L) 22,23 and consistent with the values reported in studies conducted in other areas, such as China (GM = 0.20 μg/L) 24 and Nepal (GM = 0.29 μg/L) 25.
Our findings did not exceed the permissible level determined by OSHA (5 μg/L) 26,27,28. Pregnancy is a critical period in terms of cadmium toxicity, with several adverse outcomes such as preeclampsia, low birth weight (LBW) and prematurity. Cadmium accumulates in the placenta, interfering with the transport of micronutrients, and may play a key role in the occurrence of intrauterine growth restriction 29,30. In this study, we found a clear correlation between the concentration of cadmium in the umbilical cord and birth weight. In the literature, two studies found no effect of cadmium on fetal growth outcomes 30,31, while others found relationships with birth weight or length 32,33,34,35.
The average mercury content in the cord was 2.24 μg/L. This value was lower than the Environmental Protection Agency (EPA) reference dose of 5.8 μg/L 36. In our study, cord mercury levels were higher than those found in Canada (Montreal; 0.69 μg/L), Poland (0.88 μg/L), Slovakia (0.8 μg/L), South Africa (1.2 μg/L), Sweden (organic; 1.4 μg/L and inorganic; 0.34 μg/L) and Turkey (0.5 μg/L) 37.
A statistically significant relationship was also observed between mercury exposure and abortion history in our study. The teratogenic and foetotoxic roles of mercury have been established. According to the WHO, it plays a key role in the occurrence of spontaneous abortions 38,39,40. However, later analysis of a more complete dataset disproved this association 42. Concerning mercury levels and gestational age, four studies that used mercury measurements in cord blood and maternal blood, with higher exposure levels and larger samples than our study, found an association between mercury and small for gestational age births 20,41.
CONCLUSION
The results of the present study provide relatively comprehensive information concerning the Pb, Cd and Hg levels in the cord of preterm newborns from west of Algeria. This study has shown that pregnant women in this region of the country were exposed to similar levels, compared to pregnant women in industrialized countries, or even higher levels for lead. Further research incorporating larger samples is needed to investigate the effects of pregnant women's exposure to heavy metals -particularly Pb, Cd, Hg and its impact on small gestational age. The health effects of prenatal exposure to heavy metals as well as to other pollutants to which human population is exposed should alert countries governments to endorse stricter standards and tighten legislation to protect future generations from diseases that may develop following prenatal exposures | 2020-04-30T09:11:08.303Z | 2020-04-15T00:00:00.000 | {
"year": 2020,
"sha1": "28afee219fb75a079cff2718e65a462c3d6b9d74",
"oa_license": "CCBYNC",
"oa_url": "http://jddtonline.info/index.php/jddt/article/download/4020/3073",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "fa9e0645a1c2652eb13642c08345e96e1c6c0505",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology"
]
} |
251106978 | pes2o/s2orc | v3-fos-license | Aging and the COVID-19 pandemic: The inter-related roles of biology, physical wellbeing, social norms and global health systems
The coronavirus disease 2019 (COVID-19) pandemic has had a devastating and disproportionate impact on the elderly population. As the virus has swept through the world, already vulnerable elderly populations worldwide have faced a far greater burden of deaths and severe disease, crippling isolation, widespread societal stigma, and wide-ranging practical difficulties in maintaining access to basic health care and social services – all of which have had significant detrimental effects on their mental and physical wellbeing. In this paper, we present an overview of aging and COVID-19 from the interrelated perspectives of underlying biological mechanisms, physical manifestations, societal aspects, and health services related to the excess risk observed among the elderly population. We conclude that to tackle future pandemics in an efficient manner, it is essential to reform national health systems and response strategies from an age perspective. As the global population continues to age, elderly-focused health services should be integrated into the global health systems and global strategies, especially in low- and middle-income countries with historically underfunded public health infrastructure and insufficient gerontological care.
Introduction
Age plays a crucial role in affecting population health, and the coronavirus disease 2019 (COVID-19) pandemic was no exception. The pandemic had a particularly devastating impact on elderly populations, making them the most vulnerable population group, with the worst disease outcome estimates of any age group. As the virus swept through the world, elderly populations bore an enormous burden of hospitalizations, severe health complications, and excess deaths [1]. Currently, COVID-19 incidence rates remain fairly similar across all adult age groups worldwide. However, compared to the 18-29 year reference age group, hospitalization rates are 5 and 10 times higher in the 65-74 year and 85+ year age groups, respectively. In similar comparisons of case fatality rates, people in the 65-74 year and 85+ year age groups are 65 and 340 times more likely to die due to COVID-19, respectively [1], highlighting the disproportionately high impact on elderly populations.
These enormous increases in disease severity, hospitalizations, and an astronomical rise in the mortality rates among elderly populations, observed during the COVID-19 pandemic globally, warrant urgent attention of the scientific community to: 1) better characterize the underlying biological, social, and structural pathways that may have caused this respiratory pathogen to enact such extraordinary adverse clinical impacts on the elderly, and 2) use these insights in informing preventative and healthcare strategies. These assessments are also critical given the increasing trend of life expectancy globally: over 727.6 million people aged over 65 years and almost 146 million over 80 years in 2020, with these numbers expected to double by 2050, and gradually reaching 2.5 billion by 2100 [2]. Such unprecedented and accelerated growth in human population is likely to have tremendous detrimental impacts on the global disease burden and global healthcare systems (some of these phenomena have already been observed during the current pandemic).
In this respect, we present a broad overview of COVID-19 and aging from the interrelated perspectives of underlying biological mechanisms, physical manifestations, societal aspects, and health services with a particular focus on the elderly population.
Methods
We conducted a targeted literature review, which is meant to be an informative, rather than all-encompassing, rapid review of the literature on a given topic. Our approach therefore involved an in-depth, though non-systematic, review of the literature, followed by an informed selection of relevant, current and high-quality articles on the topics of interest to be cited.
First, we searched the PubMed electronic database to systematically identify relevant scientific publications (of interventional and observational studies) and systematic reviews that had investigated or addressed COVID-19 health outcomes in the specific context of elderly populations globally. We applied no date or language restrictions, and used MeSH terms and free-text words (where appropriate) related to 'adult', 'middle aged', 'aged', 'coronavirus disease', 'post-acute COVID-19 syndrome', 'immune system', 'sex', 'social isolation', 'social stigma', 'mental health', and 'aging'. Second, to complement our review of the scientific literature, we conducted a supplementary search, based on the same search strategy, using a benchmark search engine (Google Search) to identify relevant media reports and regional guidelines on the topic. We have summarized data from studies reporting potential biologic, social and healthcare determinants of post-acute COVID syndrome (PACS) in the elderly in Table 1 and Fig. 1.
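As an illustrative sketch only — the exact query string below is our reconstruction from the terms listed above, not the authors' protocol — such a PubMed search could be scripted with Biopython's Entrez utilities:

from Bio import Entrez

Entrez.email = "reviewer@example.org"  # required by NCBI; placeholder address

# Reconstructed query combining the MeSH terms / free-text words listed above
query = (
    '("aged"[MeSH Terms] OR "middle aged"[MeSH Terms]) AND '
    '("coronavirus disease" OR "post-acute COVID-19 syndrome") AND '
    '("social isolation" OR "social stigma" OR "mental health" OR "aging")'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=100)
record = Entrez.read(handle)
handle.close()
print(record["Count"], record["IdList"][:5])  # hit count and first PMIDs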
Possible mechanistic pathways for excess risk among the elderly
Age played a significant role in moderate and severe cases of COVID-19, a complex pulmonary distress syndrome that can evolve into multi-organ systemic dysfunction [3], and disproportionally high morbidity and mortality are documented among people aged 65+ years worldwide. The greater burden of the post-COVID condition [4] among survivors was likewise seen in the elderly population.
From a biological perspective, the aging process is marked by a shift in how the immune system works, which partly explains the increased morbidity and mortality of the elderly [5]. Immune senescence in older patients with COVID-19 can increase the risk of severe disease through three main mechanisms. First, an increased number of senescent cells at infection leads to a sequence of senescent secretory events [6]. Second, older cells and tissues have a decreased damage repair capacity [7]. These aspects became crucial within the COVID-19 pandemic, since SARS-CoV-2 is a virus that triggers an exacerbated innate immune response and depends on the organism's competence in shifting from an intrinsic to an adaptive response to be effectively neutralized [8].
Third, there is the process of so-called "inflammaging", in which metabolic processes that are well controlled in young individuals progressively and chronically generate detrimental cytokines and reactive oxygen species (ROS), upregulating the innate immune response [9] and leading to an attenuated interferon response. In young individuals, the innate immune system withstands the stress generated by oxidative metabolism, but as aging progresses, the body's ability to sustain this eustress level declines.
Oxidative metabolism then starts to chronically trigger inflammatory and innate immune responses, activating the IL-1β and NF-κB inflammatory pathways, a process described as inflammaging [10]. COVID-19 promotes age-induced immune cell polarization and gene expression related to inflammation and cellular senescence [11]. Overall, aging appears to significantly influence the biological mechanisms through which SARS-CoV-2 affects immune regulation.
Long term consequences of COVID-19 among survivors
There is a scarcity of studies that focus on the potential burden of the post-acute COVID syndrome (PACS) in populations over 65 years of age. In a systematic review that included 45 studies reporting the frequency and variety of persistent symptoms after COVID-19 infection [12], only two studies reported a median age above 65 years, representing 1.9% of the included population [13,14]. Both studies looked at the severe form of COVID-19 and reported fatigue, breathlessness, and psychological distress as the most prevalent persistent symptoms, with significant impacts on functionality, independence, and cognitive function. Recently, extensive American medical databases have been analyzed, and when adult COVID-19 patients were compared to contemporary, historical, and other viral lower respiratory tract illness controls, matched by age and sociodemographic factors, those infected with SARS-CoV-2 presented an excess risk of 11% for one or more persistent symptoms that required medical attention [13]. Moreover, six months following the infection, an excess risk of chronic respiratory failure, cardiac rhythm disorders, acute coronary syndromes, hypercoagulability, encephalopathy, dementia, memory difficulties, stroke, kidney injury, diabetes, and anemia has been reported (potentially increasing the burden on individuals and on health care, with increased outpatient encounters, examinations and hospitalizations) [4,15]. A prospective analysis from the REACT-2 program, using linkage data from the National Health Service (NHS) in the United Kingdom, showed that being over 65 years old was the most significant and independent contributor to the persistent symptoms identified [16].
Taken together, the effects of cellular senescence on several physiological systems (such as immune system imbalance), coupled with respiratory system impairment, a markedly decreased regenerative capacity of lung cells, increased pro-fibrotic mediators, and increased vascular dysfunction, explain a significant part of the risk of developing severe COVID-19 and could also explain longer-lasting symptoms among the elderly.
The roles of sex differential and gender norms on Covid-19
Age and societal inequities (e.g., in healthcare access and socioeconomic circumstances) among populations may influence COVID-19 outcomes [17], and it is possible that gender norms further widen these disparities among the elderly. Initial reports indicated that 60% of all COVID-19 patients were men, who were also at higher risk of developing systemic inflammation, multi-organ dysfunction, and cardiac injury, with viral shedding lasting longer than in women on average [18]. Even though increased mortality rates in men could reflect possible differences in sanitary behavior and unequal access to testing across countries [19], severity and mortality rates were far worse among men than women in almost all countries globally [20]. Additionally, there may be mechanistic explanations for the sex differences observed in disease severity and case fatality. For example, while estrogen is known to trigger an immune response, testosterone, by contrast, shows immunosuppressive functions by reducing cytokine production, with higher levels of innate immune cytokines associated with acute-phase deterioration in female patients [21]. Furthermore, angiotensin-converting enzyme 2 (ACE2), a gene coded on the X chromosome, also interferes with interferon regulation by estrogens in different tissues [22,23]. Finally, SARS-CoV-2 entry into cells has been shown to be enhanced by cellular transmembrane serine protease 2 (TMPRSS2), which primes the spike protein of the virus and is regulated by androgen receptor signaling [19]. These biological explanations also align well with previous observations that women are less susceptible to severe forms of infection than men due to their somewhat superior immune responses [23].
In addition, studies looking at the post-acute sequelae of COVID-19 have reported significant sex differences in the risk of respiratory failure and acute kidney injury [4]. It is unclear to what extent the historical gaps in access to health care and the generally higher prevalence of risk factors in postmenopausal women will impact long-term COVID-19 sequelae. One recent study, based on a large prospective cohort of hospitalized COVID-19 patients in Spain [24], showed that female participants reported more post-COVID symptoms, including anxiety, depression, or poor sleep quality, eight months after hospital discharge than males. However, more studies would be needed to replicate these findings, since systematic differences in self-reported symptoms can be distorted by between-individual differences (such as gender norms).
Social stigma, social isolation, and mental health
The suffering of older adults began at the start of the pandemic, worsened with the intensification of non-pharmacological social distancing measures such as strict generalized lockdowns, and persisted even as the pandemic began to recede. The elderly became victims of the infodemic and an overdose of negative news. A mixed-methods study from Turkey revealed that the elderly spent on average 2.74 h/day following news regarding COVID-19 on TV or social media, which increased the odds of generalized anxiety disorder by a factor of 1.188 [25]. The news that they were at higher risk of getting infected was interpreted differently by their family members; some thought the elderly were the source of infection or a potential conduit for the virus to enter the household. Such misperceptions caused severe stigma against the elderly worldwide.
For instance, in Bangladesh, some family members deserted their elderly relatives in the jungle [26], whereas the elderly suffered domestic violence in parts of South America [27]. For a long time after the onset of the pandemic, however, these issues remained largely ignored in the mainstream academic discussion of the pandemic's impact. Additionally, as global health systems were almost universally unprepared to deal with widespread social stigma [28] or the consequent violence against the elderly, there was hardly any timely remedy to these problems. When generalized lockdowns started, many elderly people suddenly found themselves in far more isolated circumstances than they were already experiencing. Overwhelming evidence soon emerged from numerous studies in middle- and high-income countries worldwide indicating a deterioration of mental health, physical health, quality of life, and general wellbeing among the elderly amidst strict lockdowns [25,29]. The inability of immediate family members to visit led to alienation and psychological breakdown [25,26,29], and in some cases, suicide [30,31].
Physical wellbeing
The isolation that ensued following generalized lockdowns also impacted the physical wellbeing of the elderly. A systematic review of 14 cross-sectional and 11 cohort studies revealed that COVID-19 movement restrictions reduced physical activity through increased sitting time, altered metabolic equivalent of task (MET) levels, decreased step counts, and reduced exercise frequency and duration [32]; these may eventually result in reduced musculoskeletal strength and endurance and cardiorespiratory capacity. Increased sedentary behavior is also associated with high blood pressure and cardiovascular and metabolic diseases [33], among others. Several other studies from the Netherlands [34], France [35], Turkey [25], China [36], and Japan [37] also reported a sharp decline in physical activity among the elderly during COVID-19 confinement. The elderly are prone to developing sarcopenia, cardiometabolic disorders, and other comorbidities [38]. All of these may lead to functional decline, culminating in limitations in daily life and an increased risk of falls [34].
In addition to the adverse health consequences of isolation and physical inactivity, an inability to access essential health care owing to service disruption further worsened the overall health condition of the elderly. Community-bound elderly patients suffering from non-communicable diseases (NCDs) in China, for example, faced difficulty in collecting the medicines essential to control their conditions [39]. Elderly patients from Argentina reported significant scheduling difficulties in accessing routine consultations for chronic illnesses, palliative care, and mental health conditions [40]. Similar reports of service disruptions emerged from Asia [41], the Americas, and Europe [42]. Additionally, as the pandemic progressed, many who survived the disease eventually fell victim to PACS [4], creating further challenges for already struggling global health systems, especially in low- and middle-income country (LMIC) settings.
Conclusion and recommendations
The COVID-19 pandemic has demonstrated that the detrimental impact of a novel virus on elderly populations can go beyond a higher clinical risk of severe disease manifestation and hospitalization. As the world now seeks to return to normalcy, the elderly population remains left behind, especially in resource-limited LMIC settings where healthcare services and social safety nets are historically poor. The pandemic may have significantly worsened pre-existing health disparities among older adults, including in access to essential preventive and curative services, and may further enhance social and economic vulnerabilities. For example, given the "new norm," many activities have moved online, and such a rapid transition has been difficult for the elderly worldwide, who are often unfamiliar with many emerging technologies. Evidently, this lack of familiarity with new technologies has discouraged teleconsultation and fostered a sense of dissatisfaction owing to the virtual nature of clinical consultations. Therefore, besides developing innovative interventions to tackle these issues, further implementation research should be conducted to assess the gaps and challenges in the access, adoption, and sustainability of these measures in elderly populations. Furthermore, to tackle future pandemics efficiently, it is essential to reform national health systems with an age perspective. As the global population continues to age, elderly-focused health services should also be integrated into global health systems and global strategies, especially in low- and middle-income countries with historically underfunded public health infrastructure and insufficient gerontological care.
Contributors
Cristina Baena contributed to conducting the systematic review and manuscript preparation.
Taufique Joarder contributed to conducting the systematic review and manuscript preparation.
Nasar U Ahmed contributed to manuscript preparation. Rajiv Chowdhury contributed to overall conceptualization, supervision, and manuscript preparation.
Funding
No funding from an external source was received for the preparation of this review.
Provenance and peer review
This article was commissioned and was externally peer reviewed.
Declaration of competing interest
The authors declare that they have no competing interest. | 2022-07-28T13:03:50.921Z | 2022-07-01T00:00:00.000 | {
"year": 2022,
"sha1": "eff6c05f3a7c4c68bc39464baef607b88dab1446",
"oa_license": null,
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9328837",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "4d79a73910386ae4a5ccf5d689d94c8a43c22377",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
49411887 | pes2o/s2orc | v3-fos-license | Biological Control Potential of Two Steinernematid Species Against the Date Fruit Stalk Borer (Oryctes elegans Prell, Coleoptera: Scarabaeidae)
Abstract The fruit stalk borer (Oryctes elegans) is an important pest of date palm (Phoenix dactylifera) trees in Saudi Arabia. This study was conducted to determine efficacy of using two species of entomopathogenic nematodes, Steinernema kushidai and Steinernema glaseri, against O. elegans under laboratory and field conditions. Under laboratory conditions, both species of nematodes showed a significant effect on the mortality of O. elegans larvae. Significant variations were observed when insects were exposed to nematodes for variable durations under laboratory conditions. They showed no differences in insect larval mortality when tested either in aqueous suspensions or in Galleria-infected cadavers. Insects exposed to nematode aqueous suspension for 4 d and those treated with Galleria-infected cadavers showed the same rates of mortality, which differed when insects were exposed to nematode-infected cadavers under field conditions. Mean percentages of corrected mortality varied between nematode species and number of infected cadavers. S. kushidai caused significantly higher mortality percentages ± SE (72.17 ± 5.57, 95.83 ± 4.17, 94.43 ± 5.57, and 100%) compared with S. glaseri when the fruit stalk borer, O. elegans, was treated for 6 wk with two, four, six, and eight infected cadavers, respectively.
Several species and strains of EPNs effective against insect pests have been identified (Lewis 2002), and significant improvements have been made in their mass production (Friedman 1990, Grewal and Georgis 1998). Although formulations such as wettable dispersible granules, wettable powders, and infected cadavers have improved the shelf life of EPNs, the short shelf life still remains a major obstacle to their widespread and effective use against insect pests under adverse climatic conditions, except when infected cadavers are used (Grewal 2002, Shapiro-Ilan et al. 2003, Grewal and Peters 2005, Shapiro-Ilan et al. 2010). Susceptibility of EPNs to adverse climatic conditions hinders field applications and reduces the chances of nematode survival during and after application. EPN formulations applied as aqueous suspensions using special sprayers require constant agitation to maintain nematode homogeneity, which can create harmful conditions for the nematodes (Wright et al. 2005). Agitation of the suspension stresses EPNs by exposing them to increased temperatures within spray tanks, thus killing a portion of them (Fife et al. 2005, Bilgrami and Gaugler 2007). The application of nematodes in aqueous suspension also exposes them to ultraviolet radiation and desiccation on exposed foliage and soil surfaces, resulting in decreased field efficacy and persistence of EPNs (Shapiro-Ilan et al. 2015).
In this study, experiments were conducted to test the effects of direct application of EPN-infected cadavers (Galleria mellonella) on the efficacy of two steinernematid species, Steinernema kushidai (EIK7c strain) and Steinernema glaseri (NJ strain), in controlling larvae of the date fruit stalk borer (O. elegans). The two species were tested as biological control agents against O. elegans larvae in date palm orchards. In addition, laboratory and field experiments were conducted to test the virulence of G. mellonella-infected cadavers.
O. elegans Collection
Third instar (last instar) larvae of the date fruit stalk borer were collected from an infested field in Tomor El-Mamlakha, Al-Qasim region, Kingdom of Saudi Arabia. All collected larvae were washed three times with distilled water to remove soil and plant fibers before use in experiments.
Rearing of the Greater Wax Moth (G. mellonella)
Greater wax moth larvae, G. mellonella (L.) (Lepidoptera: Pyralidae), collected from domesticated honey bee hives, were kept in plastic rearing jars, measuring 17 × 17 × 27 cm and containing 250 g of Galleria artificial media. To facilitate egg laying by G. mellonella, rearing jars were lined with frill paper tissue.
The insects were allowed to lay eggs on frill paper tissue measuring 15 × 15 cm. The eggs were transferred gently to other rearing jars containing 250 g of prepared Galleria media. The jars were tightly closed with double-layered muslin cloth to prevent the escape of neonatal larvae and incubated at 28 ± 2°C, a photoperiod of 8:16 (L:D) h, and a relative humidity of 65 ± 5%. The larvae were separated from the other stages upon reaching the final instar (25 d old), used for mass rearing of the EPNs, and stored at 15°C for 2-4 wk. Insects were reared on an artificial diet prepared from wheat bran (260 g), wheat flour (162 g), yeast extract (65 g), glycerol (193 g), and water (158 g; Monastyrskij and Gorbatovskij 1991). The media was freshly prepared, and the stock was frozen at -5°C for 2 wk to prevent secondary contamination.
Nematodes
Two species of EPNs, S. kushidai (EIK7c strain), isolated in Egypt by Atwa (2003), and S. glaseri (NJ strain), obtained from the laboratory of Professor Randy Gaugler, Rutgers University, New Jersey, were tested. Both species were maintained under laboratory conditions using last instar larvae of G. mellonella as the host (Woodring and Kaya 1988). The infected cadavers of G. mellonella were used to obtain two types of nematode formulation: 1) an infected cadaver formulation (capsules) and 2) an aqueous suspension of IJs. To obtain the aqueous suspension, the infected cadavers were placed on White traps (White 1927). The IJs were collected from the White traps, washed three times with distilled water, and stored at 15°C for 2 wk before use.
Infected Cadaver Formulation Production
To obtain at least 3,000 high-quality infected cadavers, 5,000 last instar larvae of G. mellonella were infected with S. kushidai or S. glaseri and the best were chosen for use. Twenty-five healthy G. mellonella larvae were exposed to 1,500 S. glaseri or S. kushidai IJs in each of 140 Petri dishes measuring 150 × 30 mm, lined with filter paper. Infected cadavers were obtained 3-5 d post exposure and used for the laboratory and field experiments. To prevent the infected larvae from sticking together or breaking during transportation and application, the cadavers were coated with baby powder (Johnson's Baby Powder, Johnson & Johnson Consumer Products Company, Skillman, NJ).
To rule out any effect of the talcum powder, one group of 10 infected G. mellonella larvae was treated with talcum powder and a second group was left untreated as a control. Upon maturation, the cadavers were placed on White traps (White 1927) to isolate the IJs, which were counted and compared to rule out any effect of the Johnson's Baby Powder.
Laboratory Experiments
Experiments were conducted under laboratory conditions in Petri dishes measuring 150 × 30 mm, lined with 200 g of sand moistened with 5 ml of distilled water. O. elegans was subjected to infection by both types of nematode formulation, i.e., aqueous suspension under laboratory conditions and infected cadavers under laboratory and field conditions. In the laboratory bioassay, sterilized sand was used to test the efficacy of EPNs against O. elegans, and small parts of date palm frond axils were used to feed the larvae. Sand bioassay methods are easy to set up, are closer to the field situation, and have been adopted as a standard quality control tool for assessing the virulence of EPNs (Grewal 2002). Standard application of EPNs in aqueous suspension was used to determine their virulence against the target insect, whereas delivery of EPNs within their infected host cadavers, a novel method reported to increase nematode dispersal, survival, infectivity, and efficacy (Shapiro-Ilan et al. 2012), is also examined herein.
Nematode Suspension Bioassay
The efficacy of S. kushidai and S. glaseri was tested against O. elegans in separate sets of Petri dishes lined with a layer of sterilized sand. Four concentrations (250, 500, 750, and 1,000 IJs in 5 ml of distilled water) of each nematode species were tested separately against five third instar larvae of O. elegans. The control treatments consisted of distilled water only. The mortality of O. elegans was recorded 24, 48, 72, and 96 h after inoculation with IJs. The experiment was replicated three times, with each replication consisting of three Petri dishes, each containing five larvae.
Infected Cadaver Bioassay
Larval O. elegans were bioassayed with Galleria cadavers containing either S. kushidai or S. glaseri at the rate of one, two, three, or four cadavers per Petri dish. Treatments were made in Petri dishes with the cadavers covered with a layer of sterilized sand moistened with 5 ml of distilled water. Five third instar larvae of O. elegans were used in each of three replicates. Distilled water was used in the control, with the sand moistened as needed. Cumulative mortality was recorded 5, 10, 15, and 20 d after cadaver treatment, and each experiment was replicated three times.
Field Trials
The field experiments were conducted to determine the efficacy of infected nematode cadavers (capsules) against O. elegans. Cadavers were used to provide protection against destructive biotic and abiotic factors, with the released EPNs having greater energy reserves, greater ability to disperse and infect the target insects, and greater longevity in the soil (Shapiro-Ilan et al. 2003). Each nematode species was applied as two, four, six, or eight infected Galleria cadavers per tree to 50 randomly chosen trees in each of three plots. The infected cadavers were applied to the soil close to tree trunks and small shoots, 5-10 cm deep (Fig. 1). Insect mortality was determined after 2, 4, and 6 wk. For this purpose, two trees per treatment were chosen in each of the three replicate plots, for a total of six trees per treatment. Insect mortality was based on a census of live and dead O. elegans larvae in the soil close to each tree. Soil samples were also collected from the treated and control areas and tested for nematode presence using Galleria baiting.
Data Analysis
The data were normalized using an arcsine transformation. The significance of differences between the means was determined using analysis of variance (ANOVA). Comparisons were made using Tukey's multiple range test. The corrected mortality percentage, accounting for the change in control plot mortality in the field application, was calculated using Sun-Shepard's formula (Püntener 1981) as follows:

Corrected mortality (%) = [(% mortality in TP ± % change in CPP) / (100 ± % change in CPP)] × 100

where TP = treated plot, CPP = control plot population, and T = treatment.
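As a minimal sketch of the calculation (the function and variable names are ours, and the sign convention shown is for the common case where the control-plot population declines over the observation period), Sun-Shepard's correction can be implemented as:

def sun_shepard_corrected_mortality(treated_mortality_pct, control_pop_before, control_pop_after):
    # Sun-Shepard corrected mortality (Puntener 1981): adjusts treated-plot
    # mortality for the natural change in the untreated control-plot
    # population over the same period.
    change_pct = 100.0 * (control_pop_before - control_pop_after) / control_pop_before
    return 100.0 * (treated_mortality_pct - change_pct) / (100.0 - change_pct)

# Hypothetical example: 80% mortality in a treated plot while the control
# population declined from 50 to 45 larvae (10% natural change).
print(round(sun_shepard_corrected_mortality(80.0, 50, 45), 2))  # 77.78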
Field Trials
The mean percentage of corrected mortality varied significantly between nematode species and among the numbers of infected cadavers (Table 1). Corrected mortality of O. elegans subjected to different numbers of cadavers infected by the two nematode species at different exposure times under field conditions is shown in Table 1; insect mortality was corrected using Sun-Shepard's formula. The lowest means of corrected mortality ± SE were 0.0 and 5.55 ± 5.55% when two and four infected cadavers of S. kushidai were applied, whereas the highest were 94.43 ± 5.57 and 100.0 ± 0.0% when six and eight infected cadavers of S. kushidai were applied for 6 wk. When O. elegans was exposed to eight cadavers infected by S. kushidai for 6 wk, complete (100%) mortality was recorded, whereas S. glaseri caused 87.5 ± 8.54% mortality under similar conditions (Table 1). Data in Table 1 show that S. kushidai caused significant mortalities (72.17 ± 5.57, 95.83 ± 4.17, 94.43 ± 5.57, and 100.0 ± 0.0%) after 6 wk in plots treated with two, four, six, and eight infected capsules, respectively. Plots treated with two, four, six, and eight infected cadavers containing S. glaseri yielded 66.63 ± 7.45, 90.27 ± 6.25, 90.27 ± 6.25, and 87.5 ± 8.54% mortality of O. elegans, respectively, after 6 wk. Fig. 3A presents means of corrected mortality from the factorial analysis of nematode species and number of infected cadavers applied in a completely randomized experiment (S. glaseri: F = 7.56, df = 3, P < 0.01; S. kushidai: F = 2.53, df = 3, P < 0.01); the interaction between nematode species and the number of infected cadavers was highly significant. The factorial analysis of corrected mortality of O. elegans between nematode species and duration of insect exposure was also highly significant (S. glaseri: F = 81.4, df = 2, P < 0.01; S. kushidai: F = 140.16, df = 2, P < 0.01; Fig. 3B). Table 2 shows the factorial analysis of corrected mortality of O. elegans for duration of exposure, number of infected cadavers applied, and nematode species. Highly significant variation was recorded among the means of corrected mortality produced by duration of exposure (F = 276.97; df = 2; P < 0.05) and by the number of infected cadavers (F = 19.33; df = 3; P < 0.05), and their interaction was significant (F = 2.94; df = 6; P < 0.05; Table 2). Variation between nematode species (F = 416.39; df = 2; P < 0.05), and the interactions of duration of insect exposure × nematode species (F = 90.64; df = 4; P < 0.05) and number of infected cadavers × nematode species (F = 5.09; df = 6; P < 0.05), were also highly significant (Table 2). The three-way interaction between duration of insect exposure, number of infected cadavers, and nematode species was not significant (F = 1.44; df = 12; P > 0.05).
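The factorial analysis reported above can be reproduced on raw replicate data along the following lines. This Python/statsmodels sketch is illustrative only; the file name and column names (field_mortality.csv, species, cadavers, weeks, mortality) are assumptions, not artifacts of the original study.

```python
# Illustrative sketch of the factorial ANOVA described above (species x
# cadaver number x exposure duration) on arcsine-transformed mortality.
# `df` must hold one row per replicate; names below are assumed.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("field_mortality.csv")                 # hypothetical file
df["asin_mort"] = np.arcsin(np.sqrt(df["mortality"]))   # arcsine transform (proportion 0-1)

model = ols("asin_mort ~ C(species) * C(cadavers) * C(weeks)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))                  # main effects + interactions
```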
Discussion
The two species of EPNs, S. kushidai and S. glaseri, had significant effects on the population of O. elegans. Under laboratory conditions, S. kushidai was more effective than S. glaseri against O. elegans, whether tested as various concentrations of IJs or as infected host cadavers (nematode capsules). The similar results obtained with aqueous nematode suspensions and infected cadavers under laboratory conditions suggest that the formulation made little difference to the final rates of mortality. The relatively delayed effect of nematode cadavers probably resulted from the emergence of nematode IJs at different times and their finding hosts on an irregular basis. The differences in the mean mortality of O. elegans under laboratory conditions may arise from several factors (e.g., nematode species, nematode concentration, and nematode formulation). The habits and behavior of individual nematode species may greatly influence their ability to parasitize white grubs and protect trees (Lewis et al. 2006). The IJs find the host and enter it through its natural apertures (oral cavity, anus, and spiracles) or, in some cases, through the cuticle (Dowds and Peters 2002). Although the EPNs were able to infect the larval stage, there was considerable variability in the number of infected larvae, showing distinct strain dependence and differences between infected cadavers and aqueous suspensions of IJs. In general, EPN application within infected cadavers tends to be more efficacious than application in aqueous suspension, which is why only infected cadavers were used under field conditions.
The use of EPNs to control the fruit stalk borer O. elegans in palm farms of the Kingdom of Saudi Arabia (KSA) has not previously been examined, because data on EPN active ingredients and formulations in KSA are limited. O. elegans lives in cryptic habitats during most of its life cycle, and little is known about its behavior, biology, and ecology in KSA, which makes EPNs promising candidates for its control. Choosing the right species of EPN, in a particular formulation, against a particular pest in a particular environment is very important for successful biological control (Shapiro-Ilan et al. 2002). For this reason, the efficacy of two species of EPNs (S. kushidai and S. glaseri) against O. elegans under laboratory and field conditions was evaluated in this article. This is the first report of EPN infectivity toward O. elegans and the first report of using infected cadavers to control O. elegans under field conditions; no data on the efficacy of EPNs toward the fruit stalk borer have previously been published. Shapiro-Ilan et al. (2003), comparing infected cadavers with application in aqueous suspension in laboratory, greenhouse, and field studies, reported that application of infected cadavers can result in superior nematode dispersal, survival, and infectivity. Other reports have suggested that the efficacy of the cadaver application approach is approximately equal to application in aqueous suspension (Shapiro-Ilan et al. 2003), which agrees with the current study; however, our experiments focused on the efficacy of S. kushidai and S. glaseri and on the different times needed to reach the same levels of insect mortality with infected cadavers (maximum 20 d) compared with aqueous suspension (maximum 4 d).
The current results suggest that the two nematode species (S. kushidai and S. glaseri) have very different levels of infectivity against O. elegans. Overall, this study indicates that S. kushidai and S. glaseri hold considerable promise as biocontrol agents for O. elegans. However, contrasting results were obtained under field conditions, when O. elegans-infested date palm trees were treated with S. kushidai- or S. glaseri-infected cadavers. Such differences might have occurred due to the different durations of nematode exposure, delayed emergence of IJs, stressed conditions (Bilgrami and Gaugler 2007), migration of IJs toward the host, etc. Several factors, such as nematode species, type of nematode formulation, biotic and abiotic factors, and host-searching ability, greatly influence the efficacy of EPNs (Lewis et al. 2006). The life strategies of S. kushidai and S. glaseri might also have contributed to the differences in the rates of mortality of O. elegans: S. glaseri is highly mobile and actively searches for its host in the soil (Lewis et al. 2006), whereas S. kushidai acts as an 'ambusher', searching for a host mainly near the soil surface (Campbell and Gaugler 1997).
Cadaver formation in EPNs is a natural way to protect IJs from exposure to hazardous environmental conditions, such as UV light and low soil moisture. Ansari et al. (2009) used a kaolin-starch mixture and Del Valle et al. (2009) used unflavored gelatin; both demonstrated that the coating provided protection and promoted conservation of the insect cadavers, and baby talcum powder was used here for the same reason. Coating cadavers with baby talcum powder (Johnson's Baby Powder) increased their longevity but restricted the emergence of IJs from the cadavers, resulting in a reduced percentage of insect mortality. Soil moisture appears to be a key factor in nematode efficacy against insects (Shetlar et al. 1988, Kung et al. 1991, Gaugler et al. 1992, Downing 1994). Deol et al. (2011) applied Steinernema carpocapsae-infected G. mellonella cadavers and recorded higher mortality of Tenebrio molitor compared with aqueous suspensions. In the present study, G. mellonella was used as the host because of its ability to produce more IJs than other insect hosts, such as T. molitor (Jansson and Lecrone 1994, Bruck et al. 2005, Deol et al. 2011). Thus, our results support prior studies suggesting that IJs emerging from cadavers possess higher virulence than those applied in aqueous suspension when tested against insect pests (Shapiro-Ilan and Lewis 1999, Shapiro-Ilan et al. 2003). The EPNs applied as cadavers successfully infected larvae of O. elegans in the soil when applied close to small shoots at the tree trunk, exhibiting survival and retention of virulence, and suggesting the use of host cadavers as a tool to overcome the disadvantages of direct application of aqueous suspensions under field conditions. Insect management using EPNs has been known since the 1980s (Georgis 2002); however, they have never been applied against O. elegans-infested date palms in Saudi Arabia. Therefore, this study used EPNs as biological control agents against O. elegans in date palm farms of Saudi Arabia. Choosing the right pest and parasite species and effective modes of application are important parameters for achieving maximum mortality (Shapiro-Ilan et al. 2002). This study also suggests that the two species of EPNs, S. kushidai and S. glaseri, are appropriate choices as biological control agents against O. elegans when applied in the form of infected cadavers under field conditions. | 2018-06-28T01:06:59.982Z | 2018-05-01T00:00:00.000 | {
"year": 2018,
"sha1": "a8883cf7d0f5a668dc127839491db2ccdc10d597",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/jinsectscience/article-pdf/18/3/26/25068852/iey060.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a8883cf7d0f5a668dc127839491db2ccdc10d597",
"s2fieldsofstudy": [
"Biology",
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
40630796 | pes2o/s2orc | v3-fos-license | Transcription of Human Zinc Finger ZNF268 Gene Requires an Intragenic Promoter Element*
Human ZNF268 gene is a typical Krüppel-associated box/C2H2 zinc finger gene whose homolog has been found only in higher mammals and not in lower mammals such as mouse. Its expression profiles have suggested that it plays a role in the differentiation of blood cells during early human embryonic development and the pathogenesis of leukemia. To gain additional insight into the molecular mechanisms controlling the expression of the ZNF268 gene and to provide the necessary tools for further genetic studies of leukemia, we have mapped the 5′-end of the human ZNF268 mRNA by reverse transcription-PCR and primer extension assays. We then cloned the 5′-flanking genomic DNA containing the putative ZNF268 gene promoter and analyzed its function in several different human and mouse tissue culture cell lines. Interestingly, our studies show that the ZNF268 gene lacks a typical eukaryotic promoter that is present upstream of the transcription start site and directs a basal level of transcription. Instead, the functional promoter requires an essential element that is located within the first exon of the gene. Deletion and mutational analysis reveals the requirement for a cAMP response-element-binding protein (CREB)-binding site within this element for promoter function. Gel mobility shift and chromatin immunoprecipitation assays confirm that CREB-2 binds to the site in vitro and in vivo. Furthermore, overexpression of CREB-2 enhances the promoter activity. These results demonstrate that the human ZNF268 gene promoter is atypical and requires an intragenic element located within the first exon that mediates the effect of CREB for its activity.
The first zinc finger protein containing the Krüppel-associated box (KRAB-containing proteins) was discovered in 1991 by Bellefroid et al. (1) and was found to be down-regulated during in vitro terminal differentiation of human myeloid cells. KRAB-containing zinc finger genes represent a subfamily within a large family of zinc finger genes, and they typically act as transcriptional repressors (2). They make up approximately one-third (290) of the 799 different zinc finger proteins present in the human genome, and as a result, this group of proteins is the largest single subfamily of transcriptional regulators in mammals. Many genes encoding KRAB-containing proteins are arranged in clusters, but others occur individually throughout the genome (3). This family of genes has been shown to be involved in diverse developmental and pathological processes (4-9).
By studying the molecular basis of human embryonic development, we have previously constructed a cDNA library from RNA isolated from 3-to 5-week-old human fetuses. Screening of this library led us to isolate the human zinc finger 268 protein gene, which is a typical KRAB-containing zinc finger protein gene (10,11). More importantly, expression analysis has implicated a role for ZNF268 in embryogenesis (10,11).
By using a recombinant expression cloning (SEREX) approach to identify tumor-associated antigens in chronic lymphocytic leukemia, Krackhardt et al. (12) identified 14 antigens, KW-1 to KW-14. Among them, KW-4 was found to be one of the several known alternatively spliced transcripts of the ZNF268 gene (12). This and its developmental expression profiles suggest that ZNF268 plays a role in the differentiation of blood cells and the pathogenesis of leukemia (11)(12)(13)(14). Interestingly, extensive screening of cDNA libraries and various approaches to clone genomic DNA fragments of the ZNF268 homolog in mouse failed to isolate a mouse homolog of the ZNF268 gene (data not shown). Database searches after the completion of the mouse and human genome sequences confirmed that ZNF268 is present in the human genome but not the mouse genome (data not shown). Thus, ZNF268 may play a unique role in higher mammals, making it particularly interesting to study the regulation and function of this gene in human development and pathogenesis.
By studying how the human ZNF268 gene is regulated during development and pathogenesis, we have previously isolated the genomic DNA up to 2400 bp upstream of the 5′-end of the longest published ZNF268 cDNA sequence and demonstrated that it contained a functional promoter (15). Unfortunately, as the transcription start site was not mapped and detailed mutational analysis was not carried out, it was unclear whether the region represents the promoter or a regulatory region of the gene. Here we describe the identification and characterization of the human ZNF268 gene promoter. Interestingly, unlike the vast majority of mammalian RNA polymerase II promoters, the human ZNF268 gene promoter requires a promoter element located intragenically in the first exon that contains a critical cAMP response-element-binding protein (CREB) transcription factor-binding site.

The abbreviations used are: KRAB, Krüppel-associated box; ZNF268, zinc finger 268; CREB, cAMP response-element-binding protein; EMSA, electrophoresis mobility shift assay; ChIP, chromatin immunoprecipitation; RT, reverse transcription; EGFP, enhanced green fluorescent protein; UTR, untranslated region; CMV, cytomegalovirus; TK, thymidine kinase.
EXPERIMENTAL PROCEDURES
Materials-A 5-week-old human embryo was obtained from therapeutic termination of pregnancy, with appropriate advice and consent at the Zhongnan Hospital of Wuhan University in China, and was used for RNA isolation as described previously (10,11).
RNA Extraction-Total RNA was extracted by using the Trizol reagent (Invitrogen) according to the manufacturer's instructions. Genomic DNA contamination was removed by digestion of RNA samples with RNase-free DNase I (Takara) for 30 min at 37°C. RNA concentrations were determined spectrophotometrically, and samples were stored at −80°C.
RT-PCR and PCR Analysis-Total RNA (1 μg) from a human embryo and various cell lines was reverse-transcribed using an oligo(dT) adaptor as primer and the Takara RNA PCR kit (Takara). Various combinations of ZNF268 gene-specific primers (sense: PECS5, PECS4, PECS3, PECS2, and PECS11; antisense: PE41 and PD2; see Table 1) were used for PCR amplification (35 cycles) with Taq polymerase. The amplified products were separated on a 1.5% agarose gel and photographed.
Cloning of the ZNF268 Gene 5′-Flanking Region-Human genomic DNA was extracted from normal human fresh blood according to a published methodology (16,17) and used as the template for amplification of the ZNF268 5′-flanking region and the intron between exons 1 and 2. Primers for the 3290-bp 5′-flanking region were Z5S1 (sense) and Z5A1 (antisense) (Table 1), which were designed based on the sequence of Homo sapiens chromosome 12 clone CTD-2140B24 (GenBank™ accession number AC026786). The PCR fragment was inserted into the pGEM-T Easy vector (Promega) to generate the pGEM-T-3290 plasmid and then verified by DNA sequencing.
To map the promoter region, fragments of the 5′-flanking region of the ZNF268 gene that differed in length were inserted into the firefly luciferase reporter vector, pGL3-Basic (Promega). The strategy for cloning the fragments of the ZNF268 gene promoter into the pGL3-Basic vector was as follows, with the numbers indicating the nucleotide positions relative to the transcription start site.
1) The PCR fragment of 3024 bp (amplified with primer pair PU/PD2; see Table 1) and the same fragment in the opposite orientation (with primer pair PUΔ/PD2Δ) were generated by direct PCR amplification using the pGEM-T-3290 plasmid as the template. The fragments were then inserted into pGL3-Basic to generate the pGL3(−1790/+1234) plasmid, in which the expected promoter would drive the expression of the luciferase reporter, and the pGL3(+1234/−1790) plasmid, in which the promoter fragment is in the opposite orientation and thus would drive expression away from the luciferase reporter, respectively.
Primer Extension Assays-Primer extension assays were done as described (28,29). Briefly, an antisense primer, PE11 (Table 1), corresponding to positions +31 to +11 (relative to the ZNF268 transcription start site), was labeled using T4 polynucleotide kinase (Promega) and 5000 Ci/mmol [γ-32P]ATP, and the unincorporated [γ-32P]ATP was removed by passage through a G-25 column (Amersham Biosciences). The labeled primer was hybridized to 30 μg of DNase I-treated (Takara) total RNA prepared from a 5-week-old human embryo, HeLa cells, HEK293 cells, NIH3T3 cells, K562 cells, or Jurkat cells in 1.25 M KCl, 50 mM Tris-HCl (pH 7.5), and 5 mM EDTA at 65°C for 1.5 h. The RNA and oligonucleotide were precipitated, resuspended in 1× reverse transcriptase buffer (25 mM KCl, 50 mM Tris-HCl (pH 7.5), 10 mM dithiothreitol, 3.5 mM MgCl2, 0.5 mM each dGTP/dATP/dTTP/dCTP, 100 μg/ml bovine serum albumin), and incubated with 100 units of SuperScript™ RNase H− reverse transcriptase (Invitrogen) at 42°C for 1 h. The same oligonucleotide that was used for the primer extension experiment, and the pGEM-T-3290 plasmid, were used to obtain a sequencing ladder by using the SequiTherm EXCEL™ II DNA sequencing kit (Epicenter) according to the manufacturer's instructions. The sequencing and primer extension reactions were electrophoresed on an 8% acrylamide, 7 M urea gel in 1× Tris-buffered EDTA. After autoradiography for 24-72 h, the size of the primer extension product and the transcription start site were determined by direct comparison to the sequencing ladder.
Transient Transfection and Functional Analysis of the ZNF268 Gene Promoter with the Green Fluorescent Protein Reporter-HEK293, HeLa, and NIH3T3 cells were seeded in 24-well plates at 75% confluence and transfected with 0.6 μg of pEGFP(−1790/+1255), pEGFP(−1790/+45), or pEGFP(−37/+1255) plasmid using Sofast™ transfection reagent (Sunma) according to the manufacturer's instructions. K562 and Jurkat cells were seeded in 24-well plates and transfected at about 1 × 10^5 cells/well using DMRIE-C transfection reagent (Invitrogen) following the manufacturer's instructions. Phytohemagglutinin (final concentration 1 μg/ml; Sigma) and phorbol 12-myristate 13-acetate (final concentration 50 ng/ml; Sigma) were added to each well 4 h after transfection for Jurkat cells, whereas only phorbol 12-myristate 13-acetate was added for K562 cells, to enhance promoter activity. The plasmid pEGFP-C1, which contains the CMV promoter upstream of the green fluorescent protein gene, was used as a positive control, and pEGFP-1, which contains no eukaryotic promoter or enhancer element, was used as a negative control.
Transient Transfection and Dual Luciferase Assay-Transient transfection was done with 0.6 μg of the indicated firefly luciferase reporter construct and the internal control Renilla luciferase reporter construct, pRL-TK (Promega) (firefly luciferase reporter construct and pRL-TK at a ratio of 20:1), which contains the Renilla luciferase gene driven by the herpes simplex virus thymidine kinase (TK) promoter. When indicated, the expression vector for the transcription factor CREB-1 or CREB-2, pcDNA-CREB-1 or pcDNA-CREB-2, respectively, was co-transfected into HeLa cells. In addition, the parental vector, pGL3-Basic, was used as a promoterless negative control, and pGL3-CMV, containing the firefly luciferase gene driven by the CMV promoter, was used as a positive control.
The cells were cultured as above, and luciferase activity was analyzed 48 h post-transfection with the Turner BioSystems TD-20/20 luminometer (Turner Designs, Sunnyvale, CA) using the dual luciferase reporter assay system (Promega). Triplicate samples were measured for each construct, and the average values of the ratio of firefly luciferase light units to Renilla luciferase light units were used for data analysis. The results show the mean values of three independent experiments with standard errors.
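For concreteness, the firefly/Renilla normalization described above amounts to the following calculation; the light-unit values in this Python sketch are placeholders, not measurements from this study.

```python
# Minimal sketch of dual-luciferase normalization: firefly counts are divided
# by Renilla (pRL-TK) counts per well, and the triplicate ratios are averaged.
import numpy as np

firefly = np.array([152300.0, 148900.0, 160100.0])  # firefly light units (placeholders)
renilla = np.array([8120.0, 7980.0, 8450.0])        # Renilla light units (placeholders)

ratios = firefly / renilla                          # per-well normalized activity
mean = ratios.mean()
se = ratios.std(ddof=1) / np.sqrt(ratios.size)
print(f"relative luciferase activity: {mean:.2f} +/- {se:.2f}")
```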
Chromatin Immunoprecipitation (ChIP) Assay-HeLa cells were cross-linked with 1% formaldehyde at 37°C for 15 min. Chromatin extracts were prepared as described previously (30,31). Sonication was performed 20 times for 10 s each using a Sonic Dismembrator model 100 (Fisher), resulting in DNA fragments between 150 and 600 bp in size. Immunoclearing was performed for 6 h at 4°C using 2 μg of sheared salmon sperm DNA (Invitrogen) and 40 μl of protein A-Sepharose (50% slurry in dilution buffer) (Sigma). Supernatants were collected and subjected to immunoprecipitation with 5 μl of polyclonal anti-p-CREB-1 or anti-CREB-2 antibodies (Santa Cruz Biotechnology) overnight at 4°C. In parallel, supernatants were incubated without antibody as controls. Then 40 μl of protein A-Sepharose were added, and the incubation was continued for 4 h. Precipitates were washed, extracted with 1% SDS, 0.1 M NaHCO3, and heated at 65°C overnight to reverse the cross-links. DNA fragments were precipitated with 3 volumes of 100% ethanol and 0.1 volume of 3 M ammonium acetate and resuspended in 25 μl. Finally, 1.5 μl were amplified by PCR for 35 cycles with the primers PECS11/PECA for the promoter region and PU/PDT1A for an upstream region as a negative control (Table 1). Similar ChIP assays were performed with anti-RNA polymerase II and anti-TFIID (TBP) antibodies (Santa Cruz Biotechnology), except that the precipitated DNA was amplified with the primer set PES1/PE12 for the region containing the transcription start site or PECS1/PD2T1 for the region containing the intragenic element.
Computer Analysis-To identify putative transcription factor binding sites, the 3-kb 5′-flanking region of the human ZNF268 gene was analyzed with the MatInspector program (32) and TESS.
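As a rough stand-in for such scans (MatInspector and TESS use position-weight matrices, which this toy exact-match search does not), the following Python sketch locates the CREB-like element TGACGCA reported for this promoter on both strands of a sequence; the promoter string shown is a placeholder.

```python
# Toy motif scan: find TGACGCA (and its reverse complement) in a sequence.
# Illustrative only; real binding-site tools score degenerate positions.
def scan(seq, motif="TGACGCA"):
    comp = str.maketrans("ACGT", "TGCA")
    rc = motif.translate(comp)[::-1]          # reverse complement of the motif
    hits = []
    for i in range(len(seq) - len(motif) + 1):
        window = seq[i:i + len(motif)]
        if window == motif:
            hits.append((i, "+"))
        elif window == rc:
            hits.append((i, "-"))
    return hits

promoter = "GGTTGACGCATTCC"                   # placeholder sequence
print(scan(promoter))                          # -> [(3, '+')]
```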
RESULTS

Cloning the ZNF268 5′ Putative Promoter Region and Mapping the Transcription Start Site-To initiate studies on transcriptional regulation of the ZNF268 gene, a 3290-bp fragment (−1858 to +1432, +1 corresponding to the transcription start site, see below) consisting of the 5′-flanking region of the human ZNF268 gene, exons 1 and 2, intron 1, and part of intron 2, was PCR-amplified from human genomic DNA, which was isolated from normal human blood, and cloned into the pGEM-T Easy vector. The insert was sequenced to confirm its identity to the published genomic DNA sequence (GenBank™ accession number AC026786). Analysis of the promoter sequence by using MatInspector software (32) and TESS revealed a number of binding sites for transcription factors, including p53, c-Ets, CREB, AP1, and CEBP (Fig. 1). The size of the ZNF268 mRNA from HEK293, HeLa, K562, and Jurkat cell lines was determined from Northern hybridizations to be ~4.5-4.8 kb (data not shown), whereas the cDNA that encoded the longest coding region of ZNF268 was only 2841 bp (11). Consequently, a large 5′-UTR and/or 3′-UTR sequence was expected. Previous cloning and sequencing studies showed the 3′-UTR to be ~650 bp, whereas 5′-rapid amplification of cDNA ends failed to extend the 5′-UTR beyond 331 bp upstream of the translation start site (11). Examination of the genomic sequence revealed a region of high GC content located ~100 bp upstream of the translation start site, which might have caused premature termination of the 5′-rapid amplification of cDNA ends cloning. Therefore, to determine the length of the 5′-UTR and locate the region of the transcription start site, RT-PCR was performed on total RNA isolated from HEK293, HeLa, K562, and Jurkat cells by using various combinations of primers corresponding to regions upstream from the previously determined 5′-UTR (Fig. 2). Amplified cDNA fragments were obtained using primer combinations PECS11/PE41, PECS2/PE41, and PECS3/PE41, but not with the PECS4/PE41 and PECS5/PE41 primer combinations (Fig. 2A), suggesting that the 5′-UTR indeed extended much further upstream, with the transcription start site being present between sense primer sites PECS3 and PECS4. Because the primer PE41 is located in the first exon, to rule out the unlikely scenario that some of the PCR signals might be due to genomic DNA contamination despite the DNase I treatment of the RNA, RT-PCR amplifications were also carried out by using primer combinations PECS11/PD2, PECS2/PD2, PECS3/PD2, PECS4/PD2, and PECS5/PD2, where the primer PD2 is located in the second exon. Again, no products were detected when primers PECS4 and PECS5 were used, although the other primers produced the expected PCR fragments (Fig. 2B), indicating that the start site is indeed located between PECS3 and PECS4.

FIGURE 2. RT-PCR mapping of the ZNF268 5′-UTR. Antisense primers (PE41 or PD2; see Table 1) were used for cDNA PCR amplification with a series of sense primers (PECS11, PECS2, PECS3, PECS4, and PECS5; see Table 1) as indicated. The PCR products were separated on agarose gels and stained with ethidium bromide. The Control is positive control PCR products obtained by using ZNF268 genomic clone DNA pGEM-T-3290 as the PCR template. The cDNA was synthesized from total RNA isolated from the HEK293, HeLa, K562, and Jurkat cell lines, respectively, as indicated. Note that two products were detected with the primer PD2 (indicated with the up and down arrows, B), corresponding to two alternatively spliced mRNA forms (see text).

FIGURE 3. Mapping the transcription start site by primer extension assay. Total RNA isolated from a 5-week human embryo and from the HeLa, K562, HEK293, NIH3T3, and Jurkat cell lines was used for primer extension assays with a [γ-32P]-labeled 21-bp antisense oligonucleotide complementary to the region +31 to +11 relative to the transcription start site. The resulting cDNA products were analyzed on polyacrylamide gels together with DNA sequencing reactions on genomic 5′-UTR DNA performed with the labeled primer (lanes labeled G, A, T, and C). Note that a single band (arrow) was detected for RNA isolated from all human cell lines or embryos, mapping the start site to the first T within AT(+1)TGATC (or GATCAA(+1)T in the sense strand), whereas no signal was detected for RNA from the mouse cell line NIH3T3 or with probe alone, confirming the specificity of the primer extension assay.
The RT-PCR with primer PD2 produced two sets of products (Fig. 2B, arrows). Cloning and sequencing of the PCR products revealed that the lower band matched the previously published cDNA clone (10,11). The upper band, however, contained a 259-bp insert relative to that cDNA clone and was of the same size as the product in the control panel obtained with genomic DNA as the PCR template. These results suggest that the inserted sequence is a novel exon region not identified previously and that it can be alternatively spliced to produce the two transcripts detected by RT-PCR in different cell lines and human tissues (Fig. 2B and data not shown).
To map the exact position of the transcription start site, primer extension assays were performed using antisense primer PE11 on RNA from six samples as follows: a 5-week human embryo; the human cell lines HeLa, K562, HEK293, and Jurkat; and the mouse cell line NIH3T3 (as a negative control, because ZNF268 is absent in the mouse genome). As shown in Fig. 3, the results revealed a single extension product of 31 nucleotides in all human samples but not in the mouse cell line or when no RNA was used, thus mapping the transcription start site of the ZNF268 gene to the A (+1) residue at position −996 upstream of the ATG translation start site (Fig. 3; also see Fig. 2). The region immediately upstream of the transcription start site lacks canonical TATA and CCAAT boxes, although the sequence around the transcription start site, CAA(+1)TAAT, closely matches the transcription initiation recognition sequence, YYA(+1)NWYY (33).
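The comparison with the YYA(+1)NWYY consensus can be made explicit in code; the per-position IUPAC check below is illustrative only (it is not an analysis performed in the paper) and shows that 5 of 7 positions of CAA(+1)TAAT fit the consensus.

```python
# Per-position check of a sequence against an IUPAC consensus string.
IUPAC = {"Y": set("CT"), "W": set("AT"), "N": set("ACGT"),
         "A": {"A"}, "C": {"C"}, "G": {"G"}, "T": {"T"}}

def consensus_matches(seq, consensus):
    return [base in IUPAC[code] for base, code in zip(seq, consensus)]

obs = "CAATAAT"    # CAA(+1)TAAT read 5'->3'; position 3 is the +1 start site
cons = "YYANWYY"
matches = consensus_matches(obs, cons)
print(matches)     # [True, False, True, True, True, False, True]
print(f"{sum(matches)}/7 positions fit the consensus")
```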
Functional Analysis of the ZNF268 Gene Promoter; Identification of an Intragenic Promoter Element-To analyze the activity of the putative ZNF268 gene promoter, three plasmids, pEGFP(−1790/+1255), pEGFP(−1790/+45), and pEGFP(−37/+1255), were constructed by placing different lengths of the putative promoter region and the 5′-end of the gene in front of the EGFP reporter coding sequence (Fig. 4A) and were transfected into HEK293 cells. The plasmid pEGFP-C1, which contains the CMV promoter upstream of the EGFP gene, was used as a positive control, and pEGFP-1, which contains no eukaryotic promoter or enhancer element, was used as a negative control. After transfection, EGFP fluorescence was observed under a confocal laser scanning microscope. The fluorescence of cells transfected with the positive control plasmid pEGFP-C1 could be observed between 4 and 6 h, and the number of cells expressing EGFP continuously increased until 48 h. The fluorescence of cells transfected with pEGFP(−1790/+1255) could be observed between 20 and 24 h, and the number of cells expressing EGFP continuously increased until 48 h, suggesting that this construct contained an active promoter. As expected, fluorescence could not be detected in cells transfected with the negative control plasmid pEGFP-1. Surprisingly, no fluorescence signal could be observed in cells transfected with the pEGFP(−1790/+45) plasmid even after 72 h (Fig. 4), whereas the fluorescence signal in cells transfected with pEGFP(−37/+1255) was similar to that in cells transfected with pEGFP(−1790/+1255). Similar results were obtained in HeLa, NIH3T3, K562, and Jurkat cell lines (data not shown). These results indicate that essential promoter sequences lie downstream of +45 and that a functional promoter may be present between −37 and +1255. To quantitatively define the promoter activity, the fragment containing the putative promoter from −1790 to +1234 was cloned in both orientations into a luciferase reporter vector to generate pGL3(−1790/+1234) (transcribing to produce luciferase mRNA) and pGL3(+1234/−1790) (transcribing away from the luciferase gene) (Fig. 5A). The plasmids were transfected into HeLa cells together with the internal Renilla luciferase reporter construct. Quantitative analysis of the luciferase activity revealed that only the correct orientation yielded strong luciferase expression (Fig. 5A), indicating that the fragment functions in the orientation-dependent manner expected for an RNA polymerase II promoter.

To further define the promoter region, a set of reporter plasmids containing promoter fragments of variable lengths driving the expression of the firefly luciferase reporter gene, as shown in Fig. 5B, was constructed and transfected into the HEK293, HeLa, K562, and Jurkat cell lines together with the internal Renilla luciferase reporter construct. Quantitative analysis of the luciferase activity showed that the region from −1790 to +1234, pGL3(−1790/+1234) (construct 1), had strong promoter activity (set to 100%), whereas the promoter activity was reduced to the background level (construct 16) for pGL3(−1790/+45) (construct 2) in all cell lines (Fig. 5, C-F). In addition, the construct pGL3(−37/+1234) (construct 11) had an activity similar to (slightly higher than) that of pGL3(−1790/+1234), in complete agreement with the EGFP reporter assay above.

FIGURE 5. Deletion analysis reveals that an essential promoter is located between −37 and +760. A, the fragment from −1790 to +1234 controls reporter gene expression in an orientation-dependent manner. The plasmids pGL3(−1790/+1234) and pGL3(+1234/−1790) as well as the negative control vector pGL3-Basic were transfected into HeLa cells, and the luciferase activity was determined 48 h after transfection. The relative luciferase activity was calculated by normalizing against the co-transfected internal control pRL-TK. Note that pGL3(−1790/+1234) yielded strong reporter activity, whereas the construct in the opposite orientation had close to background activity, indicating that pGL3(−1790/+1234) contained an orientation-dependent promoter. B-F, deletion analysis of the promoter. B, schematic diagram of the deletion constructs driving the luciferase reporter (shaded box). HEK293 (C), HeLa (D), K562 (E), and Jurkat (F) cells were transiently transfected with the indicated promoter constructs, and the luciferase activity was determined 48 h after transfection. The relative luciferase activity was calculated by normalizing against the co-transfected internal control pRL-TK. Parallel transfections with pGL3-CMV and pGL3-Basic were used as positive and negative controls, respectively. The promoter activity of the full-length promoter pGL3(−1790/+1234) was set at 100%. Three independent experiments were conducted, and the data are shown as mean values with standard errors. The transcription start site (+1, TSS) and intron 1 are indicated with a bent arrow and shaded box, respectively, in the diagrams.
A series of 5′-deletion constructs extending to −37 (constructs 3-10) of pGL3(−1790/+45) (Fig. 5B) were made and assayed similarly. All constructs had only background activity in all cell lines (Fig. 5, C-F), suggesting that the lack of activity of constructs containing sequence upstream of +45 was not because of any inhibitory effects of upstream sequence but more likely due to the lack of essential promoter elements. Deletion of the 3′ end of construct 11, pGL3(−37/+1234), to +938 or +760 (constructs 12 or 13) had little effect on the promoter activity (Fig. 5, C-F), whereas deletion to +205 or +540 (constructs 14 or 15) abolished the promoter activity (Fig. 5, C-F). These results suggest that an essential promoter element is located between −37 and +760.
Mutational Analysis of Potential Transcription Factor-binding Sites-As mentioned above, sequence analysis revealed the existence of a number of putative transcription factor-binding sites (Fig. 1). Interestingly, all except one CREB-binding site are located within the 172-bp minimal promoter region between +589 and +760. Among them are binding sites for p53, Ets, CREB, AP1, and C/EBP. To assess the importance of these transcription factor-binding sites, mutations were introduced into the binding sites for p53, Ets, CREB, AP1, and C/EBP in the ZNF268 promoter region (Table 1) in the plasmid pGL3(−37/+938) (20-27), which has the activity of the full-length promoter (construct 12, Fig. 5C), to generate the plasmids pGL3(−37/+938)-p53-mut, pGL3(−37/+938)-Ets-mut, pGL3(−37/+938)-CREB-mut, pGL3(−37/+938)-AP1-mut, and pGL3(−37/+938)-C/EBP-mut. The luciferase constructs were transfected into different cell lines, and the luciferase activity was assayed as above. The results showed that mutation of the p53, Ets, AP1, or C/EBP-binding site had little or only a small effect on the promoter activity compared with the wild type construct in all cell lines. In contrast, mutation of the cyclic AMP-response element-like consensus sequence dramatically reduced promoter activity in HEK293, HeLa, and K562 cells, with only a small effect in Jurkat cells (Fig. 7B). These findings suggest that the cyclic AMP-response element (TGACGCA) within the region +733 to +739 plays a critical role in ZNF268 promoter activity.

TABLE 1. Oligonucleotides used in this study. Boldface/lowercase letters indicate mutated residues; regions of oligonucleotides not derived from the gene are underlined; oligonucleotide positions are given relative to the ZNF268 transcription start site (+1).
CREB-2 Binds to the CREB-binding Site in the Promoter Both in Vitro and in Vivo-To investigate whether the putative CREB element in the ZNF268 promoter can interact with members of the CREB/ATF transcription factor family in vitro, [γ-32P]-labeled double-stranded oligonucleotide probes containing the corresponding wild type CREB site or its mutated version in the ZNF268 promoter (Table 1) (21,26) or control CREB sequences (34) (Fig. 8A) were prepared and used in EMSA with HeLa nuclear extracts. The ZNF268 wild type CREB probe yielded two complexes (Fig. 8B, lane 7) that migrated identically to the complexes formed with the control CREB probe (Fig. 8B, lane 2), whereas no complex was formed with the mutated CREB-binding site (Fig. 8B, lane 12). These complexes were effectively competed away by an excess of unlabeled ZNF268 wild type CREB probe (Fig. 8B, lanes 4 and 9) or the control CREB probe (lanes 3 and 8) but not the mutated probe (lanes 5 and 10). The results suggest that CREB binds to the ZNF268 gene promoter at +733 to +739.
To investigate whether CREB binds to the ZNF268 gene in vivo, we performed ChIP assays with anti-p-CREB-1 and anti-CREB-2 antibodies in HeLa cells to determine whether endogenous CREB binds to the ZNF268 CREB-like site (TGACGCA) in the chromosome. The immunoprecipitated DNA was amplified by PCR with primers PECS11/PECA (+594 to +925) for the promoter region containing the CREB-binding site or PU/PDT1A for a region (−1790 to −1381) upstream of the start site lacking a CREB-binding site (Table 1). A 332-bp fragment was detected when the PECS11/PECA primers were used with the anti-CREB-2 but not the anti-CREB-1 antibody (Fig. 9A). In addition, no signal was detected when PCR amplification of the precipitated DNA was performed for the negative control region with primers PU/PDT1A (Fig. 9B). Finally, when no antibody was included in the ChIP assay, no signal was detected with either primer set (Fig. 9B). These results indicate that the transcription factor CREB-2 binds the CREB site in the promoter in HeLa cells in vivo.
RNA Polymerase II and TFIID Are Bound to the Region Containing the Transcription Start Site in Vivo-Although the essential promoter element is located downstream of the start site, it was unclear how transcription could start over 600 bp upstream of this element. Thus, we carried out ChIP assays to determine whether TFIID and RNA polymerase II were associated with the transcription start site and the intragenic promoter element. As shown in Fig. 9, C and D, ChIP assays with anti-RNA polymerase II and anti-TBP (a subunit of TFIID) gave strong signals for the region containing the transcription start site but not for the intragenic promoter element. These results indicate that although the essential promoter element is located downstream of the start site, it was able to recruit TFIID and RNA polymerase II to the start site.
Overexpressing the Transcription Factor CREB-2 but Not CREB-1 Enhances the Promoter Activity in HeLa Cells-To further investigate the role of CREB in HeLa cells, pGL3(−37/+938) was transiently co-transfected into HeLa cells with the expression plasmid pcDNA-CREB-1 or pcDNA-CREB-2. The cells were cultured for 48 h, and luciferase activity was assayed as before. The results showed that ectopic overexpression of CREB-2, but not CREB-1, stimulated the promoter activity by about 6-fold in HeLa cells (Fig. 10), confirming a critical role of CREB-2 in the function of the human ZNF268 gene promoter in HeLa cells.
DISCUSSION
In this study, we have carried out a detailed analysis of the promoter of the human ZNF268 gene, which has been implicated in the differentiation of blood cells and the pathogenesis of leukemia (12). Our major findings are severalfold. First, RT-PCR and primer extension analyses not only mapped the start site but also revealed the existence of a novel exon region between exon 1 and exon 2 of a previously identified transcript, indicating that the 5′-UTR is encoded by one or two exons due to alternative splicing. Second, we showed that transcription of the human ZNF268 gene requires an intragenic promoter element located in the first exon of the gene. Finally, our in vitro and in vivo studies demonstrated a critical role of CREB-2 in the function of this promoter.
The human ZNF268 gene was originally isolated from a human embryonic cDNA library, and its homolog is absent in lower mammals such as mouse. This, together with its potential roles in development and pathogenesis, makes it important to understand how the gene is regulated. Although we showed earlier that a genomic fragment including 2400 bp upstream of the 5′-end of the longest published cDNA could drive the expression of a reporter gene in tissue culture cells, we were not able to map the transcription start site (15), making it impossible to dissect the promoter of the gene.
Here we have made use of the availability of the genomic sequence from the genome data bank and RT-PCR analysis to first determine the approximate location of the transcription start site. We then mapped the exact location of the start site by primer extension analysis.
Our mapping analysis not only determined the transcription start site but also revealed the existence of a previously unknown exon region that can be alternatively spliced. The major spliced mRNA included the intron and was present in all tissues and cell lines analyzed. The minor form lacking the 259-bp intron corresponded to the previously published cDNA (10,11). Although how this alternative splicing is regulated remains to be determined, it will produce two different mRNAs with different 5′-UTRs. This may affect translation and/or mRNA stability, thereby allowing another level of regulation of ZNF268 protein expression. In this regard, it is worth pointing out that another cDNA form reported earlier contained 318 bp of this intron (35), likely representing a different alternative splicing of this intron. As our RT-PCR failed to detect such a form in any of the cell lines or the embryonic RNA sample, it might have resulted from a rare alternative splicing event, or it may be highly cell type-specific. Given the existence of these and other reported alternative mRNA forms, alternative splicing seems to be an important mechanism of regulating the expression of the ZNF268 protein.

FIGURE 8B. Gel mobility shift assay using a [γ-32P]-labeled probe corresponding to the CREB-binding site of the human ZNF268 promoter, performed with HeLa nuclear extract. Wild type and mutant ZNF268 CREB probes as well as a control CREB probe were used as unlabeled competitors to show the specificity of the binding, as indicated. Note that two specific bands (arrows) were detected, possibly because of different CREB proteins and/or protein modifications.

FIGURE 9. ChIP assays demonstrate that CREB-2 binds to the promoter in vivo, whereas RNA polymerase II and TFIID are associated with the transcription start site. ChIP assays were performed with HeLa cells with the indicated antibodies or without antibody as a negative control. The precipitated DNA was amplified by PCR, and the PCR products were separated on agarose gels and stained with ethidium bromide. A and B, PCR amplification of DNA precipitated with anti-p-CREB-1 and anti-CREB-2 by using the primers PECS11/PECA, flanking the CREB-binding site (+594 to +925) (A), or primers PU/PDT1A as a negative control (−1790 to −1381) (B). C and D, PCR amplification of DNA precipitated with anti-RNA polymerase II (pol II) and anti-TFIID (TBP) by using the primers PES11/PE12, flanking the transcription start site (C), or primers PECS1/PD2T1, flanking the CREB-binding site (D). Input lanes show products after PCR amplification of chromatin DNA prior to immunoprecipitation.
Our deletion analysis of the ZNF268 promoter showed that sequences up to 1800 bp upstream of the transcription start site are dispensable for the activity of the promoter. More importantly, we found that a minimal region of 172 bp within the first exon was sufficient for the majority of the promoter activity in different human cell lines. Consistent with this, there are no obvious TATA and CCAAT boxes in the region immediately upstream of the transcription start site. In addition, sequence analysis revealed the existence of a number of transcription factor-binding sites in the intragenic minimal promoter element but not upstream of the transcription start site. Furthermore, functional studies of various mutant promoters confirmed that the same minimal promoter element can drive the expression of the reporter gene in all human cell lines analyzed, arguing against a potential artifact of cell culture studies.
Finally, when a 3-kb unrelated fragment was inserted between this intragenic element and the luciferase coding region in the reporter construct, the construct failed to drive the expression of the reporter (data not shown), suggesting that the element cannot function over long distances. Thus, unlike the vast majority of eukaryotic RNA polymerase II promoters, the ZNF268 gene is controlled by an intragenic promoter element. Although the minimal promoter element was sufficient to drive the expression of the reporter gene in different cell lines, it remains possible that another promoter element around the transcription start site functions as the true basal promoter of the gene and is required to specify the transcription start site (of interest in this regard, the sequence around the transcription start site, CAA(+1)TAAT, is similar to the transcription initiation recognition sequence, YYA(+1)NWYY (33)). Consistent with this, our ChIP assays with anti-RNA polymerase II and anti-TFIID showed that both RNA polymerase II and TFIID are strongly associated with the region containing the start site but not with the intragenic promoter element. In the absence of the region containing the start site, the intragenic promoter element may drive the expression of the reporter gene by activating a cryptic transcription start site in the plasmid vector. In this regard, the intragenic promoter element can be regarded as a strong transcription enhancer in a general sense, but as an essential promoter element for the ZNF268 gene, because in its absence the promoter failed to show any detectable activity.
Interestingly, a number of other genes that have been implicated in disease development and progression also utilize intragenic promoter elements to control their transcription. Among them are the tumor suppressor gene WT1, which is also a zinc finger gene (36); the helix-loop-helix tal-1 gene (37), which is implicated in leukemia development; the N-ras proto-oncogene (38); and the HBV X gene (39). It is unclear why intragenic promoter elements are used to control the expression of these genes. It is possible that such an arrangement allows distinct mechanisms to regulate such important regulatory genes in development and pathogenesis.
The intragenic promoter element contains a number of binding sites for different transcription factors, including p53, c-Ets, CREB, AP1, and C/EBP. Interestingly, mutational studies of the binding sites showed that the CREB site at +733 to +739 plays a critical role in promoter function in the different cell lines except the Jurkat T cell line. The effects of mutating the other transcription factor-binding sites were relatively modest, suggesting that these other transcription factors may play relatively minor roles in the function of the promoter or that their functions may be redundant with each other. Similarly, the presence of different levels of various transcription factors in Jurkat T cells may be responsible for the reduced effect of the CREB site mutation on promoter activity in this cell line. CREB-1 and -2 are known to be present in Jurkat T cells (40-45). It is possible that their levels are relatively low compared with other transcription factors regulating the ZNF268 promoter; thus, mutating the CREB-binding site has a relatively minor effect on promoter activity. In support of this, overexpression of CREB-1 and -2 enhances the activity of the ZNF268 promoter in Jurkat T cells, and CREB-2 has a greater effect than CREB-1, as in HeLa cells (data not shown).
Our in vitro and in vivo binding studies showed that CREB-2 but not CREB-1 binds to the CREB-binding site within the minimal promoter region in vivo, at least in HeLa cells. Consistent with this, overexpression of CREB-2 but not CREB-1 remarkably increased the promoter activity in HeLa cells. Clearly, further studies are required to determine why CREB-1 failed to affect the promoter activity. Both CREB-1 and CREB-2 are known to be present in HeLa cells (data not shown; see Refs. 46-48). It is quite likely that the promoter context affects how strongly CREB-1 and CREB-2 interact with the promoter in vivo, as protein-protein interactions at the promoter may help to stabilize or destabilize the binding of these transcription factors to the site.
CREBs are known to participate in a diverse array of cellular processes, including cell survival, proliferation, and glucose metabolism (49). They are key mediators of critical target genes that control myeloid cell proliferation and differentiation (50), and they can also promote abnormal proliferation and survival of myeloid cells in vitro and in vivo through up-regulation of specific target genes (51,52). Therefore, CREBs can act as proto-oncogenes to regulate hematopoiesis and contribute to the leukemia phenotype, and CREB-dependent pathways may serve as targets for directed therapies in leukemia in the future (53). In addition, CREBs have been implicated as playing a critical role in the pathogenesis of human T lymphotropic virus-related T-cell leukemias (54). The requirement of CREBs for the expression of ZNF268 demonstrated here, together with the earlier implication of a role for ZNF268 in leukemia, suggests that ZNF268 may function in CREB pathways during disease development and progression. Clearly, further mechanistic and functional studies are needed to investigate this fascinating possibility. | 2018-04-03T04:52:48.412Z | 2006-08-25T00:00:00.000 | {
"year": 2006,
"sha1": "06801ea5fb03ac6595976da763026b77ff5e629b",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/281/34/24623.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "c1034fd9d1fcebbfcecd68afca28a067f01db0c9",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
254334100 | pes2o/s2orc | v3-fos-license | In Silico Study of Compounds in Bawang Dayak (Eleutherine palmifolia (L.) Merr.) Bulbs on Alpha Estrogen Receptors
Breast cancer is an uncontrolled malignancy of the breast that originates from glandular cells, gland ducts, and the supporting tissues of the breast. The development of herbal-based anticancer drugs is progressing, and one candidate source is the natural constituents of Eleutherine palmifolia bulbs. The aim of this study was to determine the activity of compounds derived from Eleutherine palmifolia bulbs on the alpha-estrogen receptor (ERα) using an in silico study. The crystal structure of the receptor used was 3ERT, obtained from the Protein Data Bank (PDB). The applications used were Chem3D (initial preparation of ligands and receptors), AutoDock4 (redocking and calculation of root mean square deviation (RMSD)), and BIOVIA Discovery Studio (visualization of redocking results). Tamoxifen was used as the reference ligand. Based on the results of the in silico study, the compound with the greatest potential to be developed as an anticancer drug candidate is eleutherine, because it gave the lowest binding free energy (ΔG = −7.56 kcal/mol) and the smallest inhibition constant (Ki = 2.89 nM); it can therefore be concluded that eleutherine is a potential anticancer drug candidate.
INTRODUCTION
According to the World Health Organization (WHO), cancer is the leading cause of death in the world, with around 10 million people dying from cancer in 2020 (Sung, et al., 2021). In 2020, the most commonly diagnosed cancer was breast cancer, with 2.26 million new cases, followed by lung cancer and colon cancer (World Health Organization, 2021).
Empirical evidence indicates that Dayak onion bulbs (Eleutherine sp.) can be used to treat wounds, coughs, bloody diarrhea, jaundice, abdominal pain, intestinal inflammation, dysentery, colon cancer, and breast cancer, as well as serving as an ulcer remedy and an emetic (Muti'ah, et al., 2020). Preclinically, the Eleutherine palmifolia plant can act as an anticancer agent by inhibiting the proliferation of T47D breast cancer cells; this has been tested in vitro, in vivo, and in silico (Muti'ah, et al., 2020). In the human body, several receptors can be expressed, among them the estrogen receptor (ER), which consists of nuclear ER, extra-nuclear ER, and the G protein-coupled ER (GPER). ERα is a nuclear ER that is widely expressed in human tissues, including the breast, prostate, uterus, liver, and bones. Overall, the alpha-estrogen receptor (ERα) has an important role in the development of breast cancer, in inhibiting tumor progression, maintaining the luminal phenotype, and restoring breast cancer sensitivity to hormone therapy (Liu, et al., 2020).
According to Liao, et al. (2014), high expression of ERα is closely correlated with breast cancer cell proliferation. This close correlation makes ERα a potential target for breast cancer therapy. Therefore, it is necessary to develop natural compounds with the potential to inhibit ERα activity. The naphthoquinone derivative eleutherinol has an affinity for ERα and can be found in Eleutherine palmifolia (Narko, et al., 2017; Ha, et al., 2013). The purpose of this study was therefore to determine the activity of compounds derived from Eleutherine palmifolia bulbs against ERα using an in silico study, with the expectation of identifying natural compounds that have the potential to inhibit ERα activity.
Lipinski Predictions
Lipinski predictions were made to analyze drug-likeness using the ChemDraw application. Based on Lipinski's rule of five, a compound has drug-like properties if its molecular weight (MW) is less than 500 Daltons, its log P partition coefficient is less than 5, its number of hydrogen bond donors (HBD) is less than 5, and its number of hydrogen bond acceptors (HBA) is less than 10.
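A rule-of-five screen equivalent to the one described above can be scripted; the following sketch uses RDKit rather than ChemDraw (an assumption for illustration) and a placeholder SMILES string rather than an actual E. palmifolia constituent.

```python
# Hedged sketch of a Lipinski rule-of-five check with RDKit.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # placeholder SMILES (aspirin)
checks = {
    "MW < 500":  Descriptors.MolWt(mol) < 500,     # molecular weight (Daltons)
    "logP < 5":  Descriptors.MolLogP(mol) < 5,     # Crippen logP estimate
    "HBD < 5":   Lipinski.NumHDonors(mol) < 5,     # hydrogen bond donors
    "HBA < 10":  Lipinski.NumHAcceptors(mol) < 10, # hydrogen bond acceptors
}
print(checks, "drug-like:", all(checks.values()))
```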
ADMET Predictions
Analysis to predict the absorption, distribution, metabolism, excretion, and toxicity (ADMET) profile of each test compound was carried out by redrawing the structure of the compound in the prediction tool and clicking 'Submit' to display the ADME results. The results can then be downloaded in the desired format (Excel, CSV, PDF, or SDF) to save the predictions. To view the toxicity profile, the "Toxicity" tab in the upper right corner of the page is selected, after which the procedure is the same as for the ADME test.
Prediction of the ADMET profile yields a variety of absorption data, including human intestinal absorption (HIA) and Caco-2 cell permeability; distribution data, including plasma protein binding (PPB) and blood-brain barrier (BBB) penetration; and toxicity data, including mutagenicity and carcinogenicity.
Molecular Docking Simulation
Molecular docking begins by preparing the receptor to be used, which can be downloaded from the Brookhaven Protein Data Bank (PDB) at https://www.rcsb.org/.
In this study, the receptor used is ERα, PDB code 3ERT, from human (Homo sapiens). This macromolecule was solved by X-ray diffraction at 1.90 Å resolution with 4-hydroxytamoxifen bound (Shiau, et al., 1998). The next step is separating the macromolecule and ligand files using Biovia Discovery Studio, to ensure that the structure to be used has a defined active site. Water molecules and non-protein residues present in the protein structure must be deleted so that they do not interfere with the docking simulation and so that only the ligand and receptor interact; this removal of water molecules is also known as desolvation. The receptor and ligand are saved separately in .pdb format. Receptor preparation begins with the addition of hydrogen atoms, which adjusts the docking environment to approximately pH 7; this is set automatically to polar hydrogens only, after which Kollman charges are added. The receptor is then saved in .pdbqt format. The 4-hydroxytamoxifen ligand is prepared by adding hydrogen atoms with nonpolar hydrogens merged, so that only polar hydrogen atoms interact with protein residues; Gasteiger charges are then added and the file is saved in .pdbqt format. The ligand then proceeds to the molecular docking process using the AutoDock program (Sari, et al., 2020).
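As a scriptable counterpart to the Discovery Studio step described above, the sketch below removes crystallographic waters from the 3ERT structure with Biopython; the file names are placeholders, and it covers only the water-removal (desolvation) part of the preparation, not hydrogen addition or charge assignment.

```python
# Strip crystallographic waters from a PDB file (sketch of the desolvation
# step; file names are placeholders).
from Bio.PDB import PDBParser, PDBIO, Select

class NoWater(Select):
    def accept_residue(self, residue):
        # Crystallographic waters are stored as HOH residues in PDB files.
        return residue.get_resname() != "HOH"

structure = PDBParser(QUIET=True).get_structure("3ERT", "3ERT.pdb")
io = PDBIO()
io.set_structure(structure)
io.save("3ERT_nowater.pdb", select=NoWater())
```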
The method was validated by redocking the native ligand into the target receptor using two parameter sets, the grid parameters (Grid Parameter File/grid box) and the docking parameters (Docking Parameter File), with the help of the AutoDock Tools software, after opening the receptor and ligand files in .pdbqt format. The grid parameters comprise the grid-box coordinates and volume. The grid box defines the receptor region to be docked, based on the x, y, and z coordinates of the comparison compound, in order to find the lowest-energy conformation of the ligand. The coordinates used for docking to the 3ERT receptor were centered at x = 30.01, y = -19.113, and z = 24.207, with a grid volume of 40 × 40 × 40 points and a spacing of 0.375 Å. Docking was carried out using the Lamarckian Genetic Algorithm with 100 conformational search runs. This validation is performed to determine whether the program setup meets the requirements for use; the analysis is done by evaluating the RMSD of the redocked pose. If the RMSD is ≤ 2 Å, the parameters used are considered valid (Suherman, et al., 2020).
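The validation criterion above reduces to a simple calculation; the sketch below computes the RMSD between the crystallographic and redocked ligand poses, assuming the atom coordinates of the two poses are available as matched arrays (our assumption; AutoDock Tools normally reports this value itself).

```python
# RMSD between two ligand poses over matched heavy atoms (sketch).
# `crystal` and `redocked` are placeholder (N, 3) coordinate arrays
# with atoms in the same order.
import numpy as np

def rmsd(crystal: np.ndarray, redocked: np.ndarray) -> float:
    assert crystal.shape == redocked.shape
    return float(np.sqrt(np.mean(np.sum((crystal - redocked) ** 2, axis=1))))

# The docking parameters are accepted when rmsd(...) <= 2.0 Angstroms.
```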
After method validation, the docking run is performed using AutoGrid4 and AutoDock4. When the run is complete, the output file in .dlg format can be opened with the Notepad++ program. The docking run yields the binding energies and inhibition constants of all tested ligands, as well as the number of clusters; these results are then compared with one another.
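As a convenience for comparing many runs, the following is a hedged sketch that extracts the docked energies from an AutoDock4 .dlg log; the exact line text matched here ("Estimated Free Energy of Binding") is an assumption based on typical AutoDock4 output, and the file name is a placeholder.

```python
# Pull the docked energies out of an AutoDock4 .dlg log (sketch).
# The matched line text is an assumption about typical AutoDock4 output;
# adjust the pattern if your log differs.
import re

energies = []
with open("eleutherine_3ERT.dlg") as dlg:   # placeholder file name
    for line in dlg:
        m = re.search(r"Estimated Free Energy of Binding\s*=\s*([-\d.]+)", line)
        if m:
            energies.append(float(m.group(1)))

print(f"best pose: {min(energies):.2f} kcal/mol over {len(energies)} poses")
```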
RESULTS
Lipinski prediction results can be seen in Table 1, ADMET prediction results in Table 2, and the molecular docking simulation results in Table 3.
DISCUSSION
In Table 1, all compounds have a relative molecular mass of <500 Da. This is consistent with the theory that relative molecular mass is related to the drug distribution process: during distribution, the drug must penetrate biological membranes, and drugs with a relatively small molecular mass do so easily. The log P values of all compounds are <5. Log P indicates hydrophobicity or lipophilicity. Pharmacokinetically, orally absorbed drugs must cross the lipid bilayer of the intestinal epithelium, so a drug must be lipophilic enough to partition into the bilayer, but not so lipophilic that it cannot partition back out; a drug retained in the body in this way can become toxic. All of the compounds studied had <5 hydrogen bond donors and <10 hydrogen bond acceptors; these donor and acceptor counts relate to the biological activity of a drug molecule (Liao, et al., 2014). The results therefore show that the 10 compounds from Eleutherine palmifolia meet Lipinski's rule of five, suggesting absorption suitable for oral preparations.

In Table 2, the HIA (%) of all compounds falls in the low category (0-20%) (Nursamsiar, et al., 2016). HIA is the sum of bioavailability and absorption evaluated from the ratio of excretion through urine, bile, and feces; this parameter predicts drug absorption in the intestine. Caco-2 cell permeability was also examined, and all compounds were in the low category (<4 nm/sec) (Cheng, et al., 2013). Caco-2 cells are an in vitro model for drug transport across the intestinal epithelium, derived from a human colonic adenocarcinoma with multiple transport pathways. For the distribution parameter predicted from plasma protein binding (PPB), all compounds showed strong binding of >90%, except for the glucopyranoside and quinone compounds (Kumar, et al., 2018; Purwaniati, 2020). PPB determines the drug fraction available in free form for distribution to the various tissues (Kleywegt and Jones, 1997). Toxicity prediction assesses the possible risks of a test compound to humans; in addition to good biological activity, a drug candidate requires low toxicity (Lipinski, et al., 1997). Of the 10 compounds tested, two test compounds, glucopyranoside and oxyresveratrol, were predicted to be neither mutagenic nor carcinogenic, indicating that these compounds meet the requirement of not causing toxicity.
The receptor selected should have a resolution better than 2.5 Å; the target receptor structure used in this study has a resolution of 1.90 Å and therefore meets this criterion (Liu, et al., 2020). A further criterion in selecting the target receptor is its R-factor and R-free values, which are recommended to be <0.25. A lower (more negative) binding energy indicates a more stable ligand-receptor complex, while the inhibition constant indicates the strength of a compound in inhibiting the receptor: the smaller the value, the greater the inhibition (Umamaheswari, et al., 2013). Based on the results in Table 3, no test compound had a lower binding energy or inhibition constant than the native ligand of 3ERT, whose values were -11.88 kcal/mol and 1.97 nM, respectively; the comparison ligand (tamoxifen) gave -10.4 kcal/mol and 18.94 nM. However, the eleutherine compound had a lower binding energy and inhibition constant than the other test compounds, with values of -7.56 kcal/mol and 2.89 nM, respectively. These results indicate that eleutherine binds the active site of the 3ERT receptor more weakly than the native ligand; nevertheless, it can still bind the active site and has fairly good inhibitory potential. In addition, the inhibition constant of the eleutherine compound was lower than that of the comparison ligand. Thus, the compound is predicted to interact favourably with the 3ERT receptor and is the best test compound in this study, so it can be put forward as a candidate for breast anticancer drugs. In addition, the docking visualizations were compared by examining the amino acid residues and the number of hydrogen bonds formed in the interaction of each test compound with the target receptor (Muti'ah, et al., 2020). Based on the data in Table 3, the best test compound forms 2 hydrogen bonds with the 3ERT receptor, involving the amino acid residues GLU A:353 and LEU A:357.
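As a sanity check on the relationship between these two quantities, the sketch below applies the standard AutoDock conversion Ki = exp(ΔG/RT) at 298.15 K; with the native ligand's reported ΔG of -11.88 kcal/mol it reproduces a Ki of about 1.9 nM, close to the 1.97 nM quoted above.

```python
# Convert an AutoDock binding free energy to an inhibition constant
# (sketch; standard relation Ki = exp(dG / RT) at T = 298.15 K).
import math

R_KCAL = 1.9872e-3   # gas constant in kcal/(mol*K)
T = 298.15           # temperature in K

def ki_from_dg(dg_kcal_per_mol: float) -> float:
    """Inhibition constant in mol/L from a binding free energy."""
    return math.exp(dg_kcal_per_mol / (R_KCAL * T))

print(f"{ki_from_dg(-11.88):.3g} M")   # ~1.9e-09 M, i.e. ~1.9 nM
```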
In the interaction between the native ligand and the 3ERT receptor, there are 2 hydrogen bonds, with the amino acid residues GLU A:353 and THR A:347, while the reference drug ligand forms 1 hydrogen bond with the 3ERT receptor, at residue THR A:347.
Based on pre-clinical in silico studies, according to research by Amelia, et al. (2015), the eleutherinol compound in Eleutherine americana can inhibit ERα in breast cancer. This is because the eleutherinol compound has a binding energy of -6.43 kcal/mol to the 3ERT receptor and forms two hydrogen bonds, with the GLU A:353 and ARG A:394 amino acid residues, through which it can inhibit ERα. Eleutherinol is a derivative of eleutherine. Meanwhile, in this study, eleutherine was found to be the best compound with the potential to be developed as a breast cancer drug candidate. The eleutherine compound also forms a hydrogen bond with GLU A:353, which is thought to play an important role in the compound's affinity for ERα. In addition, the eleutherine compound was chosen because the in silico test on eleutherine gave ΔG = -7.56 kcal/mol and the smallest Ki = 2.89 nM.
CONCLUSION
Based on the results of the in silico test, it can be concluded that the best compound, and the one with the potential to be developed as a breast anticancer drug candidate, is eleutherine, because the in silico test on eleutherine gave a binding free energy of ΔG = -7.56 kcal/mol and the smallest Ki = 2.89 nM, together with hydrogen bonding to active-site amino acid residues of the receptor. ADMET prediction for the eleutherine compound shows poor intestinal absorption (HIA of 0.027%) and low permeability (Caco-2 value of -5.028), but these results are better than those of the other compounds found in Eleutherine palmifolia. In addition, the eleutherine compound meets the requirements of Lipinski's rule: a molecular weight of no more than 500 daltons (272.30 g/mol), a log P of less than 5 (2.58), no more than 5 hydrogen bond donors (0 for eleutherine), and no more than 10 hydrogen bond acceptors (4 for eleutherine). | 2022-12-07T19:31:40.209Z | 2022-11-30T00:00:00.000 | {
"year": 2022,
"sha1": "71844164837819d47049a65200eb3f4361832401",
"oa_license": "CCBYNC",
"oa_url": "https://ijcc.chemoprev.org/index.php/ijcc/article/download/401/258",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "cabed7179a883318d1f3def2d367acbd27aeb58a",
"s2fieldsofstudy": [
"Medicine",
"Chemistry"
],
"extfieldsofstudy": []
} |
250316173 | pes2o/s2orc | v3-fos-license | The Impact of Severe Acute Respiratory Syndrome-Coronavirus-2 Infection and Pandemic on Mental Health and Brain Function in the Elderly
This review discusses the evolving evidence base and clinical considerations for examining the direct and indirect effects of the coronavirus disease (COVID-19) pandemic on the mental health of elderly individuals. It briefly addresses the cognitive and psychiatric outcomes in older adults who have survived COVID-19 infections and the complexity of appraising them during different stages of the pandemic. Indirect effects of the COVID-19 pandemic on the mental health of the geriatric population are also explored, including those influenced by quarantine, media campaigns, discrimination, and difficulties in accessing supportive services like long-term care and medical care.
INTRODUCTION

At the precipice of its third year, the coronavirus disease (COVID-19) pandemic has destabilized the well-being of individuals across the globe, and in many ways has disproportionately affected the lives of the elderly. Severe acute respiratory syndrome-coronavirus-2 (SARS-CoV-2) was first recognized in China in late 2019 as a respiratory virus with broad systemic effects and a high potential for transmissibility and lethality, spreading quickly around the globe. The World Health Organization estimates over 6 million deaths related to COVID-19 worldwide, 1 and the Centers for Disease Control and Prevention has identified nearly 975,000 deaths in the United States, 2 over 75% in individuals over age 65. Early in the pandemic, the elderly were identified as vulnerable to severe complications and higher lethality rates. The well-being of the elderly has been a source of great concern, given the high morbidity and mortality, disruption in natural supports, sweeping social changes, and the implications of isolation precautions on these individuals. However, the elderly have long been appraised to have better emotional regulation, lower stress reactivity, and a greater sense of well-being than younger adults. 3 Understanding the overall impact of the COVID-19 pandemic on the mental health and global well-being of the elderly remains complex and requires an appraisal of the direct effects of COVID-19 infection on individuals, as well as the psychological impact of public health measures, such as lockdown protocols intended to curb the spread of the virus. At the time of the publication of this article, the preponderance of published literature examines data in the early stages of the pandemic during a time of significant fear, confusion, and uncertainty, although ongoing research is underway to better understand the later stages of the pandemic after the development of effective vaccines and loosening of social restrictions.
CASE
Mr. J is an 81-year-old man who contacted the geriatric psychiatry clinic in June of 2020 at the urging of his children. He had previously been supported in the clinic for anxiety, insomnia, and caregiver stress related to the care of his wife, who suffered from moderate dementia and had recently transitioned to a memory unit at an assisted living facility in 2019. In the wake of the pandemic, he had been unable to visit his wife for the past 3 months due to isolation protocols and an outbreak of COVID-19 in the facility. Mrs. J eventually contracted the infection and suffered from weakness, lethargy, and dehydration, requiring multiple hospitalizations and transitions to skilled nursing facilities. She had declined by this point and was minimally verbal and with a higher degree of baseline confusion, but had survived the infection. Mrs. J remained weak and unable to progress out of the nursing home, and the couple was only able to communicate during brief and infrequent phone calls, and the interactions were quite limited. Mr. J remained healthy but had been isolated at home for the past 3 months. His children limited any in-person interactions due to their wishes to avoid exposing him to the virus. Mr. J was now suffering from an increasingly depressed mood and anxiety in the wake of separation and uncertainty about his wife's trajectory. He was still able to manage his own cooking and cleaning but his children had provided assistance for grocery deliveries. He was able to connect for his telehealth visit, although troubleshooting his video conference took nearly 20 minutes, and effective communication was limited by his profound hearing impairment.
How can you understand the direct and indirect effects of the COVID-19 pandemic on Mr. J and Mrs. J?
What interventions for addressing Mr. J's symptoms were feasible at this time? How may this have changed as the pandemic progressed?
DIRECT NEUROPSYCHIATRIC EFFECTS OF CORONAVIRUS DISEASE INFECTION
Neurological symptoms of SARS-CoV-2 infection were recognized early in the pandemic and are a source of immediate concern and academic interest. Case reports of a broad spectrum of acute COVID-related neurologic events included ischemic stroke, encephalitis, epilepsy, neurodegenerative diseases, and inflammatory-mediated neurological disorders. 4 Mediators in such cases are hypothesized to include the direct neurotropic effects of COVID-19, as well as indirect effects of hospitalization, hypoxia, use of mechanical ventilation and sedatives, systemic inflammation, and organ dysfunction. 5 Across age groups, in the early stages of the pandemic, hospitalized adults experienced high rates of COVID-related delirium 6 ; and elderly patients who experienced COVID-related delirium were found to be at higher risk of subsequent longer-term cognitive decline. 7 The capability of SARS-CoV-2 to penetrate the blood-brain barrier, exhibit direct neurotropic effects on the central nervous system, and directly contribute to cardiovascular and cerebrovascular disease is theorized to place the elderly population at higher longer-term risk of cognitive decline, dementia, and even motor impairment. 8 These concerns are only a superficial summary of a larger and evolving evidence base. Neuropsychiatric effects of COVID-19 infection in the elderly are covered extensively elsewhere in this issue by Roy and Dix.
OUTCOMES OF CORONAVIRUS DISEASE INFECTION ON THE PSYCHIATRIC HEALTH OF ELDERLY INDIVIDUALS
Elderly individuals were psychologically impacted by the pandemic in different ways than their younger counterparts. In the general population, the elderly appeared more likely to express fear of COVID-19 than younger individuals, but globally had a lesser degree of psychological impact related to the pandemic and were considered to be a generally more resilient group. [9][10][11] However, patients who have experienced COVID-19 infections appear uniquely vulnerable to psychological symptoms compared to noninfected individuals. 12 Research involving adult COVID-19 survivors may help guide understanding of the direct psychological effects of COVID-19 infection on the elderly, but further efforts to distinguish these two populations are necessary. One study of over 40,000 adult patients in a global health collaborative clinical research database identified a number of common psychiatric manifestations of coronavirus infection, the most prevalent being anxiety and related disorders in 4.6%, mood disorders in 3.8%, sleep disorders in 3.4%, and even suicidal ideation in 0.2%. 13 Survivors of COVID-19 are found in short-term follow-up studies to have prominent symptoms of anxiety, depression, fatigue, and insomnia, 14,15 also reflected in the persistence of elevated PHQ-9 scores 2-3 months after hospital discharge. 16 Studies up to 6 months postinfection reveal symptoms of anxiety and/or depression in 23% of participants and sleep difficulties in 26%. 17 Rates of depression in COVID-19 survivors were significantly higher than those of noninfected individuals affected by quarantine and isolation precautions. 18 Studies specifically examining elderly COVID-19 survivors suggest that the elderly are vulnerable to psychological symptoms, especially after severe infections. In one study of hospitalized elderly COVID-19 survivors, 11.5% were identified to have clinically significant symptoms of anxiety and 46.2% to have clinically significant symptoms of depression. 19 In a small study of 69 elderly individuals 2 weeks posthospital discharge, multiple measures of psychiatric well-being were strikingly elevated in COVID-19 survivors when compared with age-matched healthy residents in the community, with 100% of survivors showing pathological scores in a measure of global mental health, 93.2% with symptoms of anxiety, and 86.6% with symptoms of depression. 20 A recent cohort study of 215 residents of long-term care facilities in Spain during the early stages of the pandemic identified that elderly residents, regardless of the presence or absence of initial COVID-19 infection, experienced increases in psychiatric symptoms at 3-month follow-up, including symptoms of depression (57.7%), anxiety (29.3%), post-traumatic stress disorder (PTSD) (19.1%), and sleep disturbance (93%); although this trend was true regardless of COVID-19 infection, those who had tested positive for COVID-19 at baseline experienced higher rates of anxiety and PTSD compared to their noninfected peers. 21 Estimates of the incidence of PTSD symptoms in adult COVID-19 survivors are variable between studies and may be moderated by time relative to infection, the severity of infection, hospitalization, as well as the social context of the infection.
One survey of clinically stable hospitalized adult COVID-19 survivors in the very early stages of the pandemic in Wuhan, China found that 96.2% had a significant degree of PTSD-spectrum symptoms on the day of hospital discharge, 22 although these would be better classified as acute stress symptoms. However, 4 months posthospital discharge, a study of 238 COVID-19 survivors in Italy identified mild symptoms of PTSD in 25.6%, moderate symptoms in 11.3%, and severe PTSD symptoms in 5.9%. 23 However, Horn and colleagues 24 studied patients in France with laboratory-confirmed COVID-19 infection 2 months after infection and reported that rates of clinically probable PTSD were significantly lower in patients over 60 years of age when compared with younger patients. Cai and colleagues 19 also found that individuals who were retired or over 60 years old had a lesser degree of PTSD symptoms associated with recent COVID-19 infection when compared with younger infected individuals, and that social support appeared to be a protective factor against the development of PTSD symptoms. Some hypothesize that the elderly, despite their vulnerability to the virus, may be more able to contextualize the relative impact of the virus to other traumatic and stressful events experienced earlier in life.
Longer-term sequelae of COVID-19 infections persisting beyond 12 weeks postinfection are often referred to as "long COVID," "long-haul COVID," "chronic COVID," "post-acute COVID-19," and a variety of other terms. Ongoing research is needed to define and study this phenomenon, which, according to the CDC, includes neuropsychiatric symptoms, such as fatigue, mood changes, sleep changes, and cognitive changes often described as "brain fog." 25 A systematic review by Reynaud-Charest and colleagues 26 examined a number of studies at 12 or more weeks after COVID-19 infection. Their summarized interpretations suggested that data on older age as a moderator of post-COVID depressive symptoms are mixed, that severity of acute COVID-19 infection did not clearly influence persistent depressive symptoms after COVID-19 infection, and that neurocognitive impairment did not clearly influence depression. However, the presence of post-COVID depressive symptoms did significantly impair neurocognitive function.
The phenomenon of post-COVID psychosis remains a topic of discussion in case studies and case series, with little data involving the elderly, and should be interpreted in the context of the relatively high incidence of COVID-related encephalopathy and neurologic sequelae in this age group.
Not only is there concern that COVID-19 infection increased the risk of poor psychiatric outcomes in the elderly, but there is clear evidence that poor psychological health increased the risk of poor outcomes of COVID-19 infection. A Cochrane review of 21 studies, including data from 91 million individuals, revealed that those with pre-existing mood disorders had higher odds of COVID-19-related hospitalizations (odds ratio 1.31) and death (odds ratio 1.51) when compared with those without pre-existing mood disorders. 27 This is hypothesized to relate to the higher rates at which individuals with mood disorders reside in congregate facilities, experience comorbid health conditions, or possibly the increased risk of inflammatory states in those with certain mood disorders.
THE INDIRECT EFFECTS OF CORONAVIRUS DISEASE ON GERIATRIC MENTAL HEALTH
Even in elderly individuals who have been fortunate enough to avoid SARS-CoV-2 infection, the indirect effects of the global pandemic on mental health and overall well-being deserve recognition. The interaction between society and the pandemic has just as much influence on geriatric mental health as the virus has on the body. The policies enacted by governing bodies around the world to combat the pandemic and their cascading effects are the most apparent indirect influences of the virus. Although jurisdictions worldwide enacted different sets of public health protocols to control viral spread, public health information campaigns, quarantine, and masking orders were relatively pervasive.
Changes in residential care, homecare, and family care

The COVID-19 pandemic had a profoundly negative impact on resources for elderly adults both in the community and in residential care settings. Indirectly, barriers to obtaining optimal supportive services placed stress on both older adults and their caregivers, contributing to suboptimal global well-being and a greater risk of poor mental health outcomes.
Adults over age 65 comprise 62.5% of adult day care utilizers, 81.9% of home health agency utilizers, 81.5% of nursing home residents, and 93.4% of residential care community residents. 28 Infections in congregate care settings spread rapidly in the early pandemic, and although case reporting was imperfect at the time and often difficult to interpret, residents in nursing home settings comprised a large proportion of COVID-19-related infections and deaths. For example, one report in May 2020 suggested that at this 2-month mark in the pandemic, 42% of deaths in the United States from COVID-19 had stemmed from the 0.6% of the population residing in nursing homes and assisted living facilities. 29 As of March 31, 2022, there were 1,011,780 confirmed cases of COVID-19 and 151,726 COVID-19-related deaths in nursing home residents in the United States. 30 During the early waves of the pandemic, nursing homes were challenged to care for an unprecedented number of acutely ill patients in uncertain circumstances, finding it difficult to meet the needs of residents in accordance with usual quality standards. Front-line nursing home staff noted multiple unique stressors impeding day-to-day care, including constraints on testing, extended use and reuse of personal protective equipment (PPE), appraising and implementing guidance from numerous regulatory agencies, increased workloads, staffing shortages, and the breakdown of organizational communication and teamwork. These practical challenges were coupled with the emotional burden of caring for residents facing isolation, severe illness, and death. Front-line staff reported increasing levels of burnout and were demoralized by negative media coverage of nursing homes in comparison to the heroic efforts of hospital staff. 31 Nationwide, the number of nursing home staff COVID-19 infections marginally exceeded those of nursing home residents, and as of March 31, 2022, there have been 2341 staff deaths related to COVID-19 (https://data.cms.gov). Staff infections perpetuated burnout and drastically disrupted the continuity of care of patients in all facility settings.
For residents of congregate care settings, a paucity of PPE and vaccinations translated to extreme measures to curb the spread of coronavirus. Elderly residents were placed in "lockdown" or "isolation" arrangements, were unable to see their families, and were unable to participate in shared meals or activities. The lack of access to surveillance testing perpetuated restrictions for months. 32 The development and implementation of coronavirus vaccines drastically reduced the risk of mortality in congregate care settings and allowed for a stepwise reinstitution of community meals, stimulating group activities, and family visitation. However, COVID-19 exposures in long-term care settings continued to disrupt normal operations, including episodic lockdowns and slowing of admissions, making it difficult for patients in need of residential care to access such care in a timely manner. Many families continued to experience a hesitancy to utilize congregate care environments. In these circumstances, the care of the elderly fell increasingly into the hands of family caregivers and home care agencies.
Access to adult daycare programs was uniquely disrupted during the pandemic, with a preponderance of programs closing entirely. Many elderly adults rely upon these programs for daytime structure, stimulating activities, access to basic nursing care, safe observation, and socialization. These needed services are essentially impossible to replicate in a home care environment. Closures and avoidance of adult day programs shifted responsibilities for care to in-home care and especially to informal family care internationally. 33,34 However, home care agencies were also strained during the pandemic, and often unable to match the needs of the elderly in the community. One study in Japan suggested that home care service utilization did not increase during the pandemic, despite encouragement by the Japanese government. 35 Home care service providers were financially strained and unable to grow during the pandemic, due to declines in referrals, the high cost of necessary supplies, and furloughs. 36 Family caregivers were encouraged to implement a number of strategies to increase stimulation for recently homebound elderly individuals, including utilization of digital devices for social connection and stimulation, engaging in purposeful activities around the home, and developing a simple and predictable daily routine. 37 A small study involving spousal and adult-child caregivers of patients with dementia who received telephone support for family caregiving during the pandemic identified multiple sources of anxiety for caregivers, including the sense of isolation, increased responsibility, stress related to worsening dementia-related behaviors, restrictions of social interaction, concerns about job loss, and difficulties in adapting to COVID-19 safety recommendations. 38 Unfortunately, a sample of 897 community-dwelling older persons in the United States during the first 3 months of the pandemic identified reports of elder abuse in 21.3%, compared with a 10% prepandemic baseline prevalence; a sense of community appeared protective, while financial strain was associated with an increased risk of abuse. 39 Adult day programs have slowly become more accessible when compared with the early pandemic, but are still limited and prone to unpredictable closures.
Isolation and other downstream effects of quarantine
Successful quarantining inherently and deliberately leads to isolation, the explicit goal being to seclude individuals so that person-to-person transmission is limited. Surveys around the world have reflected broad increases in the rates of loneliness, depression, and anxiety as a result of this isolation, and the risks of suicide and suicidality have risen accordingly. 40 Geriatric patients living at home often reported a decrease in physical activity and increases in fatigue and hopelessness. 41 Although the vast majority of surveys concluded a measurable increase in loneliness, depression, and anxiety, a small number of studies examined this from different perspectives that seemed somewhat more optimistic. One survey in Qatar compared older adults to gender- and age-matched controls and found that the prevalence of depressive, anxiety, and stress scores in the elderly was not significantly different. 42 However, in the quarantine group, higher depressive, anxiety, and stress scores as well as lower resilience were associated with female gender. An Austrian survey comparing prepandemic and pandemic levels of loneliness found that although COVID-19 restrictions did result in increased levels of loneliness in the elderly, the effects were short-lived. 43 The authors concluded that they expect no strong negative consequences for mental health, although longitudinal studies are clearly needed.
An unintended consequence of specifically targeted quarantining policies was observed in Sweden. 44 As COVID-19 cases initially rose, the Public Health Agency in Sweden strongly advised avoiding contact with those aged 70 and above as a means to protect those deemed "weak and frail." Verbal abuse toward elderly Swedes for walking outside reportedly increased, thought to be due to the disparity in restriction guidelines based on age.
Interventions were made in an attempt to reduce social isolation and loneliness while still being physically apart. Intuitively, direct communication through social networking websites was associated with reduced loneliness, whereas passive engagement was associated with greater loneliness. 45 Flexibility in the delivery of loneliness and psychological interventions incorporating cognitive behavioral therapy improved their benefit. The elderly also often reported the importance of traditional communication methods, such as telephone calls. Involvement of the elderly in befriending programs demonstrated increases in self-confidence, allowing volunteers to give back to their community and benefit from social engagement. A cross-sectional study from Hong Kong showed that the elderly who continued to volunteer during the pandemic experienced fewer symptoms of depression and anxiety, suggesting that encouragement of volunteerism despite difficult circumstances can promote mental health. 46
Grief
Owing to the higher mortality rates among the elderly, we may presume greater rates of catastrophic grief as the elderly lose friends, family, and loved ones at such an unnaturally accelerated rate. There have been countless stories of those who have died alone due to social distancing requirements, with families and friends unable to say goodbye in person. Normal bereavement processes and the social and cultural rituals they require have been universally disrupted almost without exception. The expectation was that rates of prolonged grief disorder among the elderly would naturally increase as a result. 47 A cross-sectional survey not limited to the elderly did note that, while grief levels were higher after COVID-19 bereavement than after natural bereavement, grief severity was not significantly different pre- and post-pandemic. However, experiencing a loss during the pandemic elicited a more severe acute grief reaction. 48
Personal Protective Equipment and Communication
Sensory impairments, such as reduced visual and aural acuity, are most common in the geriatric population, who are also most likely to depend on the use of hearing aids, medical equipment, and compensatory strategies, such as lip-reading to function optimally. Widespread use of PPE during the pandemic has exacerbated the existing sensory and comprehension obstacles that the elderly face when communicating with family, friends, and service providers. Barriers to effective communications can negatively impact an individual's ability to confidently maintain meaningful social connections and express his or her needs effectively, hence contributing to loneliness, anxiety, disorientation, and distress.
Providers and caregivers are encouraged to implement pragmatic strategies for mindful communication during COVID-19. This may include approaching a patient from the front, giving time for older adults to process who you are, interacting at eye level, projecting a calm attitude, using short simple sentences, and emphasizing those sentences with gestures. 49

Reactions to public health announcements and media campaigns

Public health information campaigns have been an invaluable instrument for disseminating information about COVID-19, announcing new policies and procedures, and issuing recommendations on best practices. An information campaign that fails to convey the severity of the pandemic would induce a low level of arousal and, in turn, less action toward protecting oneself against the threat. At the other extreme, hyperbolic messaging may induce excessive stress and feelings of being overwhelmed, also leading to an inadequate response to a threat.
Owing to the heightened morbidity and mortality in the geriatric population after COVID-19 infection, there was specific messaging in informational campaigns and in the media emphasizing the risk of the virus to the elderly, 50 which has persisted late into the pandemic. 51 Although this message was intended to protect a more vulnerable generation, this repetitive reminder throughout the pandemic also emphasized the frailty of old age and may have had the unintended consequence of framing the elderly as a burden or a liability. Although drawing on data that are accurate in that the elderly are more at risk, the framing continues to divide the young and the old. Social media carried more blatant ageist sentiments. During a time when more individuals relied on social media to feel connected during quarantine, user-generated hashtags such as "#BoomerRemover" trended on the popular microblogging site Twitter. 52 Such tags collected and disseminated ageist messages and became a platform for expressions of intergenerational resentment.
Racial inequality and violence
The COVID-19 death rate in the United States has disproportionately consisted of non-Latino Black and Latino Americans, 53 a disparity that was most prominent in middle-aged adults but persisted into the 9th decade. These findings brought essential attention to sources of structural racism within these communities, noting higher rates of employment in "essential" positions, employment without paid sick leave, and living arrangements in densely populated areas or multigenerational homes. 54 Although living in close proximity to family caregivers was once a protective factor for well-being in these families, it became an unavoidable risk. Similar influences directly contributed to increased COVID-19 mortality and barriers to care among older Asian Americans. 55 The organization Stop AAPI (Asian American and Pacific Islander) Hate gathered data on racially motivated attacks on this group during the first year of the pandemic; of the 10,905 hate incidents reported, seniors were involved in up to 7%. 56 A National Public Radio report on the US census survey found that Asian American households were twice as likely as white households to report food scarcity at home during the pandemic due to fear of going out 57 ; in some locales, community organizations were able to respond to this need with meal delivery, but it is more likely that the elderly with baseline difficulties accessing services were left to manage on their own, perpetuating fear and uncertainty.
Telehealth and access to care
In the early stages of the pandemic, the shift to telemedicine services utilizing web-based videoconferencing platforms or telephone support allowed otherwise isolated individuals to have some connection to medical and psychiatric care. Given the nature of mental health care, psychiatry was uniquely poised for a transition to telehealth services, providing access to care with minimal risk of viral exposure. Although this rapid innovation was helpful to many, the infrastructure to support this shift was not available in many lower-resource countries and was often not sufficient in caring for elderly individuals with cognitive limitations. 58 Low computer literacy, poor Internet access, cognitive limitations, and sensory impairment remain barriers to the universal implementation and usefulness of telemedicine services for a substantial portion of this demographic. Although a number of modifications were made to cognitive evaluation tools to allow for virtual assessment, thorough cognitive assessments remained difficult to implement in a telehealth format. With waning in the perceived acuity of the pandemic and widespread availability of effective vaccinations and appropriate PPE, the availability of in-person care is again normalizing.
SUMMARY AND IMPLICATIONS FOR PRACTICE
Thoughtful assessment of elderly individuals will be essential for helping this cohort heal, recover, and adapt to the pandemic. This will include screening for prior COVID infection, assessing the severity of the prior infection and any short-or long-term neurocognitive effects of infection, assessing for the presence or worsening of psychiatric symptoms, appraising available social supports and changes in social engagement, identifying needs that were not met during pandemic restrictions and if these have been subsequently remedied, identifying loss and grief, and assessing for the appropriateness of in-person support versus utilization of telehealth to increase access to quality care.
Ongoing research will be essential in helping to better understand shifts in the cognitive, psychiatric, and physical health of elderly individuals during the later waves of the pandemic, and the impact of vaccination and other public health interventions on these domains.
The demand for geriatric psychiatry will grow even more precipitously in a late-pandemic and post-pandemic world, and large-scale efforts to address this resource gap will be essential for the health of our communities.
CLINICAL PEARLS
The direct effects of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection in the elderly include cognitive dysfunction, anxiety, depression, insomnia, and trauma-related symptoms, although the extent to which these are disruptive is likely a function of the severity of infection, the presence of pre-existing medical and psychiatric comorbidities, and the sociocultural context of the infection.
Sociocultural shifts during the coronavirus disease (COVID-19) pandemic generated unprecedented challenges for even noninfected individuals, relating to grief, isolation, loneliness, discrimination, and barriers to meeting basic needs.
The COVID-19 pandemic drastically impacted the delivery of care for individuals dependent on long-term care facilities, homecare services, and adult day services, shifting the burden of care to family caregivers.
Although older adults have experienced poor outcomes relating to cognitive health and mental health during the pandemic, they have also shown a greater degree of resilience compared to younger adults, especially in consideration of post-traumatic or acute stress symptoms.
Providers will continue to proactively support the physical, psychological, and social health of geriatric adults affected by the COVID-19 pandemic, including screening for and treating psychiatric symptoms; assessing for cognitive dysfunction; identifying unmet day-to-day needs; promoting vaccination; and supporting a safe return to valued activities and social relationships.
The pandemic provided the catalyst for the expansion of telehealth services in geriatric psychiatry, although in-person services remain necessary to care for the most vulnerable. | 2022-07-07T13:04:53.455Z | 2022-07-06T00:00:00.000 | {
"year": 2022,
"sha1": "0ef9da68fae2a0fb725986b7738d2b6691d33c47",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.psc.2022.07.007",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "3bf02c4975129bef450d135bbcb90e7a6b0c5d32",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": []
} |
14119475 | pes2o/s2orc | v3-fos-license | Search for massive rare particles with the SLIM experiment
The SLIM experiment is a large array of nuclear track detectors located at the Chacaltaya High Altitude Laboratory (5260 m a.s.l.). The preliminary results from the analysis of ~383 m^2 exposed for 4.07 y are here reported. The detector is sensitive to Intermediate Mass Magnetic Monopoles, 10^5<M_M<10^12 GeV, and to SQM nuggets and Q-balls, which are possible Dark Matter candidates.
Introduction
Grand Unified Theories (GUTs) of the strong and electroweak interactions predict the existence of magnetic monopoles (MMs), produced in the early Universe at the end of the GUT epoch, with very large masses, M_M > 10^16 GeV. GUT poles in the cosmic radiation should be characterized by low velocity and relatively large energy losses [1]. At present the MACRO experiment has set the best limit on GUT MMs for 4×10^-5 < β < 0.5 [2].
Intermediate Mass Monopoles (IMMs), with masses in the range 10^5-10^12 GeV, could also be present in the cosmic radiation; they may have been produced in later phase transitions in the early Universe [3]. The recent interest in IMMs is also connected with the possibility that they could yield the highest energy cosmic rays [4]. IMMs may have relativistic velocities since they could be accelerated in one coherent domain of the galactic magnetic field. In this case one would have to look for downgoing, fast (β > 0.1), heavily ionizing MMs.
Besides MMs, other massive particles have been hypothesized to exist in the cosmic radiation and to be components of the galactic cold dark matter: nuggets of Strange Quark Matter (SQM), called nuclearites when neutralized by captured electrons, and Q-balls. SQM consists of aggregates of u, d and s quarks (in approximately equal proportions) with slightly positive electric charge [5]. It has been suggested that SQM may be the ground state of QCD. SQM should be stable for all baryon numbers in the range between ordinary heavy nuclei and neutron stars (A ~ 10^57). Nuclearite interactions with matter depend on their mass and size. In ref. [6] different mechanisms of energy loss and propagation are considered in relation to their detectability with the SLIM apparatus. In the absence of any candidate, SLIM will be able to rule out some of the hypothesized propagation mechanisms.
Q-balls are super-symmetric coherent states of squarks, sleptons and Higgs fields, predicted by minimal super-symmetric generalizations of the Standard Model [7].They could have been produced in the early Universe.Charged Q-balls should interact with matter in ways not too dissimilar from those of nuclearites.
After a short description of the apparatus, we present the calibrations, the analysis procedures and the results from the SLIM experiment.
Experimental procedure
The SLIM (Search for LIght magnetic Monopoles) experiment, based on 440 m^2 of Nuclear Track Detectors (NTDs), has been deployed at the Chacaltaya High Altitude Laboratory (Bolivia, 5260 m a.s.l.) since 2001 [8]. The air temperature is recorded 3 times a day. From the observed ranges of temperatures we conclude that no significant time variations occurred in the detector response. The radon activity and the flux of cosmic ray neutrons were measured by us and by other authors [9]. Another 100 m^2 of NTDs have been installed at Koksil (Pakistan, 4600 m a.s.l.) since 2003.
Extensive test studies were made in order to improve the etching procedures of CR39 and Makrofol, improve the scanning and analysis procedures and speed, and keep a good scanning efficiency. "Strong" and "soft" etching conditions have been defined [10]. Strong etching conditions (8N KOH + 1.25% ethyl alcohol at 77 °C for 30 hours) are used for the first CR39 sheet in each module, in order to produce large tracks, easier to detect during scanning. Soft etching conditions (6N NaOH + 1% ethyl alcohol at 70 °C for 40 hours) are applied to the other CR39 layers in a module if a candidate track is found in the first layer; they allow more reliable measurements of the Restricted Energy Loss (REL) and of the direction of the incident particle. Makrofol layers are etched in 6N KOH + ethyl alcohol (20% by volume) at 50 °C. The detectors have been calibrated using 158 AGeV In^49+ (see Fig. 1) and 30 AGeV Pb^82+ beams. For soft etching conditions the threshold in CR39 is at REL ~ 50 MeV cm^2 g^-1; for strong etching the threshold is at REL ~ 250 MeV cm^2 g^-1. Makrofol has a higher threshold (REL ~ 2.5 GeV cm^2 g^-1) [11]. CR39 allows the detection of IMMs with two units of Dirac charge in the whole β range 4×10^-5 < β < 1. Makrofol is useful for the detection of fast MMs; nuclearites with β ~ 10^-3 can be detected by both CR39 and Makrofol.
The analysis of a SLIM module starts by etching the top CR39 sheet using strong conditions, reducing its thickness from 1.4 mm to ~0.6 mm. Since MMs, nuclearites and Q-balls should have a constant REL through the stack, the signal looked for is a hole or a biconical track with the two base-cone areas equal within the experimental uncertainties. The sheets are scanned with a low magnification stereo microscope. Possible candidates are further analyzed with a high magnification microscope. The size of surface tracks is measured on both sides of the sheet. We require the two values to be equal within 3 times the standard deviation of their difference. A track is defined as a "candidate" if the REL and the incidence angles on the front and back sides are equal to within 15%. To confirm the candidate track, the bottom CR39 layer is then etched in soft conditions; an accurate scan under an optical microscope with high magnification is performed in a region of about 0.5 mm around the expected candidate position. If a two-fold coincidence is found, the middle layer of the CR39 (and in the case of a high-Z candidate, the Makrofol layer) is analyzed with soft conditions.
Non reproducible candidates
In 2006 the SLIM experiment found a very strange event when analyzing the top CR39 layer of stack 7408. We found a sequence of many "tracks" along a 20 cm line; each of them looked complicated and very different from usual ion tracks, see Fig. 2 left (a, b). For comparison, Fig. 2 left (c) shows "normal" tracks from 158 AGeV Pb^82+ ions and their fragments, and Fig. 2 left (d) shows tracks from 400 AMeV Fe^26+ ions.
Since that "event" was rather peculiar, we made a detailed study of all the sheets of module 7408, and a search for similar events and in general for background tracks in all NTD sheets in the wagons around module 7408 (within a ∼ 1 m distance from module 7408).We etched "softly" all the sheets in order to be able to follow the evolution of the etch-pits.A second event was found in the CR39 bottom layer (top face) of module 7410, see Fig. 2right.Some background tracks in other modules were found after 30 h of soft etching.We decided to further etch "strongly" the 7410-L6 layer in short time steps (5h) and to follow the evolution of the "tracks" by systematically making photographs at each etching step.After additional strong etching, the "tracks" began more and more similar to those in the 7408-L1 layer, see Fig. 2right(b,c,d).The presence of this second event/background and its evolution with increasing etching casts stronger doubts on the event interpretation and supports a "background" interpretation also of the "tracks" in module 7408.We made different hypotheses and we checked them with the Intercast Co.Since 1980 we analyzed more than 1000 m 2 of CR39 using different etching conditions and we have not seen before any of the above mentioned cases.It appears that we may have been hit by an extremely rare manufacturing defect involving 1 m 2 of CR39.
Results and Conclusions
We etched and analyzed 383 m^2 of CR39, with an average exposure time of ~4.07 years. No candidate passed the search criteria: the 90% C.L. upper limits for a downgoing flux of IMMs with g = g_D, 2g_D, 3g_D and for dyons (M+p) are at the level of ~1.5×10^-15 cm^-2 s^-1 sr^-1 for β ≥ 4×10^-2, see Fig. 3 left. The same sensitivity was reached also for nuclearites with β ≥ 10^-4 (Fig. 3 right) and for Q-balls coming from above.
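For orientation, the quoted limit can be reproduced from the exposure alone; the sketch below assumes the standard 90% C.L. Poisson upper limit of 2.3 events for zero observed candidates, an effective acceptance of πA for an isotropic downgoing flux on a planar detector, and full detection efficiency (all our assumptions, not statements from the analysis).

```python
# 90% C.L. flux upper limit for zero observed candidates (sketch):
# Phi_90 = 2.3 / (A * Omega_eff * T), with Omega_eff = pi for an
# isotropic downgoing flux on a planar detector and efficiency = 1.
import math

area_cm2 = 383e4               # 383 m^2 analyzed, in cm^2
time_s   = 4.07 * 3.156e7      # 4.07 y average exposure, in s
phi_90   = 2.3 / (area_cm2 * math.pi * time_s)
print(f"{phi_90:.2g} cm^-2 s^-1 sr^-1")  # ~1.5e-15, matching the quoted limit
```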
Figure 1: Calibrations of CR39 nuclear track detectors with 158 AGeV In^49+ ions and their fragments.
Figure 2: Left: (a) Global view of the "event" tracks in the L1 layer of wagon 7408. (b) Microphotographs of the 22 cones at the top of panel (a). (c) Normal tracks of 158 AGeV Pb^82+ ions and their fragments (soft etching), and (d) of 400 AMeV Fe^26+ ions and their fragments (strong etching). Right: Example of "tracks" in the L6 layer of wagon 7410: (a) after 30 h of soft etching, (b) after 5 h more of strong etching, (c) after 4 h more of strong etching, and (d) after 10 h more of strong etching. | 2007-12-10T11:21:10.000Z | 2007-12-10T00:00:00.000 | {
"year": 2007,
"sha1": "bda9a159c1ebde5699615537a91148bb3cd952b7",
"oa_license": null,
"oa_url": "https://arxiv.org/pdf/0712.1438",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d1dcbccc74b2bff70df71107e440e4b200f446de",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
268702479 | pes2o/s2orc | v3-fos-license | Selenium Nanoparticle Activity against S. mutans Biofilms as a Potential Treatment Alternative for Periodontitis
The disruption of periodontal biofilms and prevailing antimicrobial resistance issues continue to pose a great challenge to the treatment of periodontitis. Here, we report on selenium nanoparticles (SeNPs) as a treatment alternative for periodontitis by determining their antibiofilm activity against S. mutans biofilms and the potential role of particle size in disrupting biofilms. SeNPs were synthesised via a reduction reaction. Various physicochemical characterisations were conducted on the NPs, including size and shape. The microbroth dilution method was used to conduct the biofilm and antibiofilm assays against S. mutans, which were analysed by absorbance. SeNPs displayed hydrodynamic sizes as low as 46 ± 4 nm at a volume ratio of 1:5 (sodium selenite/ascorbic acid) with good monodispersity and stability. Hydrodynamic sizes of SeNPs after resuspension in tryptic soy broth supplemented with 2.5% sucrose (TSB + 2.5% suc.) and incubation at 37 °C for 24 h ranged from 112 to 263 nm, while the zeta potential values increased to greater than −11 mV. The biofilm assay indicated that S. mutans is a weakly adherent, bordering on moderately adherent, biofilm producer. The minimum biofilm inhibitory concentration (MBIC) was identified at 500 µg/mL. At a 1000 µg/mL concentration, SeNPs were able to inhibit S. mutans biofilms by up to 99.87 ± 2.41% at a volume ratio of 1:1. No correlation was found between antibiofilm activity and particle size; however, antibiofilm activity was proven to be concentration-dependent. SeNPs demonstrate antibiofilm activity and may be useful for further development in treating periodontitis.
Introduction
Periodontitis is an accelerated and severe stage of gingivitis in which the gums become inflamed and detach from the teeth. This progressive disease can result in potential bone loss, tooth decay, and tooth loss [1]. The Global Burden of Disease Study, assessing disease burden from 1990 to 2016, revealed that periodontitis is the eleventh most prevalent disease around the globe and affects approximately 20-50% of the population [2], while severe periodontal diseases affect around 19% of adults globally [1]. Considered one of the biggest threats to dental health, periodontitis is a major public health concern that may hinder or interfere with the mastication process and alter one's appearance and confidence, consequently affecting quality of life [3]. Periodontal diseases may similarly expose people to vast socio-economic burdens and healthcare costs. In 2018, a study intended to estimate the economic burden of periodontitis in the US and Europe revealed that indirect costs as a result of periodontal disease totalled USD 150.57 billion in the US and EUR 156.12 billion in Europe. The overall estimates also revealed that periodontal disease caused a total loss of USD 154.06 billion in the US and EUR 158.64 billion in Europe in the year 2018, of which indirect costs made a significant impact [4].
The oral cavity is an intricate ecosystem which hosts over 150 diverse bacterial species in an individual, as well as other types of microorganisms, including archaea, fungi, protozoa, and viruses [5]. The onset of periodontitis is attributed to the formation of pathogenic biofilm. Biofilms are a complex network of bacteria that develop over time on dental surfaces [6], which can stimulate an inflammatory host response, resulting in the degradation of supporting periodontal tissues and subsequent tooth loss [7]. Streptococcus mutans is a Gram-positive bacterium that is an influential etiologic agent in dental caries [8]. S. mutans naturally thrives within the human oral cavity but more so in dental plaque, which is a biofilm of different species of microorganisms that develops on the hard surfaces of the tooth. This microbial species is not solely responsible for producing dental caries; it also plays a role in altering the local environment by developing a milieu that is rich in extracellular polysaccharides and low in pH [8,9]. These modifications, in turn, create an ideal environment for other acidogenic and aciduric species to flourish. Studies have been employed to detect S. mutans in oral sites, not only for its role in developing caries but also due to its association with extraoral pathologies. S. mutans and a subset of its strains have previously been associated with sub-acute bacterial endocarditis as well as some other extraoral infections such as cerebral microbleeds, immunoglobulin A nephropathy and atherosclerosis [10].
To date, no standardised treatment exists for periodontitis. However, the conventional course of treatment generally involves the implementation of non-surgical procedures such as debridement, scaling and root planing (SRP). These non-surgical procedures eradicate plaque biofilm and calculus, followed by smoothing the tooth or root surface, and may be performed both supra- and sub-gingivally [11]. While SRP is reported to be successful at reducing the microbial load within the oral cavity, it has not proven efficacious at eliminating the pathogenic species from a subject infected with periodontitis and may result in the recolonisation of these species and possibly the progression of periodontitis [12]. Therefore, these procedures typically require the use of adjunctive antimicrobial and anti-inflammatory agents. However, these chemotherapeutic agents are not short of limitations. Studies have previously revealed high rates of resistance of selected sub-gingival periodontal pathogens to a broad range of antimicrobial agents typically used in clinical practice [13]. Hence, supplementary microbial and antibiotic susceptibility testing prior to initiating antimicrobial therapy may be required. Also included in the potential treatment options is the development of advanced local drug delivery devices such as films, fibres, gels, and strips. However, the difficulty in gaining access to and determining the depth of periodontal pockets, costly manufacturing, the use of non-bioabsorbable and non-biodegradable materials, and the troublesome and time-consuming insertion and removal of certain devices make these advanced drug delivery options a challenging therapeutic alternative [14].
Nanotechnology promises to provide an opportunity to resolve certain shortcomings experienced by current treatment modalities, especially those involving the treatment of periodontitis. The field of nanotechnology offers treatment alternatives for the restoration of damaged, infected, absent, and fractured teeth [15,16]. Several of the latest progressions in this field include the integration of nanocomposites, nanoimpressions, and nanoceramics within clinical dentistry [15]. Metallic nanoparticles (NPs) are small particles that are 1-100 nm in size and are composed of merely a few hundred atoms. Metallic NPs, along with their innate antibacterial activity, present several possibilities for eliminating pathogenic microorganisms in the oral cavity in comparison to conventional treatment [17]. Along with antibiofilm and antibacterial activity, metallic NPs may also play a regenerative role within the oral cavity [18]. The biggest advantage of metallic NPs as antimicrobial agents is their ability to act simultaneously through multiple mechanisms. These include stimulating reactive oxygen species (ROS) production, which may hinder DNA replication and amino acid synthesis, resulting in the destruction of the bacterial cell membrane [19]. As a result, microbes are unable to develop resistance to these mechanisms of action, contrary to conventional antibiotics [20]. Selenium nanoparticles (SeNPs) have gained a considerable amount of attention as they possess favourable properties such as biocompatibility, bioavailability, and low toxicity, and have been proven to bear excellent antimicrobial activity [21,22]. With the imminent rise in antimicrobial resistance and the need for alternative antibacterial agents, metallic NPs have proven to be effective at combating microbial infections as their bactericidal activity is different from that of conventional antimicrobial agents [23,24]. Although studies have already proven that SeNPs exhibit exceptional antimicrobial activity, limited research explores their antibiofilm activity against S. mutans and whether the size of the NPs could influence biofilm penetration and, consequently, biofilm disruption. This work, therefore, aimed to synthesise and characterise SeNPs and determine the antibiofilm activity and MBIC of SeNPs against S. mutans while also distinguishing the effect of particle size on antibiofilm activity.
SeNP Synthesis
SeNPs were produced via a wet chemical reduction reaction, which was adapted from [25,26] with necessary modifications. A quantity of 0.179 g of sodium selenite was transferred to a conical flask along with 10 mL of 10 mM SDS stock solution and stirred at 300 rpm, at 25 °C, for 30 min. While the reaction proceeded, various volumes of 50 mM ascorbic acid stock solution were added dropwise to produce multiple samples with final volume ratios of 1:1, 1:3 and 1:5 of sodium selenite to ascorbic acid. The reaction proceeded for 30 min, and colour changes were observed from clear to orange/red. Thereafter, each NP sample was centrifuged for 40 min at 12,580× g. The precipitate was collected and redispersed in dH2O. This experiment was repeated and performed in triplicate for each volume ratio (n = 3). The SeNP samples were stored at 4 °C for future use. Bulk stock solutions of SeNPs with known concentrations were prepared by drying, weighing, and resuspending samples from each volume ratio in sterile water and, thereafter, sonicating. Bulk samples were also stored at 4 °C for future use.
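As a quick sanity check on these proportions, the reagent stoichiometry implied by the protocol can be computed directly. The sketch below is illustrative only: it assumes the stock concentrations stated above and the standard molar mass of sodium selenite (about 172.94 g/mol), and the function name is ours, not the paper's.

```python
# Illustrative stoichiometry check for the synthesis described above.
MW_NA2SEO3 = 172.94  # g/mol, sodium selenite (standard value)

def molar_ratio(selenite_g=0.179, selenite_ml=10.0,
                ascorbic_mM=50.0, volume_ratio=1.0):
    """Mol ascorbic acid per mol selenite for a given volume ratio
    (mL of 50 mM ascorbic acid stock per mL of selenite solution)."""
    selenite_mmol = selenite_g / MW_NA2SEO3 * 1000  # ~1.04 mmol in 10 mL
    ascorbic_mmol = ascorbic_mM / 1000 * selenite_ml * volume_ratio
    return ascorbic_mmol / selenite_mmol

for vr in (1, 3, 5):
    print(f"1:{vr} -> {molar_ratio(volume_ratio=vr):.2f} mol AA / mol selenite")
# 1:1 -> ~0.48, 1:3 -> ~1.45, 1:5 -> ~2.42
```

On these assumptions, only the 1:3 and 1:5 ratios supply more than one molar equivalent of reducing agent, consistent with the excess-reductant argument discussed in the results.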
Hydrodynamic Size, Polydispersity Index and Zeta Potential Analysis
The hydrodynamic size, polydispersity index (PDI) and ZP of the SeNPs were characterised using a Zetasizer ZS-90, which employs the dynamic light scattering (DLS) technique (Malvern, UK, software version 8.00.4813). NP samples were prepared as described in Section 2.2 (SeNP Synthesis), after which SeNP samples were redispersed in dH2O and temporarily refrigerated (<24 h) before being subjected to size, PDI and ZP analysis at an unknown concentration. Each sample was pipetted into a disposable polystyrene cuvette for size and PDI analysis and a DTS70 cell for ZP analysis at 25 °C.
High-Resolution Transmission Electron Microscopy (HR-TEM) Analysis and Size Distribution
HR-TEM analysis was performed on SeNP samples at volume ratios of 1:1, 1:3 and 1:5 in dH2O, which were analysed using a FEI Tecnai F20 TEM (Thermo Fisher (FEI), Eindhoven, The Netherlands) at 200 kV. The core diameter of the SeNPs was verified via ImageJ software version 1.53k from the TEM images acquired, and the particle size distribution graphs were constructed in OriginPro 2023b software.
Stability of SeNPs
The stability of SeNPs was evaluated in dH2O and in TSB + 2.5% suc. at a final concentration of 1 mg/mL. The hydrodynamic size, PDI and ZP were measured at 0-, 1-, 7-, 14- and 30-day time points using DLS (Malvern Zetasizer, software version 8.00.4813). Samples of 1 mg/mL SeNPs in TSB + 2.5% suc. were incubated for 24 h at 37 °C and characterised via DLS.
S. mutans Culture Preparation
Approximately 3-4 single colonies were selected from the overnight culture of isolated S. mutans and used to inoculate a 5 mL tube containing sterile TSB + 2.5% suc., which was then incubated overnight at 37 °C. The overnight culture was adjusted to obtain an optical density (OD) equivalent to 0.5 McFarland using a spectrophotometer at a wavelength of 540 nm. The bacterial suspension was then further diluted 1:100 with TSB + 2.5% suc. The wells on the perimeter of a 96-well plate were filled with 200 µL of sterile water to avoid possible contamination. The remaining wells contained 200 µL aliquots of bacterial suspension and uninoculated TSB + 2.5% suc. (sterility controls). The plates were then sealed and incubated overnight at 37 °C for biofilm formation. The assay was repeated in triplicate for incubation periods of 24 and 48 h.
Crystal Violet Biofilm Assay
After incubation, wells were decanted, and each plate was washed thrice with a sterile 0.9% saline solution. Biofilms that remained adherent after the wash were fixed with methanol at room temperature for 20 min, followed by decanting of the methanol and air-drying of the plates. Test wells were then stained with 200 µL of a 0.1% CV solution for 15 min at room temperature. The CV solution was then discarded, and plates were washed thrice in sterile saline solution and allowed to dry. Adhered biofilm was resolubilised with 33% acetic acid. A sample from each test well was then transferred to the wells of a clean, optically clear 96-well plate. The absorbance was measured via UV spectroscopy (POLARstar Omega plate reader (software version 3.31), BMG LABTECH, Ortenberg, Germany) at a wavelength of 540 nm. The average and SD of the absorbance measurements were reported. The extent of biofilm adherence was categorised as shown in Table 1. The OD of the experimental wells containing inoculated media (OD) was compared to the OD of the sterility control (OD (control)) containing uninoculated media.
Table 1. Type of biofilm adherence and the respective OD criteria required for each adherence category [27].
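Since the body of Table 1 is not reproduced in this text, the sketch below illustrates the general shape of such an OD-based classification. The specific cutoffs (ODc taken as the sterility-control mean plus three standard deviations, with 2×ODc and 4×ODc boundaries) follow the commonly used adaptation of the Christensen scheme and are an assumption here rather than a quotation of Table 1.

```python
# Hypothetical OD-based adherence classifier in the spirit of Table 1.
from statistics import mean, stdev

def classify_adherence(test_ods, control_ods):
    odc = mean(control_ods) + 3 * stdev(control_ods)  # assumed cutoff definition
    od = mean(test_ods)
    if od <= odc:
        return "non-adherent"
    if od <= 2 * odc:
        return "weakly adherent"
    if od <= 4 * odc:
        return "moderately adherent"
    return "strongly adherent"

# Illustrative values close to those reported below for the 24 h assay:
print(classify_adherence(test_ods=[0.24, 0.23, 0.25],
                         control_ods=[0.14, 0.13, 0.15]))  # -> weakly adherent
```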
Antibiofilm Assay
An overnight culture of S. mutans was adjusted to 0.5 McFarland and diluted 1:100 in TSB + 2.5% suc., as in the biofilm assay. A twofold serial dilution of UV-sterilised (via a UV chamber, Ultra-Violet Products, Inc., Los Angeles, CA, USA) SeNP suspension (2000 µg/mL) in aliquots of TSB + 2.5% suc. was performed. The final concentration in test wells ranged from 7.81 to 1000 µg/mL. A 100 µL volume of the 100-fold diluted bacterial suspension was pipetted into the test wells. Two separate columns were assigned as the growth and sterility controls, which contained inoculated and uninoculated media, respectively. The wells on the perimeter of the plate were filled with sterile water. The plates were then covered, sealed and incubated overnight at 37 °C, followed by further processing as per the CV assay described earlier. This experiment was repeated for each SeNP formulation ratio (1:1, 1:3 and 1:5) and then replicated thrice for each volume ratio to obtain a mean and SD (n = 3).
The percentage of inhibition of biofilm formation was calculated as shown in Equation (1) [28]. Biofilm inhibition was evaluated between 0 and 100%. Percentage inhibition below 0 was categorised as biofilm growth enhancement; values between 0 and 50% indicated weak antibiofilm activity, and values above 50% depicted good biofilm inhibition [29].
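Equation (1) itself is not reproduced in this text; the sketch below assumes the standard formulation of percentage biofilm inhibition relative to the untreated growth control, combined with the categories of [29]. The absorbance values are illustrative, not measured data.

```python
# Assumed form of Equation (1): inhibition relative to the growth control.
def percent_inhibition(od_treated, od_growth_control):
    return (1.0 - od_treated / od_growth_control) * 100.0

def categorise(pct):
    if pct < 0:
        return "biofilm growth enhancement"
    return "weak antibiofilm activity" if pct <= 50 else "good biofilm inhibition"

for od in (0.30, 0.18, 0.05):  # illustrative test-well absorbances
    pct = percent_inhibition(od, od_growth_control=0.24)
    print(f"{pct:6.1f}% -> {categorise(pct)}")
```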
Statistical Methods
Measurements were reported as the mean ± standard deviation (SD). Statistical analysis was performed via GraphPad Prism version 9.50 (730) using Tukey's multiple comparison test (one- or two-way ANOVA), whereby probability values <0.05 were considered significant.
Hydrodynamic Size, PDI and ZP
The average hydrodynamic sizes of SeNPs at volume ratios of 1:1, 1:3 and 1:5 were 70 ± 17 nm, 47 ± 1 nm, and 46 ± 4 nm, respectively (Figure 1a). Similarly, the PDI values at volume ratios of 1:1, 1:3 and 1:5 were 0.23 ± 0.04, 0.25 ± 0.09, and 0.28 ± 0.01, respectively (Figure 1b), indicating relatively monodisperse samples. The results indicate that the size of the SeNPs is minimally affected by the proportion of the reducing agent in the formulation. Earlier studies on SeNP synthesis that tested similar volume ratios reported that an increase in the reducing agent during synthesis has a significant impact on the hydrodynamic size of the formulation [30,31]. In theory, ascorbic acid, with a reducing capacity of 1-2, should fully reduce a volume ratio of 1:2 of sodium selenite to ascorbic acid; however, an excess of reducing agent causes greater reduction to occur, which subsequently retards oxidation and therefore results in smaller sized particles [31]. This concept is displayed by the NP formulations with volume ratios of 1:3 and 1:5. SeNPs at volume ratios of 1:1, 1:3 and 1:5 generated ZP values of −49.5 ± 4.2 mV, −43.3 ± 4.3 mV and −32.6 ± 5.56 mV, respectively (Figure 1c). As these formulations were observed to be relatively stable, no trend was detected between the reaction components and ZP, as no statistically significant data were produced in this particular experiment. It can, therefore, be deduced that the ZP and stability of the SeNP sample do not depend on the proportion of the ascorbic acid (the reducing agent) present during synthesis.
HR-TEM Analysis and Size Distribution of SeNPs
The TEM images presented in Figure 2a confirm that the 1:1 SeNPs had irregularly shaped particles, while the 1:3 (Figure 2b) and 1:5 (Figure 2c) SeNP samples displayed more uniformly shaped spherical particles with good monodispersity. It has been reported in the literature that increasing the reducing agent during the synthesis of certain metal-based NPs reduces the number of dispersed metallic nanoparticles, which subsequently reduces the agglomeration of nanoparticles, and this seems to be the case in this study [32,33]. The size distribution graphs reveal that the average core diameters for the 1:1, 1:3 and 1:5 samples were 72 ± 20 nm, 40 ± 7 nm and 56 ± 12 nm, respectively, and these are comparable to the hydrodynamic diameters. No statistical differences were observed between the core diameter and the hydrodynamic diameter for each formulation ratio (p > 0.05).
Stability of SeNPs in dH2O
The 1:3 and 1:5 volume ratios generated particle sizes well below 100 nm and PDI values below 0.5 for the 30-day study period. The 1:1 ratio produced the largest hydrodynamic size of 175 ± 19 nm (Figure 3a) at the immediate characterisation (0 days). Consequently, the highest PDI of 0.59 ± 0.09, at a volume ratio of 1:1, was also recorded at the 0-day reading (Figure 3b). The NP size and PDI of the 1:1 formulation appeared to reduce and stabilise with time; however, the 1:3 and 1:5 ratios, with larger proportions of the reducing agent present, provided more consistent and stable particle size and PDI results for the duration of the stability study. Statistically significant PDI data were identified for the 1:3 formulation between the day 0 and day 1 measurements (p = 0.0412), after which the PDI data appeared more consistent over time. Despite this, the 1:3 and 1:5 groups consistently proved to be monodisperse over the 30-day period.
For the ZP measurement, the only statistically significant results were produced by the 1:1 ratio at the 0-day characterisation point, which also produced the highest ZP value of −6.22 ± 2.45 mV. This measurement exhibited a difference when compared to the 1- (p = 0.0009), 7- (p = 0.0249) and 14-day (p = 0.0050) ZP results (Figure 3c). These statistically significant results can be similarly attributed to the fact that this ratio achieved the largest hydrodynamic size and highest PDI at the 0-day characterisation point and subsequently produced the most unstable result for this sample as well. Subsequent to the 1-day measurement, however, the 1:1 formulation appeared to gradually destabilise over time, particularly between the 1-day and 30-day ZP characterisation points, which produced a statistically significant result (p = 0.0182). No statistically significant data were observed amongst the 1:3 and 1:5 groups. The ZP values for these ratios fluctuated around the −30 mV range, and the results indicate a slight but insignificant decrease and stabilisation in the ZP value over time, followed by an insignificant increase and destabilisation at the 30-day mark. Previous stability studies performed on SeNPs with the addition of a stabiliser reported comparable results in that the formulation was able to remain stable for more than 30 days; however, it was only able to remain stable for a certain period in the aqueous medium, after which the NPs became increasingly unstable [34,35].
Stability of SeNPs in TSB + 2.5% Sucrose
The DLS analysis revealed that the hydrodynamic size of SeNPs in TSB + 2.5% suc. increased considerably (Figure 4a) in comparison to the particle sizes generated 24 h (1 day) after formulation in dH2O (Figure 3a). The differences in hydrodynamic sizes produced between the 1:1, 1:3 and 1:5 samples in TSB + 2.5% suc. and in dH2O were 151 nm, 98 nm and 64 nm, respectively. Despite the increase in size displayed by the samples incubated in TSB + 2.5% suc., it appeared that the higher proportion of ascorbic acid rendered the SeNP sample less prone to large particle size increases. This is evident from the linear relationship exhibited in the size differences in relation to the proportion of ascorbic acid present in the formulation. The SeNPs at a concentration of 1000 µg/mL, suspended in TSB and incubated for a period of 24 h, demonstrated aggregation. Despite the particle size substantially increasing, the PDI only slightly increased (Figure 4b). SeNPs suspended in TSB also displayed a substantial decline in stability, as displayed by all volume ratios exhibiting ZP values above −11 mV (Figure 4c).
Biofilm Forming Ability of S. mutans
The average absorbance of the biofilm formed by S. mutans over 24 and 48 h incubation periods was 0.24 ± 0.014 and 0.21 ± 0.002, respectively (Figure 5). This is slightly less than double that of the sterility control (24 h), which had an average absorbance of 0.14 ± 0.005 (p < 0.0001). The biofilm results obtained over the 48 h incubation period were statistically significant when compared to both the 24 h sterility control (p = 0.0007) and the 24 h biofilm formation assay (p = 0.0403). According to the biofilm classification described by Christensen et al. (1985) [27], both the 24 and 48 h results classified the S. mutans strain as a weak, bordering on moderate, biofilm producer. Li et al. (2013), in a study that tested the sucrose-dependent biofilm formation of three S. mutans strains after 24 h incubation against different nicotine concentrations, reported that all three strains exhibited absorbance values below 0.07 at 490 nm [36]. These authors also performed a similar experiment to that reported in this study to test saliva-dependent biofilm formation against nicotine, in which the biofilm growth control exhibited absorbance values below 0.15 at 490 nm for all three S. mutans strains [36]. The absorbance results for both experiments appeared to be significantly lower compared to the present study, even though the assays were conducted under similar experimental conditions. Additionally, this indicates that the biofilm assay from this study might be superior, as the process of coating wells in saliva was not necessary to produce results within a similar absorbance range.
Effect of Size and Concentration on Antibiofilm Activity of SeNPs
As shown in Figure 6, the highest biofilm percentage inhibition was achieved at a 1000 µg/mL SeNP concentration, at which the formulation ratio 1:1 of sodium selenite to ascorbic acid generated 99.87 ± 2.41% inhibition relative to the growth control (0 µg/mL) at 0% biofilm inhibition. Volume ratios of 1:3 and 1:5 of sodium selenite to ascorbic acid possessed inhibition percentages comparable to that of 1:1 at the 1000 µg/mL SeNP concentration. Nearly all concentrations that exhibited biofilm inhibition percentages well below 50% (7.81-62.5 µg/mL) demonstrated statistically significant differences from the SeNP concentrations that exhibited biofilm inhibition percentages well above 50% (250-1000 µg/mL) (p < 0.0001). Adeyemo et al. (2022) characterised biofilm percentage inhibition as follows: <0, biofilm growth enhancers; 0-50%, weak antibiofilm activity; and >50%, good biofilm inhibitors [29]. By this scheme, SeNP concentrations of 125 µg/mL for the 1:1 ratio and 250-1000 µg/mL for the 1:3 and 1:5 ratios were classified as good biofilm inhibitors, while concentrations below 31.25 µg/mL, 62.5 µg/mL and 15.63 µg/mL were characterised as biofilm growth enhancers for the 1:1, 1:3 and 1:5 ratios, respectively. However, another perspective could be that the SeNPs are operating at sub-antibiofilm concentrations and are simply incapable of hindering growth at such low concentrations and, therefore, display concentration-dependent activity.
Similarly, Saeki et al. (2021) reported that sub-inhibitory concentrations of AgNPs (½ MIC, 7.81-31.25 µM) significantly increased (p < 0.05) swarming, swimming, twitching motility, and the biofilm formation capacity in P. aeruginosa isolates [37]. Among the possible mechanisms theorised in that study is the fact that AgNPs, and more broadly metallic NPs, are responsible for the production of ROS, which might cause oxidative stress [38]. The excess production of ROS ensuing under this oxidative stress may cause adverse effects on cellular components and the destruction of proteins, DNA, and lipids in the microbes [39]. Another potential theory could be that reduced concentrations of metallic NPs can stimulate biofilm development as a defence mechanism against toxicity [40]. This suggests that subinhibitory concentrations stimulate biofilm formation, which accounts for the 'biofilm enhancer' classification acquired by the lower concentrations of SeNPs. Additionally, this classification denotes that the S. mutans bacterial strain has the potential to operate as a moderate biofilm producer [27].
Several studies have previously reported on the MIC of SeNPs, concluding that the MIC against S. mutans could be as low as 68 µg/mL [41]. While these studies have reported commendable results at relatively low concentrations, MICs are only a representation of the activity of a drug against planktonic bacteria. This study investigated the effect of SeNPs on biofilms, which are clusters of surface-associated bacteria embedded in a self-produced matrix. Biofilms are known to be rather formidable, highly resistant, and less sensitive to antimicrobial agents in comparison to planktonic bacteria [42]. It is likely, for this reason, that higher concentrations of SeNPs were required in this study to achieve optimal antibacterial/antibiofilm activity and, as such, ultimately yielded a higher MBIC.
Kwasny & Opperman (2010) proposed that the lowest concentration of an agent that inhibited biofilm growth by ≥80% should be considered the MBIC [43]. Hence, an SeNP concentration of 500 µg/mL can be considered the MBIC across all formulations. The biofilm inhibition data also show a broadly proportional relationship with concentration, as the percentage of inhibition tends to increase with increasing SeNP concentration.
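To make the MBIC rule concrete, the sketch below applies the ≥80% inhibition criterion of Kwasny & Opperman to a twofold dilution series like the one used in this study; the inhibition percentages are illustrative values, not the measured data.

```python
# Applying the >=80% MBIC rule to an illustrative dilution series.
def twofold_series(top=1000.0, n=8):
    return [top / 2**i for i in range(n)]  # 1000 ... 7.8125 ug/mL

def mbic(inhibition_by_conc, threshold=80.0):
    passing = [c for c, pct in inhibition_by_conc.items() if pct >= threshold]
    return min(passing) if passing else None  # lowest concentration meeting it

data = {1000: 99.9, 500: 92.0, 250: 70.0, 125: 55.0, 62.5: 20.0}  # illustrative
print(twofold_series())
print("MBIC:", mbic(data), "ug/mL")  # -> 500 ug/mL
```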
During formulation, the SeNP samples (1000 µg/mL) that were suspended in TSB + 2.5% suc. at volume ratios 1:3 and 1:5 produced better monodispersity and smaller hydrodynamic sizes in comparison to the 1:1 resuspended sample (Figure 4). It is also important to note that no statistically significant comparisons were observed amongst the different volume ratios at concentrations that produced significant antibiofilm activity (500 µg/mL and above). Therefore, it was deduced that the size of these SeNPs during formulation did not greatly affect their antibiofilm activity. Studies have previously reported that antibacterial activity is not significantly impacted by NP size as much as it may be impacted by NP concentration or excess production of ROS [44,45]. Similarly, this study displays a directly proportional relationship between the concentration of SeNPs and the inhibition of S. mutans, but not so much a correlation between the varying particle sizes displayed by each formulation ratio and percentage inhibition.
Conclusions
This study aimed to investigate the biofilm formation ability of S. mutans and determine the antibiofilm activity as well as the MBIC of SeNPs, and whether SeNP size influences the biofilm inhibition percentage. SeNPs were as small as 46 ± 4 nm and were found to be monodisperse and relatively stable. SeNPs showed excellent antibiofilm activity against S. mutans, up to 99.87 ± 2.41%, and this activity was concentration-dependent. SeNPs are promising candidates for further development as novel therapies for the treatment of periodontitis. Further research could explore the cytotoxicity of the formulation in a mammalian cell line to rule out any potential noxious effects and determine its ability to eradicate preformed S. mutans biofilms, while also refining the antimicrobial mechanisms of action associated with metallic NPs. Furthermore, future research could experiment with incorporating SeNPs into various drug delivery systems for controlled and localised drug release, such as a hydrogel scaffold, on account of its biocompatibility and physical, chemical and biological properties comparable to those of human tissues. Additionally, this SeNP formulation also exhibits great potential for loading into a niosome, as this system has been known to significantly enhance the biological properties of drugs, such as their antimicrobial and antibiofilm activity.
Figure 1. The (a) particle size, (b) PDI and (c) ZP results of the SeNP formulations at volume ratios of 1:1, 1:3 and 1:5 of sodium selenite to ascorbic acid (n = 3). The synthesis reaction proceeded at 25 °C, for 30 min, at 300 rpm and incorporated 10 mM of SDS as a stabiliser. Statistical significance is represented as follows: *: p-value ≤ 0.05. Error bars represent SD.
Figure 2. HR-TEM images displaying the morphology, core diameter and corresponding size distribution graph generated via ImageJ software for SeNPs suspended in dH2O at volume ratios (a) 1:1, (b) 1:3 and (c) 1:5, at 100 nm.
Figure 3. Stability of SeNPs in dH2O at different time points of 0, 1, 7, 14 and 30 days and the effect of time on the (a) particle size, (b) PDI and (c) ZP of SeNPs at different volume ratios of sodium selenite to ascorbic acid (n = 3). The synthesis reaction proceeded at 300 rpm, for 30 min, at 25 °C and incorporated 10 mM of SDS as a stabiliser. Statistical significance is represented as follows: *: p-value ≤ 0.05, **: p-value < 0.01 and ***: p-value < 0.001. Error bars represent SD.
Figure 4. Stability of 1000 µg/mL concentrations of SeNPs in TSB + 2.5% suc. after 24 h of incubation at 37 °C and the effect on the (a) particle size, (b) PDI and (c) ZP of SeNPs at different volume ratios of sodium selenite to ascorbic acid (n = 3). The synthesis reaction proceeded at 300 rpm, for 30 min, at 25 °C and incorporated 10 mM of SDS as a stabiliser. Statistical significance is represented as follows: **: p-value < 0.01, ***: p-value < 0.001 and ****: p-value < 0.0001. Error bars represent SD.
Figure 5. Biofilm formation of S. mutans in TSB + 2.5% sucrose at 24 and 48 h incubation time periods and sterility control at 24 h incubation, as measured by absorbance at 540 nm (n = 3). Statistical significance is represented as follows: *: p-value ≤ 0.05, ***: p-value < 0.001 and ****: p-value < 0.0001. Error bars represent SD.
Figure 6. Percentage biofilm inhibition of SeNPs at volume ratios of 1:1, 1:3 and 1:5 and concentrations of 7.81, 15.63, 31.25, 62.5, 125, 250, 500 and 1000 µg/mL, with 0 µg/mL as the negative control, for a 24 h incubation period (n = 3). Not all statistically significant comparisons are depicted on the graph. Error bars represent SD. | 2024-03-27T15:04:21.793Z | 2024-03-25T00:00:00.000 | {
"year": 2024,
"sha1": "4ee5a5010d5d123cde24381169736f5189a4cd53",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1999-4923/16/4/450/pdf?version=1711340889",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "08245adbc4d0d9274bc733722d7fc61a15c1efe9",
"s2fieldsofstudy": [
"Medicine",
"Materials Science",
"Environmental Science"
],
"extfieldsofstudy": []
} |
227248337 | pes2o/s2orc | v3-fos-license | Bimodal distribution and set point HBV DNA viral loads in chronic infection: retrospective analysis of cohorts from the UK and South Africa [version 2; peer review: 1 approved]
Hepatitis B virus (HBV) viral load (VL) is used as a biomarker to assess risk of disease progression, and to determine eligibility for treatment. While there is a well recognised association between VL and the expression of the viral e-antigen protein, the distributions of VL at a population level are not well described. We here present cross-sectional, observational HBV VL data from two large population cohorts in the UK and in South Africa, demonstrating a consistent bimodal distribution. The right-skewed distribution and low median viral loads are different from the left skew and higher viraemia seen in HIV and hepatitis C virus (HCV) cohorts in the same settings. Using longitudinal data, we present evidence for a stable 'set-point' VL in peripheral blood during chronic HBV infection. These results are important to underpin improved understanding of HBV biology, to inform approaches to viral sequencing, and to plan public health interventions. We have made it clear that we are not trying to directly compare HBV/HCV and HIV, but rather to give a broad overview of how the viruses differ and how this may be due to underlying virus/host interplay. We have restructured our discussion, adding a 'limitations and caveats' section to our manuscript acknowledging that the cohorts compared from the UK and South Africa are very different, and that our main aim is to demonstrate that in spite of their many differences, the bimodal viral load distribution in HBV is consistent.
Introduction
Hepatitis B virus (HBV) DNA viral loads (VL) show wide variation between individuals with chronic hepatitis B (CHB) infection, and are used to determine treatment eligibility 1 . The relationship between HBV e-antigen (HBeAg)-positive status and high VL in CHB is well recognised, but there are few refined descriptions of VL distribution, and limited understanding of the biology that underpins these patterns. Set point viral load (SPVL), defined as a stable level of viraemia in peripheral blood during the initial years of chronic infection, is a concept well established in HIV 2 . However, despite many biological similarities between HIV and HBV viral replication cycles, SPVL has not been explored for CHB to date.
Developing improved insights into the distribution of VL at a population level is important for planning wider treatment deployment to support progress towards international sustainable development goals for HBV elimination, which set ambitious targets for reducing morbidity and incidence of new CHB cases 3 . Characterisation of HBV VL dynamics is also important for mathematical modelling, and for generating new insights into persistence, transmission and pathogenesis. To support development of in vitro research, understanding the VL distribution at a population level informs approaches to viral sequencing, which typically have thresholds of 10³-10⁴ IU/ml, below which sequences cannot be derived.
We have therefore set out to generate a preliminary description of the HBV VL distribution in independent cohorts from the UK and South Africa, to compare these patterns with VL distributions in two other chronic blood-borne viral infections, HIV-1 and hepatitis C virus (HCV), and to seek evidence for SPVL in HBV infection.
Methods
We retrospectively collected VL measurements ± supporting metadata for adults with chronic HBV, HCV and HIV infection from four cohorts:
(i) HBV: UK dataset
We collected data for adults (>18 years) with CHB infection (defined as positive HBsAg on ≥2 occasions ≥6 months apart) from electronic records at Oxford University Hospitals NHS Foundation Trust, as part of the National Institute of Health Research Health Informatics Collaborative (NIHR-HIC), as previously described 4 . We assimilated VL results (Abbott M2000 platform) for 371 individuals off nucleoside analogue therapy over six years commencing 1st January 2011, for whom baseline HBeAg status was available in 351 (95%) cases. Age, sex and self-reported ethnicity (using standard ethnicity codes) were available for 352, 355 and 322 individuals, respectively. For longitudinal VL analysis, we only used data prior to commencing antiviral treatment, including patients with ≥2 measurements ≥6 months apart (n=299 individuals, 1483 timepoints). The upper limit of quantification is HBV DNA 10⁸ IU/ml.
(ii) HBV: South Africa dataset
We collected all HBV VL data from the South African National Health Laboratory Service (NHLS) recorded over a four-year period commencing 1st January 2015 (n=6506 individuals). These were generated using various commercial platforms in different NHLS labs across the country.
Other metadata (HBeAg status, HIV status, treatment data) were not available. For the purposes of analysis, we excluded VL measurements below the limit of detection based on the assumption that the majority of these samples were taken on antiviral treatment (indicated for HBV infection ± HIV co-infection). All those above the laboratory limit of quantification were designated 1.7×10⁸ IU/ml. For analysis of longitudinal data, we included patients with ≥2 detectable VL measurements (n=874 individuals; 9578 timepoints).
(iii) HCV
Baseline HCV viral loads were collected for adults prior to commencing antiviral treatment between 2006-2018, representing 925 individuals, from the same source as the UK HBV data using the Abbott M2000 platform, and collected through the NIHR-HIC pipeline. The setting and characteristics of this study population has been previously described 5 .
(iv) HIV
HIV data were obtained from a UK database of HIV seroconverters between 1985-2014 through the BEEHIVE collaboration (n=1581) 2 . HIV VL was measured using the COBAS AmpliPrep/COBAS TaqMan HIV-1 Test, v2.0 on samples collected starting at 6-24 months after infection. SPVL was defined as the average VL for each patient over time, as previously described 2 .
Statistical analysis
We used Graphpad Prism v.8.2.1 for analysis of VL distributions, skewness, and univariate analysis of patient parameters associated with HBV VL (Mann Whitney U test and Kruskal Wallis test). HBV and HCV VL are conventionally reported in IU/ml, but to make direct comparisons between VL in different infections, we also converted data into copies/ml (1 IU = 5.4 copies/ml for HBV 6 and 2.7 copies/ml for HCV 7 ). We used the R package (version 3.6.1) to assess within- and between-patient VL variability, using longitudinal data from UK HBeAg-negative adults, and from South African individuals with detectable VL. A large contribution of between-host variation would provide support for SPVL. We defined total variation, between-individual and within-individual variation according to analysis of variance (ANOVA). Specifically, with y_ij denoting the j-th VL measurement of individual i (n_i measurements per individual), ȳ_i the mean VL of individual i and ȳ the grand mean, the calculations are as follows: total variation SS_total = Σ_i Σ_j (y_ij − ȳ)²; between-individual variation SS_between = Σ_i n_i (ȳ_i − ȳ)²; and within-individual variation SS_within = Σ_i Σ_j (y_ij − ȳ_i)², such that SS_total = SS_between + SS_within, with each component expressed as a percentage of SS_total.
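As an illustration of this partition, the sketch below computes the between- and within-individual fractions of total variation via one-way ANOVA sums of squares; the patient series are toy values (log10 IU/ml), not the study data.

```python
# Toy illustration of the ANOVA variance partition described above.
import numpy as np

def variance_partition(series_per_patient):
    all_vals = np.concatenate(series_per_patient)
    grand = all_vals.mean()
    ss_total = ((all_vals - grand) ** 2).sum()
    ss_between = sum(len(s) * (s.mean() - grand) ** 2 for s in series_per_patient)
    ss_within = sum(((s - s.mean()) ** 2).sum() for s in series_per_patient)
    return ss_between / ss_total, ss_within / ss_total  # sums to 1 by identity

patients = [np.array([3.1, 3.0, 3.2]),   # illustrative log10 VLs per patient
            np.array([5.8, 6.0, 5.9]),
            np.array([2.2, 2.4, 2.3])]
between, within = variance_partition(patients)
print(f"between-patient: {between:.1%}, within-patient: {within:.1%}")
```

A high between-patient fraction, as in this toy example, is the pattern that would support a stable set-point VL.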
Results
For the UK data we investigated whether sex, age or ethnicity had any influence on VL; the only significant association was lower VL with increasing age in the HBeAg-positive group (p=0.01 by Kruskal Wallis, Supplementary Figure 1; extended data 9 ).
Inter-patient variation accounted for 82.7% and 88.0% of the variability in UK and South African longitudinal datasets respectively, whilst within-patient variation accounted for 17.3% and 12.0%. This provides support for a stable SPVL within individuals with CHB.
Summary of Results
In this short report, we describe a consistent bimodal distribution of VL in CHB in a diverse UK population and a large South African dataset, in keeping with previously published studies (e.g. 10), and reflecting the role of HBeAg in immunomodulation 11 . However, descriptions of this pattern have not previously been carefully refined. This is the first study to demonstrate the concept of SPVL in HBV infection, with between-host factors explaining >80% of the variation in VL during HBeAg-negative CHB.
Inferences based on the distribution of viral loads
HBV viral loads in HBeAg-negative infection are significantly lower than HCV and HIV, which may relate to differences in viral population structure, viral fitness, host immune responses, and the availability of target cells. These factors might also explain why HIV, HCV and HBeAg-positive infection have left-skewed VL distributions, whereas HBeAg-negative infection has a right skew. Broadly, the biological significance of the relationship between VL and HBeAg status could be considered in two ways: first by addressing the mechanisms that underpin viraemic control, and second by considering the impact of alterations in VL on disease outcomes, including inflammatory liver disease, cancer and cirrhosis. These could not be addressed within this current dataset, but remain important questions for future research.
Limitations and caveats
The cohorts on which we report are different in many ways (host and viral genetics, demographics, environmental factors, access to treatment and laboratory monitoring), and for this reason we do not set out to make any statistical comparisons between cohorts in different settings. Rather, we make the more general observation that in spite of these many potential differences, the overall bimodal distribution of HBV viral loads is broadly consistent. A smaller proportion of individuals with high viraemia in the UK cohort is likely to be reflective of wider access to suppressive antiviral therapy. Missing metadata is a limitation for further analysis of our South African dataset, and longer term aspirations will be to investigate larger VL datasets together with more robust longitudinal clinical and laboratory data.
Implications for HBV sequencing
Whole genome sequencing has the potential to increase our understanding of HBV, but approximately 50% of cases fall below the current sequencing threshold 12 . This means that at present there is a significant 'blind spot' in sequence data, preventing analysis of sequence variants in individuals with VL below the population median. The data presented in this report highlight the current challenges for HBV sequencing, and a need for resource investment to improve the sensitivity of sequencing approaches, for example considering amplification or enrichment approaches.
Conclusions and future aspirations
Enhanced descriptions of HBV VL may shed light on the biology of chronic HBV infection, inform mathematical models of viral population dynamics within and between hosts, improve understanding of risk factors for transmission and disease progression, underpin optimisation of viral sequencing methods, and help to stratify patients for clinical trials and treatment. This project contains the following extended data:
Open Peer Review
1. The basic characteristics of the 4 study cohorts are unclear or very different, so the results are not comparable. For example, the metadata (HBeAg status, age, treatment data) were not available, but all of these characteristics are very important factors affecting the distribution of VL.
2. The definition or biological significance of the set-point viral load in chronic HBV infection patients is unclear. For example, in the tolerant phase of HBV infection, the viral load can be maintained at a high level, while in the inactive phase of HBeAg negative, the viral load can be maintained at a low level. Can the authors explain the biological significance of the set point viral load of the patients?
3. As for the Figure 1F, I don't think VL of different viruses (HBV, HCV, HIV) can be compared among the infected patients.
4. In the discussion, the authors stated that the between-host factors explain >80% of the variation in VL during HBeAg-negative CHB. But the VL variability within and between patients was not given in the results.
If applicable, is the statistical analysis and its interpretation appropriate? Partly
Are all the source data underlying the results available to ensure full reproducibility? Yes
Are the conclusions drawn adequately supported by the results? Partly
We agree with the reviewer about the differences between cohorts, and that we cannot determine host or viral factors that are associated with the viral load distributions we observe. We recognise that the text of our manuscript can be improved accordingly, and have made the following modifications:
1. We have removed the reference to 'precise determinants' of viral load from the abstract, and have taken out an aspiration to link VL to host characteristics from the methods, which were potentially misleading. Instead, we have simplified the abstract to say: 'While there is a well recognised association between VL and the expression of the viral e-antigen protein, the distributions of VL at a population level are not well described'. We have also improved clarity in the abstract by stating explicitly that the approach is an observational one.
2. A key learning point from these data is to inform HBV sequencing, which was not well represented in the introduction (although featured in the discussion); we have added this to make it clear that our intention is to provide an observation and description of HBV viraemia, rather than to make mechanistic insights or to draw direct comparisons between populations. We have amended the final sentence of the abstract to include the point about sequencing.
3. We have structured the discussion with sub-headings to add clarity. Reflecting the point raised by the reviewer, we have added to the 'limitations and caveats' section to say: 'The cohorts on which we report are different in many ways (host and viral genetics, demographics, environmental factors, access to treatment and laboratory monitoring), and for this reason we do not set out to make any statistical comparisons between cohorts in different settings. Rather, we make the more general observation that in spite of these many potential differences, the overall bimodal distribution of HBV viral loads is consistent'.
4. We recognise that a bigger metadata set would be of huge value, but providing this on a national level would not be feasible for any setting, and certainly not for South Africa where there are substantial clinical and laboratory resource constraints. However, our report is a very unusual opportunity in sharing viral load data for a whole country. We have added to the discussion: 'The South African dataset represents viral load data for the whole country; assimilating wider clinical or laboratory metadata is not currently practical. In many low/middle income settings, biomarkers such as HBeAg status are infrequently measured due to resource constraints. Furthermore, linkage between clinical data (such as treatment) and laboratory data (such as viral load) is challenging at a national level for even high income settings.' For this reason, we have formulated our observations into a short report, rather than a full length paper; we believe this is a proportionate way to share observational data which underpins questions for future research into the associations and determinants of viral load.
The definition or biological significance of the set-point viral load in chronic HBV infection patients is unclear. For example, in the tolerant phase of HBV infection, the viral load can be maintained at a high level, while in the inactive phase of HBeAg negative, the viral load can be maintained at a low level. Can the authors explain the biological significance of the set point viral load of the patients?
We agree this is a really interesting question. This difference in set-point according to eAg status is highlighted by Figure 1A vs 1B. The 'biological significance' of this observation could be considered in two ways, first addressing the mechanisms underlying the marked difference in viral loads, and second by considering the impact of this change on driving pathology. These are both complex questions, that remain to be clearly elucidated and are outside the remit of this current paper; rather than setting out to address these questions, the aim of this short report is to provide observational data that is a foundation for future research. We have added this point to the discussion.
As for the Figure 1F, I don't think VL of different viruses (HBV, HCV, HIV) can be compared among the infected patients.
We agree that a direct comparison between viral loads is difficult; the panel was intended to reflect a broad comparison of the host/virus interplay for the different infections.
We have amended the abstract to remove the statement that the HBV, HCV and HIV cohorts are 'comparable' and instead say the cohorts are 'in the same setting'. This removes any implication that direct comparison is appropriate. We have removed panel F and the sentence that compared median viraemia that supported this figure panel in the text of the results section.
In the discussion, the authors stated that the between-host factors explain >80% of the variation in VL during HBeAg-negative CHB. But the VL variability within and between patient was not given in the parts of results.
The results about within/between patient variability are already included in the results, as follows: 'Inter-patient variation accounted for 82.7% and 88.0% of the variability in UK and South African longitudinal datasets respectively, whilst within-patient variation accounted for 17.3% and 12.0%. This provides support for a stable SPVL within individuals with CHB.' We think this provides the information that the reviewer is seeking, but would welcome further specific feedback if additional amendment is thought to be required.
Is the work clearly and accurately presented and does it cite the current literature? Yes
Is the study design appropriate and is the work technically sound? Partly
Having improved the aims and methods to state more clearly our intention to present an observational comment about distribution of viral loads, and removing the direct comparison between HIV, HCV and HBV, we believe we have addressed any concerns.
Are sufficient details of methods and analysis provided to allow replication by others? Yes
If applicable, is the statistical analysis and its interpretation appropriate? Partly
As above, we think that changes to the methods and results (specific details set out above, and removal of panel 1F) tackle any deficiencies in the first version.
Are all the source data underlying the results available to ensure full reproducibility? Yes
Are the conclusions drawn adequately supported by the results? Partly
An improved and expanded discussion section has allowed us to present conclusions more clearly, and we have improved on objective reporting of where primary conclusions can be drawn directly from this dataset, and where future research is still required. | 2020-06-19T06:04:12.612Z | 2020-10-14T00:00:00.000 | {
"year": 2020,
"sha1": "ef02e295c516ffc0ddfdaa1caaf3ecee8437031b",
"oa_license": "CCBY",
"oa_url": "https://wellcomeopenresearch.org/articles/5-113/v2/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "d298afb7540393b55aa3345abdd2f590eb2e85ae",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Centella asiatica increases hippocampal synaptic density and improves memory and executive function in aged mice
Abstract

Introduction: Centella asiatica is a plant used for centuries to enhance memory. We have previously shown that a water extract of Centella asiatica (CAW) attenuates age-related spatial memory deficits in mice and improves neuronal health. Yet the effect of CAW on other cognitive domains remains unexplored, as does its mechanism of improving age-related cognitive impairment. This study investigates the effects of CAW on a variety of cognitive tasks as well as on synaptic density and mitochondrial and antioxidant pathways.

Methods: Twenty-month-old CB6F1 mice were treated with CAW (2 mg/ml) in their drinking water for 2 weeks prior to behavioral testing. Learning, memory, and executive function were assessed using the novel object recognition task (NORT), object location memory task (OLM), and odor discrimination reversal learning (ODRL) test. Tissue was collected for Golgi analysis of spine density as well as assessment of mitochondrial, antioxidant, and synaptic proteins.

Results: CAW improved performance in all behavioral tests, suggesting effects on hippocampal and cortical dependent memory as well as on prefrontal cortex mediated executive function. There was also an increase in synaptic density in the treated animals, which was accompanied by increased expression of the antioxidant response gene NRF2 as well as the mitochondrial marker porin.

Conclusions: These data show that CAW can increase synaptic density as well as antioxidant and mitochondrial proteins and improve multiple facets of age-related cognitive impairment. Because mitochondrial dysfunction and oxidative stress also accompany cognitive impairment in many pathological conditions, this suggests a broad therapeutic utility of CAW.
The cognitive enhancing effects of the plant have been supported by a handful of small clinical trials in healthy middle aged and older adults (Dev, Hambali, & Samah, 2009;Wattanathorn et al., 2008). A number of preclinical studies have also demonstrated similar cognitive enhancing effects of Centella asiatica in multiple rodent models of pathological cognitive impairment (Gupta & Srivastava, 2003;Kumar & Gupta, 2002;Soumyanath et al., 2012;Veerendra Kumar & Gupta, 2003). Our laboratory has previously shown that a water extract of Centella asiatica (CAW) added to the drinking water can attenuate spatial memory impairments in healthy aged mice (Gray et al., 2016).
Yet the effects of CAW on cognitive domains beyond spatial memory remains relatively unexplored as does its mechanism of improving age-related cognitive impairment. Here we explore the effects of CAW on multiple cognitive domains beyond spatial memory, including recognition memory and executive function in healthy aged mice. We also examine the effects of the extract on synaptic density as well as mitochondrial and antioxidant protein expression in the brains of treated animals.
Aqueous extract of Centella asiatica
Dried leaves of Centella asiatica were purchased (Oregon's Wild Harvest, GOT-03193c-OHQ01) and their identity was confirmed by comparing the thin layer chromatographic profile with that reported in the literature (Günther & Wagner, 1996) and with the Centella asiatica samples used in our previous studies (Gray et al., 2014; Gray et al., 2016; Soumyanath et al., 2012). CAW was prepared by refluxing Centella asiatica (160 g) with water (2,000 ml) for 2 hr, filtering the solution and freeze drying to yield a powder (~16-21 g).
Animals
Twenty-month-old male and female CB6F1 mice were obtained from the NIH National Institute on Aging (NIA) aged rodent colony.
Mice were maintained in a climate-controlled environment with a 12-hr light/12-hr dark cycle, and fed AIN-93M Purified Rodent Diet (Dyets Inc., Bethlehem, PA). Diet and water were supplied ad libitum. Mice were exposed to CAW in their drinking water (2 g/L) for 2 weeks prior to the beginning of behavioral testing. Control animals were given normal, unsupplemented drinking water. Thirty-six mice (18 male, 18 female) were randomly assigned to treatment groups.
Water consumption was monitored throughout the experiment to ensure the addition of CAW did not affect overall water intake.
Following 3 weeks of behavioral testing, animals were sacrificed and tissue harvested as outlined in the timeline below ( Figure 1). All mice completed behavioral testing and thus none were excluded from analysis. Based on pilot experiments we expected to see changes of 20%-25% in the behavioral tests after CAW treatment (standard deviation ~5%). Based on these estimates, we calculated 5-8 animals per condition to obtain adequate power with the Odor Discrimination Reversal Learning (ODRL) task requiring the most animals due to more subtle changes observed in our pilot experiments.
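As a rough illustration of the kind of power calculation described above, the snippet below uses a two-sample t-test approximation; the exact method, software, and assumptions used by the authors are not stated, so the numbers here are indicative only.

```python
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()
# Cohen's d = expected group difference / SD. A 20-25% change with
# SD ~5% gives d of roughly 4; subtler effects (as in the ODRL pilot)
# correspond to smaller d and therefore more animals per group.
for d in (4.0, 1.5):
    n = solver.solve_power(effect_size=d, alpha=0.05, power=0.8,
                           alternative="two-sided")
    print(f"d = {d}: about {n:.1f} animals per group")
```

With these assumed effect sizes the required group sizes fall in the low single digits up to roughly eight, consistent with the 5-8 animals per condition stated above.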
FIGURE 1: Timeline of CAW treatment and behavioral assessment. Mice were treated with CAW 2 weeks prior to the beginning of behavioral testing and treatment continued throughout the experiment. After testing, animals were sacrificed and tissue was harvested. CAW treatment lasted a total of 5 weeks.

Behavioral testing

Novel Object Recognition Task (NORT): After 2 hr and 24 hr, mice were placed again in the apparatus, where this time one of the objects was replaced by a novel one (distinct at each time point). Mice were allowed to explore for 5 min. Preference for the novel object was expressed as the percent time spent exploring the novel object relative to the total time spent exploring both objects. The objects were a glass cylindrical votive, a metal cylindrical jar, a plastic rectangular box, and a plastic trapezoidal prism, all of approximately the same height. The identity of the objects (which one was novel or familiar) was balanced between groups. No preference was observed in this study for any object over the others.

Object Location Memory task (OLM): The OLM also capitalizes on the exploratory nature of mice and evaluates location memory.
The experimental apparatus, habituation, and training for this task were identical to those described above for the NORT. For the testing phases after 2 hr or 24 hr, mice were placed back in the apparatus but one of the objects was displaced to a novel spatial location (a third location was used at 24 hr). Mice were again allowed to explore the environment for 5 min. Time spent exploring the displaced and nondisplaced objects was measured. Exploration was analyzed during both the training and testing phases.
In both the NORT and the OLM, objects were cleaned between trials with chlorhexidine (Nolvasan) to eliminate odor cues. All testing and training sessions were videotaped and analyzed by an experimenter blind to the treatment of the animals. It was considered exploration of the objects when mice were facing and sniffing the objects within very close proximity and/or touching. Mouse behavior was recorded with a video camera positioned over the behavioral apparatus and the collected videos were analyzed with the ANY-MAZE software (Stoelting Co., Wood Dale, IL, USA).
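The preference score used in both the NORT and the OLM is a simple proportion of exploration time. A minimal sketch is below; the example durations are invented solely to reproduce a 63% score like the one reported for CAW-treated males in the Results.

```python
def preference_pct(target_s: float, other_s: float) -> float:
    """Percent of total exploration time spent on the novel or
    displaced object (target) versus the familiar/nondisplaced one."""
    total = target_s + other_s
    if total == 0:
        raise ValueError("no exploration recorded in this trial")
    return 100.0 * target_s / total

# Hypothetical exploration times in seconds over the 5-min test
print(preference_pct(94.5, 55.5))  # -> 63.0 (preference for the target)
```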
Odor Discrimination Reversal Learning test (ODRL): This task, also called the attention set-shifting task, evaluates executive function.
The test comprises four phases: shaping, learning, acquisition, and shift. Mice can be readily trained to dig in small bowls to retrieve food rewards (Bissonette et al., 2008; Young, Sharkey, & Finlayson, 2009; Young et al., 2007). Plastic cups (4.5 × 3 cm) were used as digging bowls and filled with a digging material of home cage bedding, dried black beans, or alder wood chips. The digging material was scented with lavender, mint or vanilla, all commercially available (Fred Myers brand). The food reward was a piece of a Froot Loop for each correct trial. The test apparatus was a gray rigid PVC enclosure (12″ × 8″ × 7″) with a removable divider in the center. The digging bowls were placed on one side of the divider and the mouse on the other. At the beginning of each trial, the divider was removed.
In the shaping phase, mice were introduced to the testing chamber and trained to dig for a food reward in lavender scented bedding material. Mice were presented with a single bowl containing the food reward that was progressively filled with bedding in five stages, 0%, 25%, 50%, 75%, and 100% filled. The mouse advanced to the subsequent training step when it had successfully retrieved the food reward 5 times in a row.
The acquisition phase began after mice had completed the shaping phase. In this phase, mice were presented with two cups, one containing dried beans and the other wood chips. In every trial, one digging material had the vanilla odor and the other the mint odor, and the odor and material pairings were randomly alternated between trials but balanced over the acquisition phase so that each mouse was exposed to roughly equal combinations of each odor and digging material. Whether the baited cup was presented on the right or left side of the apparatus was also balanced throughout testing. In the acquisition phase, the mint-scented bowl was always baited regardless of digging material. Example trials are found in Table 1. Each trial was initiated by raising the divider and allowing access to both bowls. Mice were required to make eight correct digs in any bout of 10 in order to reach criteria. Trials to criteria and latency to retrieve the reward were recorded. After a mouse reached criteria in the acquisition phase, it immediately proceeded to the shift phase. As in the previous phase, in the shift phase mice were presented with two cups, one containing dried beans and the other wood chips. In every trial, one digging material had the vanilla odor and the other the mint odor, and again the odor and digging material pairings were balanced throughout the trial, as was the right/left location of the baited cup. In the shift phase, however, the cup with the dried beans was always baited regardless of odor.
Again, criteria were defined as eight correct trials in any bout of 10 and trials to criteria as well as latency to retrieve the reward was recorded. Mice were food restricted the night before each phase of the ODRL in order to motivate the animals.
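The "eight correct digs in any bout of 10" criterion is a sliding-window rule over the trial sequence. A small sketch of how trials-to-criterion can be scored is shown below; it is an illustrative implementation, not the authors' scoring code.

```python
def trials_to_criterion(outcomes, window=10, required=8):
    """Return the trial count at which the mouse first achieves
    `required` correct responses within any `window` consecutive trials,
    or None if the criterion is never met. `outcomes` is a sequence of
    booleans, one per trial (True = correct dig)."""
    for end in range(window, len(outcomes) + 1):
        if sum(outcomes[end - window:end]) >= required:
            return end
    return None

# Example: 8 of the 10 trials ending at trial 12 are correct
outcomes = [True, False, True, True, False, True, True, True,
            False, True, True, True]
print(trials_to_criterion(outcomes))  # -> 12
```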
Golgi
The FD Rapid GolgiStain™ Kit (FD Neurotechnologies) was used as per the manufacturer's instructions. In brief, one hemisphere of the brain was fixed for 9 days, sectioned coronally into 200 µm slices on a vibratome and mounted on gel-coated slides. After drying, slides were stained and coverslipped using Permount (Fisher Scientific). Images were acquired using an Axio Imager M2 with an Apotome™ attachment and two cameras, an Axiocam 512 color and an AxioCam 506 mono. The system is driven by Zen software.
Graphs and statistics
All bar graphs have error bars reflecting standard error of the mean.
Statistical significance was determined using one- or two-way analysis of variance or with appropriate t tests. Bonferroni post hoc tests were also conducted. Significance was defined as p ≤ 0.05. Analyses were performed using Excel or GraphPad Prism 6.
CAW improves location memory in aged mice
We previously demonstrated that CAW improves spatial memory in aged C57BL6 mice (Gray et al., 2016). To validate this finding in the CB6F1 mouse line we used the OLM test. Aged CB6F1 mice (20 months) were treated with CAW in their drinking water (2 g/L) for 2 weeks prior to behavioral testing, and exposure to CAW continued throughout testing.
CAW improves recognition memory in aged mice
CAW treatment also improved performance in the NORT in both male and female mice (Figure 3a). While control animals did not display a preference for the novel object at either 2 hr or 24 hr post-training, CAW-treated animals spent significantly more time exploring the novel object than the familiar object at both time points, with males and females spending 63% and 58% of the 5 min test time, respectively, at 2 hr and 65% and 59% of their time, respectively, after 24 hr (Figure 3b,c).
CAW improves learning and executive function in aged mice
The ODRL, also called the attention set-shifting task, evaluates executive function. Executive function includes behaviors like attentional selection, behavioral inhibition, cognitive flexibility, task switching, planning, and decision-making (Buckner, 2004).
The two test components of the ODRL are the acquisition and shift phases. While the acquisition phase assesses learning, the shift phase probes executive function, specifically cognitive flexibility. In the acquisition phase of the ODRL, male CAW-treated mice took significantly fewer trials than controls to reach criteria (Figure 4a). Female CAW-treated mice showed a similar trend toward improved performance in the acquisition phase but it did not achieve significance. In contrast, in the shift phase, CAW-treated female mice needed significantly fewer trials to reach criteria than their control counterparts. In the male mice, CAW treatment reduced the number of trials necessary as well, but not significantly (Figure 4a). Interestingly, in male mice, CAW treatment also significantly increased the latency to retrieve the reward in both the acquisition and shift phases. Female CAW-treated mice showed a similar trend toward increased latency but it was not significant in either test phase (Figure 4b).
CAW increases synaptic density in the hippocampus of aged mice
CAW treatment resulted in a significant increase in dendritic spine density in the CA1 region of the hippocampus in male mice. There was a comparable increase in hippocampal spine density in CAW-treated female mice (Figure 5a,b). CAW also increased the hippocampal expression of the presynaptic protein synaptophysin in female mice when normalized to GAPDH. A similar trend was observed in CAW-treated male mice, although it did not reach significance (Figure 6a-c).
CAW increases hippocampal expression of antioxidant and mitochondrial proteins in aged mice
CAW increased the expression of the antioxidant regulatory protein NRF2 in the hippocampus of male and female mice (Figure 6a-c). The ratio of phosphorylated NRF2 to total NRF2 was also increased in both genders. The mitochondrial protein porin (also called VDAC1) was robustly increased in the brains of the CAW-treated animals.
DISCUSSION
The beneficial effects of CAW on neuronal health and cognitive function have been well-documented both in vitro and in vivo (Gupta & Flora, 2006; Gupta & Srivastava, 2003; Kumar & Gupta, 2002; Shinomol & Muralidhara, 2008; Soumyanath et al., 2005; Veerendra Kumar & Gupta, 2003). Our laboratory has previously reported that CAW improves performance in the Morris Water Maze (MWM) in mice exposed to Aβ as well as in healthy older mice (Gray et al., 2016; Soumyanath et al., 2012). In this study, we further explore the effects of CAW on age-related cognitive impairment using a battery of behavioral tests assessing learning, memory, and executive function.
We found that treatment with CAW for 2 weeks prior to the beginning of, and continuing throughout behavioral testing (Figure 1), improved the performance of both male and female 20-month-old CB6F1 mice in the OLM test. The OLM is a hippocampal-dependent test of location memory (Assini, Duzzioni, & Takahashi, 2009; Cipolotti, 2006) which is known to be impaired with aging in both rodents and humans (Gray, Zweig, Murchison, et al., 2017; Soumyanath et al., 2012). It is likewise consistent with our laboratory's previous finding that CAW improves MWM performance, another hippocampal-dependent task, in healthy aged mice (Gray et al., 2016). Interestingly, in that task there was a more pronounced effect of CAW in the male animals, while in the OLM no gender differences were observed.

[Figure legend fragment: Three animals were evaluated per treatment condition with 3-6 images quantified per animal. *p < 0.05, **p < 0.01]
We also observed improvements in NORT performance following CAW treatment in aged CB6F1 mice. This suggests that the beneficial effects of CAW may not be restricted to the hippocampus, as both the hippocampus and cortex are known to play an important role in object recognition memory (Aggleton, Albasser, Aggleton, Poirier, & Pearce, 2010; Buckmaster, Eichenbaum, Amaral, Suzuki, & Rapp, 2004; Clark, Zola, & Squire, 2000; Hammond, Tull, & Stackman, 2004). Age-related impairments in object recognition are seen in both rodents and humans (Diaz et al., 2017; Kaviani et al., 2017; Li et al., 2015; Merriman, Ondřej, Roudaia, O'Sullivan, & Newell, 2016; Singh & Thakur, 2014). While to our knowledge this is the first report of Centella asiatica affecting recognition memory in aged rodents, it has been demonstrated that performance in the NORT is improved following administration of other polyphenol-containing plant extracts (Carey et al., 2014; Matias et al., 2017; Nam et al., 2013; Yu et al., 2013).
This study is also the first report, to our knowledge, of effects of Centella asiatica on executive function. Executive function includes elements like impulse control, attention, planning, cognitive flexibility, and problem solving. It is mediated by the prefrontal cortex and is very sensitive to age-related decline (Buckner, 2004;Raz & Rodrigue, 2006). The Wisconsin Card Sorting Test (WCST) is one of a number of widely used tests to assess executive function in humans. In this task subjects are required to adapt behavioral responses to choose the "correct" stimulus array based on sudden rule changes across multiple modalities (Eling, Derckx, & Maes, 2008).
Performance in this task declines with age (Ashendorf & McCaffrey, 2008). The ODRL, also called the attention set-shifting task, is a parallel test that has been developed for rats and, more recently, mice (Birrell & Brown, 2000; Garner, Thogerson, Würbel, Murray, & Mench, 2006). Like the WCST, the ODRL requires paying attention to relevant stimuli while ignoring irrelevant stimuli and subsequently shifting the attention, either within dimensions or between dimensions of the test stimuli (Birrell & Brown, 2000). Also like the WCST, performance in the ODRL declines with age (Barense, Fox, & Baxter, 2002; Beas, Setlow, & Bizon, 2013; Young et al., 2010).
We found that CAW treatment improved performance in both the acquisition and shift phases of this test. The acquisition phase of the ODRL assesses classical learning and the improvement observed following CAW treatment is in line with our previous report of enhanced performance of CAW-treated aged mice in the hidden platform phase of the MWM. The shift phase of ODRL is the metric of cognitive flexibility. Here we also observed an improvement following CAW treatment.
Interestingly, while the number of trials to reach criteria in both ODRL phases decreased with CAW treatment, the latency to find the reward appeared to increase in treated animals, especially the treated males. This combination of effects could suggest decreased impulsivity in the CAW-treated animals or an improved accuracy trade-off strategy with a shift toward more goal-directed action instead of habitual action. The selection of goal-directed actions is governed by associations between the value of the consequences and is sensitive to changes in the causal relationship between the action and those consequences, whereas habitual actions are controlled through stimulus-response associations without the association with the value of the outcome (Griffiths, Morris, & Balleine, 2014). Imbalances between goal-directed and habitual action are observed in many neurological disorders including Parkinson's disease, Tourette Syndrome and obsessive-compulsive disorder (Gillan & Robbins, 2014; Pappas, Leventhal, Albin, & Dauer, 2014; Redgrave et al., 2010), so these results may indicate a broad therapeutic benefit of CAW beyond age-related cognitive impairment. It would be interesting in future studies to see if similar effects are seen in young animals.

FIGURE 6: CAW increases antioxidant, mitochondrial, and synaptic proteins in the hippocampus of aged mice. (a) Representative western blot from aged animals. (b) Quantification of multiple blots. CAW increased the expression of NRF2 protein as well as the ratio of phosphorylated NRF2 to total NRF2 in the hippocampus of aged male mice. CAW treatment also significantly increased expression of the mitochondrial protein porin (F = 41.28). (c) Quantification of multiple blots. CAW increased the ratio of phosphorylated NRF2 to total NRF2 in the brains of aged female mice. The expression of porin and synaptophysin were similarly increased in these animals (F = 31.03). n = 8-9 in each group, *p < 0.05.
In this study, we also observed increased synaptic density in the CAW-treated animals. We have previously demonstrated that CAW can increase spine density in primary hippocampal neurons in culture (Gray, Zweig, Murchison, et al., 2017), but here we show that oral administration of the extract exerts the same effects in vivo. In addition, the CAW-induced increase in the expression of synaptophysin is consistent with our previous report of increased gene expression of synaptophysin and postsynaptic density protein 95 in the brains of aged CAW-treated mice (Gray et al., 2016). As increased synaptic density is known to correlate with improved cognitive function (Terry et al., 1991), this is likely the physiological underpinning of the improvement in hippocampal-dependent tests seen in this study. The fact that we also saw improvements in executive function suggests that these structural changes are likely occurring in other brain regions as well, specifically the prefrontal cortex. In fact, the changes in synaptic gene expression that we reported previously (Gray et al., 2016) were found to occur in multiple brain regions, further supporting this idea.
The observed changes in NRF2 and porin are in accordance with our previous gene expression data as well (Gray et al., 2016).
These increases suggest an activation of the antioxidant pathway and an increase in mitochondrial content, respectively. It remains to be seen what contribution each of these makes to the memory-enhancing properties of CAW, but in mice cognitive decline is associated with dysfunctional mitochondria (Masiero & Sandri, 2010) and increased oxidative damage (Forster et al., 1996). Moreover, both over-expressing antioxidant enzymes and increasing mitochondrial content have been shown to improve memory in rodents (Cao et al., 2010; Chen, Na, & Ran, 2014; Olsen et al., 2013). Experiments are underway in the laboratory to evaluate the effects of the extract in NRF2KO mice to determine if activation of the NRF2 pathway is required for the cognitive enhancing effects of CAW.
Our findings here further demonstrate the cognitive enhancing effects of CAW. Relatively short treatment with the extract improved several different domains of cognitive performance in aged animals and enhanced synaptic density as well as mitochondrial and antioxidant response pathways in vivo. While the exact relationship between these effects remains to be elucidated, the fact that synaptic dysfunction and cognitive impairment accompany oxidative stress and mitochondrial dysfunction in many pathological conditions (Emerit & Bricaire, 2004; Lin & Beal, 2006) suggests the potential utility of CAW is far broader than for aging alone.
ACKNOWLEDGMENTS
This work was funded by NIH-NCCIH grant R00AT008831 (Gray), NIH-NCCIH grant R01AT008099 (Soumyanath), and a Department of Veterans Affairs Merit Review grant awarded to J. Quinn. The authors acknowledge Dr. Matthew Lattal and Dr. Gregory Peters for their assistance with the behavioral assays.
CONFLICT OF INTEREST
None declared.
"year": 2018,
"sha1": "8bd5386ebf4d15cc2264598c659d3beff259e871",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/brb3.1024",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8bd5386ebf4d15cc2264598c659d3beff259e871",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Influence of initial soil moisture in a Regional Climate Model study over West Africa. Part 1: Impact on the climate mean
The impact of soil moisture initial conditions on the mean climate over West Africa was examined using the latest version of the Regional Climate Model of the International Centre for Theoretical Physics (RegCM4) at a horizontal resolution of 25 km × 25 km. Soil moisture from the European Centre for Medium-Range Weather Forecasts' reanalysis of the 20th century (ERA20C) is used to initialize the control experiment, while its minimum and maximum values over the entire domain are used to establish the initial dry and wet soil moisture conditions, respectively (hereafter dry and wet experiments). For the control, wet and dry experiments, an ensemble of five runs from June to September is performed. In each experiment, we analyzed the two idealized simulations most sensitive to the dry and wet soil moisture initial conditions. The impact of soil moisture initial conditions on precipitation in West Africa is linear over the Central and West Sahel, where dry (wet) experiments lead to a rainfall decrease (increase). The strongest precipitation increase is found over the West Sahel for wet experiments, with a maximum change of approximately 40%, while the strongest precipitation decrease is found for dry experiments over the Central Sahel, with a peak change of approximately −4%. The sensitivity to soil moisture initial conditions can persist for three to four months (90-120 days) depending on the region. However, the influence on precipitation lasts no longer than one month (between 15 and 30 days). The strongest temperature decrease is located over the Central and West Sahel, with a maximum change of approximately −1.5 °C in wet experiments, while the strongest temperature increase is found over the Guinea Coast and Central Sahel for the dry experiments, with a maximum change of around 0.6 °C. A significant …
… Soil moisture was found to have a much stronger impact on daily maximum temperature variability than on daily mean temperature variability, but generally has small effects on daily minimum temperature, except in the eastern Tibetan Plateau. They showed that soil moisture has a prominent contribution to precipitation variability in many parts of western China (Zhang et al., 2008b). However, at local and regional scales, land-atmosphere coupling studies with AGCMs present significant uncertainties (Xue et al., 2010), where the ability of the model to reproduce the climate mean has been validated. The descriptions of the model and experimental setup used in this study are presented in Section 2; in Section 3, the influence of wet and dry soil moisture initial conditions on the subsequent climate mean is analyzed and discussed; and in Section 4 the main conclusions are presented.

While this Part I investigates the impacts on the climate mean, Part II of this article will focus on the influence of soil moisture initial conditions on climate extremes. … The radiative transfer scheme follows the National Center for Atmospheric Research (NCAR) Community Climate Model Version three (CCM3) (Kiehl et al., 1996). Aerosol representation is from Zakey et al. (2006) and Solmon et al. (2006). The large-scale precipitation scheme is from Pal et al. (2000) and the moisture scheme is the SUBgrid EXplicit moisture scheme (SUBEX). SUBEX takes into account the sub-grid scale cloud variability, and the accretion processes and evaporation for stable precipitation following the work of …
For the two years most sensitive to soil moisture initial conditions, the Student t-test is used to compare the significance of the difference between a wet or dry sensitivity test (sample 1) and the control (sample 2), assuming that our two samples are independent and considering that … (Fig. 4a, c). … The impact of wet experiments is stronger in magnitude than that of dry experiments over most studied domains (Fig. 7). For dry experiments, the strongest daily precipitation response (about −4 mm/day) is found over the Guinea Coast in the run JJAS 2003 (Fig. 7c), while for the wet experiments the strongest impact on daily precipitation is more than 8 mm/day and is found over the West Sahel and the Guinea Coast (Fig. 7b, c, respectively). It is worth noting that the impact of initial soil moisture conditions on daily precipitation is much shorter than the duration of the impact on daily soil moisture. A significant impact on daily precipitation is found only for wet experiments, and did not last more than 15 days in large parts of the study domain, except over the wetter sub-region of the Guinea Coast where it lasts approximately one month. We noted that the precipitation peaks over the West Sahel and Guinea Coast (Fig. 7b and c, respectively) during August and September coincide with fluctuations in the daily soil moisture impact (Fig. 6b and c). … The impacts on relative humidity and air temperature (Fig. 8 and Fig. 9, respectively) … (Fig. 8a and Fig. 9a).
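For readers wanting to reproduce the significance testing described above, the sketch below applies an independent two-sample Student t-test to a sensitivity run versus the control at a single grid cell. The gamma-distributed placeholder data are purely illustrative and are not taken from the model output.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical JJAS (122-day) daily precipitation at one grid cell, mm/day
wet_run = rng.gamma(shape=2.0, scale=3.0, size=122)   # wet experiment
control = rng.gamma(shape=2.0, scale=2.5, size=122)   # control experiment

# Independent-samples Student t-test, as described in the text
t_stat, p_val = stats.ttest_ind(wet_run, control, equal_var=True)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}, "
      f"significant at 5%: {p_val < 0.05}")
```

In practice this test would be repeated at every grid point, and only differences passing the significance threshold would be shaded in the change maps.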
For the upper troposphere, the significant impact on relative humidity and temperature is found only for wet experiments, and exhibited a drying and a warming over most of the studied domains (Fig. 8 and Fig. 9). This impact for the wet experiments was also reported by Hong and Pal …
For the dry experiments (Fig. 10a, c), we found that the moistening of the lower atmosphere decreases over most of the study domain. However, the strong wind magnitude changes over the Atlantic Ocean bring moisture from the ocean to the Guinea Coast and West Sahel. This can explain the precipitation increase over these sub-regions in the dry experiments. Over the Central Sahel, the strong decrease in precipitation seems to be associated with the decrease of specific humidity, which is particularly notable in the run JJAS 2003 (Fig. 4a). Conversely, for the wet experiments (Fig. 10b, d), an increase in the moistening of the atmosphere is found mainly over the Sahel band, while further south a decrease of the specific humidity is simulated over the Guinea Coast. The strong change in wind magnitude shifts the moisture from the north to the south, leading to a precipitation increase over most parts of the study domain (Fig. 4b and d). These … (Table 3). … The impact on temperature is linear over the Central Sahel, Guinea Coast and the whole West African domain (Fig. 13a, c and d). … leading to temperature increase (Fig. 13, Table 2). We now analyze the influence of soil moisture initial condition anomalies on the land energy balance, particularly on the surface sensible and latent heat fluxes. Figure 14 shows changes … with respect to the control, exhibiting a significant increase (decrease) of the sensible heat (Fig. 14) … (Fig. 15). The impact in wet experiments is strong over the Central and West Sahel compared to the dry experiments, but not for the Guinea Coast (Fig. 15, Table 2). In the dry experiments, the strongest sensible heat flux increase is found over the Guinea Coast, with a maximum change of about 9.18 W/m² during JJAS 2004 (see Table 2). In the wet experiments, the … (Table 2).
We then examined the impact of the soil moisture initial conditions on the stability of the PBL … (Fig. 18a and c, respectively). For the wet experiments, a PBL decrease is found over most of the studied domains. The PDF of PBL changes (Fig. 19) shows …
The impact of the soil moisture initial conditions on the subsequent summer (JJAS) mean climate over West Africa was explored using RegCM4-CLM4.5. In particular, the aim of this study was to investigate how soil moisture initialization at the beginning of the rainy season may … The temperature at 2 m is more sensitive to the anomalies of the initial soil moisture condition than …
This study is the first investigating the impact of soil moisture initial conditions in West Africa. However, this study is based on idealized experiments: sensitivity experiments such as the "wet" and "dry" ones conducted in this study were not intended to simulate the real climate, since such extremes are very rare. Moreover, this study is very specific to RegCM4. In the future, an investigation using different RCMs in a multi-model framework will contribute to better quantify …
"year": 2022,
"sha1": "588b583b483a5442aaa87cb08da58f0bdcefe1d7",
"oa_license": "CCBY",
"oa_url": "https://hess.copernicus.org/articles/26/711/2022/hess-26-711-2022.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a3c786b4e3f8654773c6e95b9bf12600d9f816a0",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
Determination of In Vitro Antimicrobial Activity of Five Sri Lankan Medicinal Plants against Selected Human Pathogenic Bacteria
Introduction: Antibiotic resistance is one of the greatest threats of the 21st century. Scientists search for potential antimicrobial sources that can cope with antibiotic resistance. Plants used in traditional medicine can be identified as potential candidates for the synthesis of novel drug compounds to act against antibiotic-resistant bacteria.

Objective: To determine the potential antimicrobial effects of ethanol, aqueous, and hexane extracts of five Sri Lankan medicinal plants against four human pathogens.

Methods: Asparagus falcatus (tubers), Asteracantha longifolia (whole plant), Vetiveria zizanioides (roots), Epaltes divaricata (whole plant), and Coriandrum sativum (seeds) were used in the study. Plant extracts were screened against four clinically important Gram-positive and Gram-negative bacterial strains: Staphylococcus aureus (ATCC 25923), Escherichia coli (ATCC 25922), Pseudomonas aeruginosa (ATCC 27853), and Klebsiella pneumoniae (ATCC 700603). Antibacterial activity of the plant extracts was monitored using the agar disc diffusion method. Eight concentrations of each positive plant extract were used to determine the minimum inhibitory concentration (MIC) by 5-fold dilution of plant extracts, yielding a serial dilution of the original extract.

Results: Ethanol, aqueous, and hexane extracts of E. divaricata gave maximum zones of inhibition of 16.3 mm, 7.4 mm, and 13.7 mm and MIC values of 0.48 mg/ml, 1.2 mg/ml, and 1.6 mg/ml, respectively, against S. aureus. Ethanol and hexane extracts of V. zizanioides gave maximum zones of inhibition of 12.1 mm and 11.4 mm and MIC values of 2.4 mg/ml and 0.003 mg/ml, respectively, against S. aureus. None of the other plants were effective against any microorganism used for the study.

Conclusions: It can be concluded that E. divaricata and V. zizanioides crude ethanol, aqueous, and hexane extracts exhibited significant in vitro antibacterial activity against S. aureus, and the active compounds isolated from them can be potential sources for the synthesis of antibacterial drugs.
Introduction
Infectious diseases have increased to a great extent during recent years [1]; they are the second leading cause of death across the world and the third leading cause of death in economically developed countries [2].
In the recent past, much evidence has emerged that human pathogenic microorganisms have developed antibiotic resistance [1]. The existence of microbial strains with reduced susceptibility to antibiotics and the increased number of antibiotic-resistant bacterial strains can be caused by indiscriminate use of broad-spectrum antibiotics, immunosuppressive agents, intravenous catheters, organ transplantation, and the ongoing epidemic of human immunodeficiency virus (HIV) infections [3]. According to estimations in the USA, 2.22 million hospitalized patients had adverse drug reactions and 106,000 patients died in a single year [4]. Due to multidrug-resistant microbial strains and the adverse effects associated with synthetic antimicrobial drugs, scientists search for potential antimicrobial substances from various sources that can combat the issues associated with them [3,5].
Microorganisms, fungi, algae, symbiotic lichens and mosses, and higher plants are used to develop antimicrobial agents with novel mechanisms of action against microorganisms [6]. Throughout the world, medicinal plants and their products have been used in traditional medicine for centuries. Since the beginning of human civilization, plants and plant products have been used as medicines [4]. Due to the plethora of evidence documented, medicinal plants would be the best source from which to obtain a variety of active constituents to be used as antimicrobial agents [3]. Even today, up to 80% of the population in the world depends on ethnomedicine for their medicinal purposes [7]. Sri Lanka is rich in all three levels of biodiversity, namely, species diversity, genetic diversity, and habitat diversity. Out of the 3,300 flowering plant species in Sri Lanka, 830 (25%) species are endemic to the island [6]. Plant-derived medicines afford benefits such as profound therapeutic effects, few or no side effects, low cost, and easy accessibility [1,3].
In the current study, five Sri Lankan traditional medicinal plants were tested for in vitro antimicrobial activity against Gram-positive and Gram-negative pathogenic bacteria (Table 1). Asparagus falcatus has been used as a cure for tuberculosis and sore throat, and as an antiemetic [7]. Asteracantha longifolia has been assayed for antimicrobial activity [8]. Coriandrum sativum is not only used in ayurvedic medicine but is also important in Aboriginal medicine [9]. Epaltes divaricata is a seasonal medicinal plant and has been used to alleviate jaundice, urethral discharge, and acute dyspepsia [10]. Vetiveria zizanioides has been used as a cure for rheumatism and malarial fever and as an anthelminthic [2]. All five plants are used by traditional ayurvedic practitioners to alleviate different types of infectious diseases. It was hoped that this research would help identify which of the above five plants act against the pathogenic bacteria.
Since traditional medicine is growing rapidly, a subsequent demand for evidence of the quality, safety, and efficacy of traditional medical services and products has ensued. The scientific validation and creation of research data pertaining to the quality, safety, and efficacy of Sri Lankan traditional medicine has become a pressing issue. Such scientific knowledge can help create an evidence-based traditional medicine that is increasingly respected by other public health professionals. The presence of phytochemicals, in addition to vitamins/provitamins and minerals, in fruits and vegetables has recently been considered of crucial nutritional importance in the prevention of chronic diseases [11]. Thus, a complex mixture of phytochemicals in plants provides a better protective effect on health than a single phytochemical. Saponins, tannins, polyphenols, alkaloids, flavonoids, anthraquinones, glycosides, and reducing sugars are some of the widely known phytochemicals identified in medicinal plants. The plants used in this study have not been subjected to a complete phytochemical screening so far. Therefore, this study will close that knowledge gap and will provide a better understanding of the phytoconstituents responsible for their various reported actions.
Thus, the objective of this study was to identify the main groups of phytochemicals present in the five medicinal plants screened and to evaluate the antimicrobial activity of aqueous, ethanol, and hexane extracts of each medicinal plant against Gram-positive and Gram-negative pathogenic bacteria.
Plant Collection and Authentication. Fresh plant parts of
Asparagus falcatus, Asteracantha longifolia, Vetiveria zizanioides, Epaltes divaricata, and Coriandrum sativum (Figure 1) were collected from the gardens and villages of Southern Province, Sri Lanka. Plants were authenticated at the National Herbarium, Botanical Gardens, Peradeniya, Sri Lanka. e collected plants were washed with running tap water, air-dried and ground to a coarse powder, and stored in air tight bottles at 4°C.
Preparation of Plant Extracts.
Extracts were prepared according to the respective method used for the analysis of each phytochemical. Unless stated otherwise, all aqueous extracts were prepared by refluxing 2.6 g of powdered dried plant material in 30 ml of distilled water for 1 hr and concentrating the plant extract to a final volume of 20 ml.
Phytochemical Analysis.
Major classes of phytochemicals (tannins, alkaloids, phenolic compound, cyanogenic glycosides, cardiac glycosides, reducing sugars, saponins, and flavonoids) were determined by the simple and standard qualitative methods described by Trease and Evans [12] and Sofowora [13]. All methods were optimized with a positive control, and necessary precautions were taken to remove the interference from chlorophyll.
Preparation of Crude Extracts.
(1) Solvent Extraction. … The bacterial cultures were obtained from the Microbiology Department, Faculty of Medicine, University of Ruhuna.
Preparation of McFarland Standard.
McFarland number 0.5 standard was prepared by mixing 9.95 ml of 1% H2SO4 in distilled water and 0.05 ml of 1% BaCl2 in distilled water, in order to estimate bacterial density [14]. The preparation was stored in an airtight bottle and used for comparison with bacterial suspensions whenever required.
Antimicrobial Susceptibility Assays
(1) Disk Diffusion Assay. The antimicrobial susceptibility was initially assayed by the agar disk diffusion method [15]. Three concentrations (crude extract, 10-fold dilution, and 100-fold dilution) of each plant extract were prepared in 10% DMSO. Bacterial cell suspensions were adjusted to the 0.5 McFarland turbidity standard to prepare a 1 × 10⁸ bacteria/ml inoculum. Each bacterial suspension was inoculated on Mueller-Hinton agar plates, and the plates were then allowed to dry for 5 minutes.
The sterile filter paper disks (Whatman No. 1, diameter = 6 mm) were soaked in 10 μl of each plant extract. The extract-soaked filter paper disks were then placed on the inoculated Mueller-Hinton agar plates. A cefotaxime (30 μg) disk was used as the positive control, and a 10% DMSO-soaked filter paper disk was used as the negative control. Plates were incubated for 18 hr at 35 ± 2°C. After incubation, the zones of inhibition were recorded as the diameter of the growth-free zones measured in mm using a Vernier caliper.
(2) Minimum Inhibitory Concentration (MIC). Plant extracts that gave a positive result in the disk diffusion assay were used to determine the MIC using the microplate dilution method [16]. Serial 5-fold dilutions of the plant extracts were prepared in 10% DMSO, yielding seven serial dilutions of the original extract. An inoculum of each organism was prepared in Mueller-Hinton broth, and the turbidity was adjusted to approximately the 0.5 McFarland turbidity standard to prepare 1 × 10⁸ bacteria/ml. 150 μl of plant extract was added to each well of the 96-well microplate, and 50 μl of bacterial suspension was added to each well except the negative controls. Cefotaxime IV drug was used as the positive control. 10% DMSO and plant extracts without bacterial suspension were used as the negative controls. Microtiter plates were incubated at 35 ± 2°C for 24 hr. Antimicrobial activity was assessed by measuring absorbance at a wavelength of 630 nm.
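The dilution series and MIC read-out described above can be summarized in a few lines of code. The sketch below is illustrative only: the stock concentration, OD readings, and growth threshold are hypothetical values, not data from this study.

```python
def five_fold_series(stock_mg_ml: float, n: int = 8):
    """Concentrations produced by successive 5-fold dilutions of the stock."""
    return [stock_mg_ml / 5**i for i in range(n)]

def mic(concentrations, od630, blank_od, threshold=0.05):
    """Lowest concentration whose OD630 stays within `threshold` of the
    sterile blank, i.e. no detectable bacterial growth. Assumes the two
    lists are aligned and sorted from highest to lowest concentration."""
    inhibited = [c for c, od in zip(concentrations, od630)
                 if od - blank_od <= threshold]
    return min(inhibited) if inhibited else None

concs = five_fold_series(60.0)  # hypothetical 60 mg/ml stock
od = [0.04, 0.05, 0.06, 0.31, 0.52, 0.55, 0.56, 0.57]  # placeholder reads
print(mic(concs, od, blank_od=0.03))  # -> 2.4 (mg/ml)
```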
Statistical Analysis.
Studies were performed in triplicate. Data were expressed as mean ± SEM.
Phytochemicals.
Out of the main phytochemicals tested, tannins, phenolic compounds, cardiac glycosides, and flavonoids were present in all five plant extracts. Alkaloids were present in all plant extracts except Asparagus falcatus. Saponins were present in all plant extracts except Coriandrum sativum. Cyanogenic glycosides and reducing sugars were absent in all five plant extracts (Table 2).
Yield of Extraction.
The yields of extraction given by the different plant extracts studied are shown in Table 3. Various yields were obtained from the different plant extracts during extraction. The maximum yield was shown by the aqueous extract of Asparagus falcatus (34.73%), while the minimum yield was given by the hexane extract of Asparagus falcatus (0.5%).
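Extraction yield is computed as the mass of dried extract relative to the starting plant material. A one-line sketch follows, with hypothetical masses chosen only to reproduce the 34.73% figure above.

```python
def extraction_yield_pct(extract_g: float, material_g: float) -> float:
    """Percent yield: dried extract mass over starting dried plant mass."""
    return 100.0 * extract_g / material_g

# e.g. ~0.903 g of dried extract recovered from 2.6 g of plant powder
print(f"{extraction_yield_pct(0.903, 2.6):.2f}%")  # -> 34.73%
```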
Disc Diffusion Assay.
Among the five plants studied, Epaltes divaricata showed a significant inhibition zone in all three extracts (ethanol, aqueous, and hexane) and Vetiveria zizanioides showed a significant inhibition zone in two extracts (ethanol and hexane) against S. aureus (Figure 2). The concentrations of extracts varied among the different plants (Table 4). The antibacterial activities of the extracts, according to the zone of inhibition, ranged between 7.4 and 16.3 mm. The maximum zone of inhibition was observed in the ethanol extract of Epaltes divaricata (16.3 mm), and the minimum zone of inhibition was given by the aqueous extract of Epaltes divaricata (7.4 mm). The aqueous extract of Epaltes divaricata gave a 7.4 ± 0.2 mm zone of inhibition for the crude extract against S. aureus. The ethanol extract of Epaltes divaricata gave corresponding zones of inhibition of 16.3 ± 0.2, 13.4 ± 0.2, and 7.6 ± 0.1 mm, respectively, for the crude extract, 10-fold dilution of the crude extract, and 100-fold dilution of the crude extract, while Vetiveria zizanioides gave 12.1 ± 0.2 and 7.3 ± 0.3 mm, respectively, for the crude extract and 10-fold dilution of the crude extract against S. aureus. The hexane extract of Epaltes divaricata gave a 13.7 ± 0.1 mm zone of inhibition for the crude extract, while Vetiveria zizanioides gave 11.4 ± 0.2 mm for the crude extract against S. aureus. The positive control cefotaxime (30 μg) gave corresponding zones of inhibition of 31.2 ± 0.3, 32.9 ± 0.3, 33.7 ± 0.2, and 18.6 ± 0.3 mm, respectively, against S. aureus, E. coli, K. pneumoniae, and P. aeruginosa. However, due to the differences in concentrations of the extracts, the effectiveness of the plant extracts cannot be accurately compared by comparing the respective diameters obtained in the disc diffusion assay. Hence, the minimum inhibitory concentration was used to determine the effectiveness of the plant extracts accurately.
Minimum Inhibitory Concentration.
The MIC values obtained for the plants that exhibited antibacterial activity ranged between 0.003 and 2.4 mg/ml (Table 5). Ethanol, aqueous, and hexane extracts of E. divaricata gave MIC values of 0.48 mg/ml, 1.2 mg/ml, and 1.6 mg/ml, respectively, against S. aureus. Ethanol and hexane extracts of V. zizanioides gave MIC values of 2.4 mg/ml and 0.003 mg/ml, respectively, against S. aureus. Therefore, the highest antimicrobial activity was observed for the hexane extract of Vetiveria zizanioides.
Discussion
The search for potential antimicrobial drugs has increased during the past few decades due to the concomitant rise of antibiotic-resistant bacterial strains [5]. Studies conducted in several countries have reported that the use of compounds extracted from medicinal plants may be beneficial in the development of antibiotics [17]. There are limited data published on the antimicrobial activity of Asparagus falcatus, Asteracantha longifolia, Vetiveria zizanioides, Epaltes divaricata, and Coriandrum sativum extracts. Moreover, those reports are from other countries, where the soil composition of the respective region may alter the composition of active compounds and hence the antimicrobial activity present in each plant extract. In the current study, extracts from two plant species gave positive results during the initial antibacterial screening against S. aureus. None of the plant extracts were effective against Escherichia coli, Pseudomonas aeruginosa, and Klebsiella pneumoniae. Ethanol, aqueous, and hexane extracts of Epaltes divaricata gave maximum zones of inhibition of 16.3 mm, 7.4 mm, and 13.7 mm, respectively, against S. aureus. Ethanol and hexane extracts of Vetiveria zizanioides gave maximum zones of inhibition of 12.1 mm and 11.4 mm, respectively, against S. aureus. Zones of inhibition observed with the positive control, cefotaxime, for the organisms used in the study were 31.5 ± 0.2 mm for Staphylococcus aureus, 33.1 ± 0.2 mm for Escherichia coli, and 18.8 ± 0.2 mm for Pseudomonas aeruginosa, and were compatible with the published zones of inhibition of 25-31 mm, 29-35 mm, and 18-22 mm, respectively. The MIC was determined against selected bacteria to quantitate the activity of these extracts. Epaltes divaricata gave MIC values of 0.48 mg/ml, 1.2 mg/ml, and 1.6 mg/ml, respectively, for ethanol, aqueous, and hexane extracts against S. aureus. Ethanol and hexane extracts of V. zizanioides gave MIC values of 2.4 mg/ml and 0.003 mg/ml, respectively, against S. aureus. According to the literature, cefotaxime, the positive control used for this study, has a MIC value between 1 and 4 μg/ml. E. divaricata and V. zizanioides have previously been documented as effective against the growth of S. aureus, E. coli, and P. aeruginosa [10]. These differences can be due to the differences in solvents used for extraction in the studies. Some of the results of the present study agreed with their findings. However, the two species tested in the present study (E. divaricata and V. zizanioides) exhibited considerable activity against Gram-positive bacteria. None of the plant species tested in the present study were effective against Gram-negative bacteria.
In a study by Wigmore et al. [5], it was documented that aqueous extracts of plants showed less antibacterial activity than solvent extracts of plants. In the present study also, aqueous extracts showed less antibacterial activity compared to ethanol and hexane extracts. Low-polarity compounds present in the two active plant extracts may be responsible for this antimicrobial activity.
According to the WHO basic criteria, plants should be nontoxic for use as therapeutic agents. The five plants used in the study were previously evaluated for subchronic toxicity in mice, and it has already been documented that they are nontoxic in ICR mice [18].
In the present study, E. divaricata and V. zizanioides exhibited antibacterial activity against Gram-positive bacteria rather than Gram-negative bacteria. The difference in morphological constituents between Gram-positive and Gram-negative bacteria may be the reason for the differences in antibacterial sensitivity. The structural lipopolysaccharide components in the outer phospholipid membrane of Gram-negative bacteria cause the impermeability of the cell wall to antimicrobial chemical substances. Gram-positive bacteria have an outer peptidoglycan layer, which makes the cell wall more permeable to antimicrobial substances than the lipopolysaccharide layer. Therefore, the complexity of the cell walls of Gram-negative bacteria is higher than that of Gram-positive bacteria, and Gram-negative bacteria are consequently less susceptible to antimicrobial chemical substances than Gram-positive bacteria [8].
The compounds extracted during the extraction procedure depend mainly on the type of solvent. Water is the primary solvent used in traditional medicine, but in the present study, compounds extracted in the organic solvents (ethanol and hexane) exhibited more significant antibacterial activity compared to those extracted in water. Among the extracts investigated, the hexane extract exhibited the highest antibacterial effect, followed by the ethanol extract. These observations can be due to the polarity of the compounds extracted by each solvent and the ability of the extracts to diffuse and dissolve in the different culture media used in the study. Among the three solvents used in the study, water is the most polar solvent and hexane is the least polar solvent. The lowest MIC value was given by the V. zizanioides hexane extract. This MIC value is almost similar to the MIC value shown by cefotaxime, the positive control used for the study. When considering E. divaricata, the ethanol extract exhibited the lowest MIC value. According to the results of the present study, less polar compounds exhibited more antibacterial activity compared to more polar compounds.
Absence of antimicrobial activity does not mean that bioactive compounds are not present in the plant or that the plant has no activity against microorganisms. The presence of inadequate quantities of the active constituent or constituents in the extract to exhibit antimicrobial activity can be the reason for the negative results.
Phytochemical studies have shown that plants with antimicrobial activity contain bioactive constituents such as tannins, flavonoids, alkaloids, and saponins [19]. Polyphenols in plants include flavonoids, phenolic acids, stilbenes, and lignans [20]. Preliminary research indicates that flavonoids may modify allergens, viruses, and carcinogens and so may be biological "response modifiers." In vitro studies show that flavonoids also have antiallergic, anti-inflammatory, antimicrobial, anticancer, and antidiarrheal activities [21]. Saponins seem to stimulate the immune system [22]. Therefore, the presence of polyphenolic compounds, alkaloids, tannins, and saponins may contribute to the antimicrobial activity of the above plants.
Conclusion
The results obtained from this study provide evidence that the ethanol and hexane extracts of the Sri Lankan traditional medicinal plants E. divaricata and V. zizanioides, and the aqueous extract of E. divaricata, exhibited beneficial antibacterial activity against S. aureus (ATCC 25923). None of the other plant extracts were active against any of the microorganisms tested. The highest antimicrobial activity was observed for the hexane extract of Vetiveria zizanioides, which showed a MIC value in the same range as the positive control used. It can be concluded that a low-polarity active compound present in the Vetiveria extract may be responsible for the significant antimicrobial activity. Many polyphenolic constituents and alkaloids derived from plants are known to have antimicrobial activity. Therefore, the presence of these compounds in the above plant extracts may be responsible for their antimicrobial action. Further scientific evaluation of these plants should be done, including fractionation and further characterization of phytochemicals, to identify the active components responsible for the antimicrobial activity, as well as to adjudge the in vivo activities of these constituents.
Data Availability
The data used to support the findings of the present study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
"year": 2019,
"sha1": "703b489f9b37e704cbb6937933a0c19fd657e101",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/ijmicro/2019/7431439.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f5806d87bdabb1a98a4a36bc2863670ba29c9e01",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
Labour Taxation and Personnel Expenditure in the Romanian Public Sector
The global economic crisis, which had a strong impact on virtually all states of the world, brought additional challenges to the public sector. The governments had to choose between two alternatives: to decrease public expenditure by adopting austerity measures (the option chosen by most EU Member States) or to increase public investments in an attempt to stimulate economic growth (the alternative preferred and supported by the US and Great Britain). The paper at hand aims to analyze the public expenditure policy in Romania, as a result of the economic conditions imposed by the crisis, with a focus on the relationship between the incomes collected from taxes on labour and the public expenditure with the personnel employed within public institutions. We shall analyze and compare the figures regarding public expenditure for the wages of persons working in the public sector in the years prior to the crisis and following the adoption of the austerity measures. At the same time, we shall analyze the corresponding numbers regarding the amounts collected from taxes on labour. The goal of the paper is to identify the possible connection between the reduction of personnel expenses and the decrease of the budgetary deficit, which was the intended purpose of the austerity measures in the field of public employees' salaries. Since the labour tax is computed on the basis of the salary earned, we expect both the expenses with the personnel and the amounts collected from labour tax to decrease. However, this decrease will be in different percentages. The paper will analyze if the final balance between expenses with salaries and labour tax is positive or negative, in other words, if the austerity measures helped improve the budgetary deficit or deepened it. The final part of the research focuses on a comparative analysis between the EU Member States, with respect to the levels of taxation on labour, the percentage of labour tax in the GDP, and the public expenditure with the personnel, in an attempt to show if there are certain similarities or differences between EU and/or NISPAcee States.
In the first quarter of the year, the social partners sent the Government anti-crisis proposals; subsequently, after consultations, the executive made public a program contested by the trade unions and the employers' associations, which did not find their own measures included in it.
In April, the government approved the letter of intent negotiated with the International Monetary Fund (IMF) for a loan in the amount of EUR 19.95 billion, out of which EUR 12.95 billion from the IMF, EUR 5 billion from the European Commission (EC) and EUR 1 billion each from the World Bank (WB) and the European Bank for Reconstruction and Development (EBRD). The government claimed that the fundamental objective of the loan was the desire to preserve jobs and to relaunch and credit the economy, and, indirectly, to ensure the payment of salaries and pensions in Romania.
The year 2009 ended with a drop in GDP of 7.1% compared to the previous year. The recession had the general effect of reducing tax revenues at all levels of government. As consumer spending falls, revenues from indirect taxes fall; as unemployment increases, revenues from taxes on income decrease; as bankruptcies rise and profits fall, income from taxes on profits falls. The recession also increased payments for unemployment and other benefits and services. Both the fall in taxes and the rise in benefits increased government deficits (see Fig. 1).
Public expenditure policy in times of crisis
The current financial policy, aiming to end the economic crisis in Romania, apparently opted for the reduction of public expenditure as a saving solution. Moreover, the public expenditure of Romania's consolidated general budget was under the careful watch of the international institutions providing financial assistance to our country, in order to support it in surpassing the economic crisis and in ensuring a balanced economic development. Thus, the question arises of what public expenditure represents within the national economy as a whole, and what effects management errors in this field can have at the socio-economic level. Public expenditure occurs as a result of the economic-social relations manifested between the state and natural and legal persons, on the occasion of the redistribution and use of the state's financial resources, for the purpose of fulfilling its functions, on the basis of the Government's economic program (Georgescu, 2011).
In order to analyze the public expenditure comprised in the consolidated general budget of Romania, we will use the data regarding budgetary execution 1. Examining the data in Tab. 1 and Fig. 3, we can see that total public expenditure in the consolidated general budget registered a permanent increase, at different rates, both in absolute value and as a percentage of GDP.
Tab. 1: Public Expenditure in
The expenses of the general budget increased in nominal terms in 2010 by 4.6% compared to 2009. However, in structure, the expenses had different evolutions. Thus:
- Expenses with interest for financing the deficit and for refinancing the public debt increased by 20% in 2010. The volume of expenses with interest reached 7.3 billion lei, becoming an important risk factor for the control of the budgetary deficit;
- Expenses with social assistance increased by 7.3% in 2010 compared to 2009, as a consequence of the increase in expenses with unemployment aid;
- Expenses with goods and services also continued to rise: compared to 2009, an increase of 5.2% was recorded in 2010. At the level of the local administration, the increase of these expenses is due to the undertaking of the financing of the activity of Agricultural Chambers, as well as to the decentralization of health units, taken over by the local public authorities. At the same time, the increase of these expenses was determined by the increase in the expenditure of the National Single Fund of health social security, for the payment of outstanding amounts;
- In recent years, the Government was able to stay within the budgetary deficit targets by sacrificing investments. The expenses meant for them, which include capital expenses as well as development programs financed from internal and external sources, although amounting in 2010 to 33.7 billion lei, respectively 6.6% of GDP, registered a decrease of 11.7% compared to 2009;
- Personnel expenditure decreased in 2010, compared to the previous year, by 8.6%, through both lay-offs and reductions of salaries in the public sector.
With these data available, we reach the following conclusion: the entire adjustment of the budgetary deficit, which demonstrates the governmental "performance" in Romania, was achieved due to the 25% reduction of the public sector salaries.
In addition to this severe reduction, the Government also raised the VAT rate by 5 p.p., from 19% to 24%.
The increase in taxation brought about by the VAT rise and by other less important measures ran up against the implacable decline of the economy, so that, although the Government collected more money from VAT (+14.3% in 2010 compared to 2009), total fiscal collections stagnated. Practically, the decreases recorded in collections from profit tax (-4.9%), income tax (-3.2%) and social security contributions (-4.5%) counterbalanced the higher VAT collection.
Moreover, the deficit of the general consolidated budget of 33.3 billion lei, i.e. 6.5% of GDP, registered at the end of 2010, is below the deficit target of 34.6 billion lei set as the objective of the budgetary policy for 2010 and established in the additional letter to the Stand-by agreement concluded with the IMF.
Was it really necessary to reduce personnel expenditure in the public sector by 25%? We ask this question considering the effects of this reduction on consumption and on the main macro-economic indicators. Namely, the reduction of public sector wages also reduced budgetary incomes, both from VAT (due to lower consumption) and from the direct taxes on public sector employees' wages.
The elasticity of consumption with respect to income, for families with at least one employee, where at least one employee works for the state and which can save money monthly, is approximately 60% according to the NSB (National Syndical Block) estimates. This elasticity is used for the realistic scenario regarding the impact of the decrease of public sector wages on consumption and GDP (60% of the 25% reduction will be reflected in the decrease of consumption). The realistic scenario is coherent with the situation of a family made up of one state employee and one private employee (hence, an average decrease of 15% in the family income). The optimistic scenario takes into consideration a lower elasticity (40%), assuming that there are other resources for saving, and the pessimistic scenario takes into account a higher elasticity (80%), assuming that other expenses are more rigid (utilities, bank installments). The pessimistic scenario is coherent with the situation of a family in which both spouses are state employees. The computation was performed at the level of a family, using the data from the Inquiry on Family Budgets executed by the NSI (National Statistics Institute). The weight of families with at least one state employee in total consumption is approximately 30%, and the weight of consumption in GDP is approximately 60%. Therefore, in the realistic scenario, we are dealing with a decrease in family consumption of 15%, which leads to a decrease of 4.5% in total annual consumption, which is reflected in a contraction of annual GDP by 2.7%, respectively 1.58% for the period June-December 2010. Thus, the effect of the decrease in salaries on aggregate demand will contribute to a reduction of GDP by 1.35% from July until the end of 2010, in the realistic scenario.
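As a minimal sketch of the scenario arithmetic above (the weights and elasticities are those quoted in the text; the variable names and the script itself are illustrative only):

```python
# Back-of-the-envelope computation of the wage-cut impact on GDP sketched
# above. All percentages come from the text; names are illustrative only.
WAGE_CUT = 0.25            # 25% cut in public sector wages
FAMILY_WEIGHT = 0.30       # weight of affected families in total consumption
CONSUMPTION_IN_GDP = 0.60  # weight of consumption in GDP

for label, elasticity in [("optimistic", 0.40), ("realistic", 0.60), ("pessimistic", 0.80)]:
    family_drop = elasticity * WAGE_CUT             # drop in family consumption
    consumption_drop = family_drop * FAMILY_WEIGHT  # drop in total consumption
    gdp_drop_annual = consumption_drop * CONSUMPTION_IN_GDP
    print(f"{label}: family -{family_drop:.1%}, total consumption -{consumption_drop:.2%}, "
          f"annual GDP -{gdp_drop_annual:.2%}, H2 2010 -{gdp_drop_annual / 2:.2%}")
```

In the realistic scenario this reproduces the figures above: a 15% drop in family consumption, 4.5% in total consumption, 2.7% in annual GDP, and about 1.35% for the second half of 2010.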
The reduction of public sector wages will also have repercussions on budgetary income. Although it is difficult to model the indirect effect (the impact of decreased consumption on turnover, profit, the number of employees and, implicitly, on the profit tax and the tax on the work force paid by private companies), it cannot be considered negligible.
By contrast, we can compute, approximately, the direct effect of the 25% salary drop on budgetary incomes, given that state employees' wages have a weight of 38% in total wages in the economy.
Next, we consider the following scenario: the Government decides to increase the VAT rate by 5 p.p., but does not reduce public sector wages by 25%. We computed the impact of this hypothesis on the consolidated general budget. Under the hypothesis of maintaining public employees' wages in 2010Q3 (third quarter) and 2010Q4 (fourth quarter) at the level registered in 2009Q3 and 2009Q4, personnel expenditure would have increased by 0.6% of GDP. At the same time, the incomes collected from the tax on wages and the social security contributions would have increased by 0.3% of GDP. Ceteris paribus, the budgetary deficit at the end of 2010 would have reached 6.75% of GDP, still below the level agreed with the IMF (6.8% of GDP). Still, let us not forget that we did not take into account the increase in VAT incomes under this hypothesis, an element which, of course, would further improve the deficit level.
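A hedged sketch of this counterfactual (all inputs are the text's own figures; note that the reported 6.75% result implies an unrounded baseline slightly below the 6.5% quoted earlier):

```python
# Counterfactual deficit if public sector wages had not been cut while the
# 5 p.p. VAT increase was kept. All inputs are percentages of GDP from the text.
baseline_deficit = 6.5          # reported end-2010 deficit, with the wage cuts
extra_personnel_spending = 0.6  # if Q3/Q4 2009 wage levels had been maintained
extra_labour_tax_revenue = 0.3  # wage tax and social security contributions

counterfactual = baseline_deficit + extra_personnel_spending - extra_labour_tax_revenue
print(f"~{counterfactual:.1f}% of GDP (text: 6.75%; IMF target: 6.8%)")
```

The net budgetary effect of cancelling the wage cut is thus only 0.3 p.p. of GDP, which is the figure taken up again in the conclusions.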
Still, we agree that the reform of the total salary fund must not be reduced to applying across-the-board cuts of different percentages. As this measure was applied, it was equivalent to a regressive tax: the undifferentiated reduction of wages has perverse effects and does not promote fiscal sustainability. Such reductions affect the quality, quantity, and availability of public services, which, in the end, can result in the continuous erosion of the population's trust in the government. Such (mass) reductions have only a limited and short-term effect. Usually, in such a situation, the first people to leave the public sector are those with superior training, which leads to a reduction of labour productivity in the budgetary sector. When the public sector hires again, it will have to allot additional amounts for the professional training of the newcomers.
How was this decision reached? Is Romania a particular case?
European examples of fiscal crisis management
A number of EU countries took policy decisions to cut the pay of government and/or public sector employees between 2008 and 2010. It is worth mentioning that half of these countries had reached IMF deals.
Greece. In February 2010, the government of Greece adopted a package of cuts in public spending which included a 7% cut in earnings for all public sector jobs, as well as the cancellation of all agreed pay rises. The pay of public employees was further reduced following the agreement in March 2010 by the EC, the IMF, and the European Central Bank on a support package for Greece which included a 'Memorandum of understanding' on economic and fiscal policies. This led to a new law in May 2010, which included an 8% cut in earnings of all government employees and a 3% cut in earnings of all workers employed by state-owned companies. Public sector pay is frozen until 2014 (Lampousaki, 2010; Kapsalis, 2010).
Hungary received a support loan from the IMF in October 2008. Part of the agreement was originally that public sector workers would lose their bonus, worth 8% of their pay, and face a pay freeze; the cut in earnings was later restored. However, in June 2010 a newly elected government announced a new package of measures designed to reduce the deficit to the level of 3.8% required by the EU/IMF, which included a 15% cut in the salaries of all 700 000 public sector employees (Bretton Woods Project, 2008).
Ireland. The government confirmed unilateral pay cuts in the budget of December 2009, which specified that from 1st January 2010 the basic salaries of public employees would be reduced as follows: 5% on the first €30 000 of salary; 7.5% on the next €40 000 of salary; 10% on the next €50 000 of salary. This produced overall reductions in salaries ranging from 5% to just under 8% in the case of salaries up to €125 000 (Callan, Nolan and Walsh, 2010).
Latvia faced acute problems arising from the financial crisis in 2008, which led to it securing an IMF stand-by arrangement worth more than $2.3 billion at the end of 2008. Public sector pay was cut by a succession of measures in 2008 and 2009: in mid-2008 additional payments and bonuses were cut; conditionalities for the IMF deal included a 15% reduction in local government employees' wages and a 30% cut in the wage bill in 2009; in July 2009 salaries of state sector workers were cut by between 15% and 20%; from September 2009 teachers' pay was cut by 28% (Bretton Woods Project, 2009; Curkina, 2009).
Lithuania. In June 2009 the Lithuanian government announced unilaterally that it was planning to cut the basic salaries of public sector employees by 10%, with effect from August. The trade union confederation rejected the decision and organized action, including a hunger strike; the government then entered discussions with the unions and agreed to suspend the unilateral decision. An agreement was signed in October 2009 between the government, private employers and a number of trade union organizations. It includes an obligation not to reduce basic salaries for civil servants, but also an overall austerity agreement involving general reductions in wages and social benefits. The prime minister claims that the austerity measures have been successful because they are based on 'social consensus'. However, some independent trade unions and civil society groups refused to sign the 2009 agreement because of the plans to cut pensions, and criticized the process for lack of transparency and for agreeing that the burden of the crisis should fall on ordinary people (Blaziene, 2009).
Portugal. In early 2010, as a way of reducing the budget deficit, the government proposed a general freeze on wages, cuts in public sector pensions and 5% pay cuts for senior civil servants and politicians only, and unilaterally decided to cut unemployment benefit and the minimum wage. This was strongly opposed by the unions and others, including a strike of 300 000 workers in March and one of the largest demonstrations ever recorded in Portugal, in May 2010. The private employers also opposed an increase in the national minimum wage, as agreed in the 2006 tripartite agreement: the government approved the increase, but provided a subsidy for employers (Lima, 2010a, 2010b).
Spain. In response to international markets forcing up the cost of borrowing by Spain, the government introduced a number of measures in 2010 to try to reduce the budget deficit. In May 2010 the government announced a cut in public sector pay of 5% on average, a freeze on civil service pay in 2011, a freeze on pensions, and reductions in some benefits (Miguel, 2010).
From the data above, it can be seen that Romania was not the only country that decided to reduce public sector wages; however, the percentage applied there was, by far, the greatest.
Let us look at the evidence concerning comparative movements in public and private sector wages since the start of the crisis. Tab. 2 sets out data for the European countries on the changes in wages and salaries costs for the public and private sector. The data is presented using Eurostat's classification. The information must be used carefully, because it refers to the statistics of business economy activities (aggregated according to the homogenous activity), according to NACE Rev. 2, and the public sector includes public administration, education, healthcare and social assistance (it includes the private sector for education, health and social assistance, and excludes the armed forces and assimilated personnel). These statistics do not take into account the form of financing, their goal being to supply information on economic activities according to NACE 2. The data is incomplete for a number of countries, with data covering the whole of the public/private sector, as defined above, for both 2008Q1 and 2010Q1. Given all this caution, it is still possible to identify some patterns in the relative movements in private and public sector earnings in the 2-year period since the recession (between the first quarter of 2008 and the first quarter of 2010). In 8 countries, public sector earnings increased more rapidly, or decreased less, than earnings in the private sector. We include here the case of Lithuania, where public sector earnings fell by 3.6%, less than the 9.5% fall in private sector earnings, and Estonia, where public sector earnings fell by 0.5%, less than the 3.7% fall in private sector earnings. Only in Bulgaria did public sector earnings rise more slowly than private sector earnings, though close to the private sector rate. In 5 countries (Greece, Spain, Hungary, Portugal, and Romania) public sector earnings fell relative to the private sector.
Tab. 2: Change in wages and salaries, Europe, 2008Q1 -2010Q1
Within the private sector, earnings in financial services performed relatively badly: on average there was a fall even in nominal terms, and in all countries earnings in financial services did much worse than the general movements in the private sector. In half of the countries, public sector pay performed significantly better than the financial services sector alone; earnings in the electricity and gas sectors also did consistently better than earnings in financial services.
However, if we analyze the data reflecting the changes that occurred in 2010, we notice that in all countries except Lithuania, public sector earnings fell relative to the private sector. The highest discrepancy is registered in Romania, where public sector earnings dropped by 21.5%, while private sector earnings increased by 2.7%. Most of the countries involved in the process of reducing public sector wages have been subject to external economic pressures, including pressure from the European Commission to keep deficits below the 3% level specified in the Maastricht Treaty, and policies required by the IMF as conditions for loans supporting national currencies.
Tab. 3: Change in wages and salaries, Europe, 2010Q1 -2010Q4
Is there an economic justification for cutting public sector wages? According to a document published in June 2010 by the ECB (see Holm-Hadulla, 2010), the evidence seems to suggest that there is no scientifically solid argument that would link public sector pay with the economic recession. Nor is there evidence to support the idea that public sector wages reflect the business cycle.
Moreover, at the level of the EU countries, there is no clear relation between public and private wages. In countries such as France or Finland, where the number of public employees is high, the salaries of the private and public sectors are almost identical. Still, there are countries where the average wage of a state employee exceeds the average salary in the private sector by nearly 50%.
However, we must remember that public sector pay determination is much less likely to affect private sector wage settlements where trade unions have negotiating power, which coincides with a weaker influence of public wages.
In the previous part of the paper, we attempted to demonstrate that it was not public sector wages that caused the crisis; rather, governments proceeded to reduce them in order to achieve economic growth. As indicated, the reduction of public sector wages in Romania brought about not merely the reduction of public expenditure, but also the diminishing of the taxes levied on (employed) labour income, which are usually withheld at source (i.e. personal income tax levied on wages and salaries plus social security contributions).
In addition to the reduction of salary expenses in the public sector, Romania proceeded to a dramatic, sudden, violent reduction of the number of state employees, without distinguishing criteria and, especially, without preparing for the absorption of the laid-off work force. In the almost total absence of measures to stimulate the real economy that could compensate for the massive personnel lay-offs, the prospect of increasing budgetary incomes and, respectively, the salary incomes of state employees appears improbable in the immediate future.
That is why we shall study the impact of labour taxation on the labour market.
Supporting labour demand and monitoring incentives to work call for the assessment of both tax and benefit systems. The tax burden on labour, as measured by the tax wedge, is on average very high in Europe, although substantial differences exist across Member States. This heavy tax burden has been considered by some analysts as one of the main factors behind the unsatisfactory European employment performance in recent years.
The tax barrier to employment is usually measured by the tax wedge: the proportional difference between the cost of workers to their employer and the amount of net earnings that the worker receives (take-home pay). The tax wedge is composed of several elements. First, employers have to pay employers' social security contributions. Second, employees have to pay social security contributions on their wage income. Finally, the labour income is subject to the personal income tax. The tax wedge is calculated for different household types and different income levels relative to the gross wage earnings of an average worker. The effect of the tax wedge on labour demand and labour supply (and eventually on employment) depends on whether and to what extent the tax burden increases the total labour cost for the employer or is transferred onto the worker, translating into a lower net wage. When increasing the total labour cost, taxes on labour (notably in the form of employers' social security contributions) tend to reduce labour demand. On the labour supply side, taxes levied on wages (both direct taxation on labour income and employees' social security contributions) reduce net income and drive a wedge between the marginal product of labour and the marginal value of leisure. They thus tend to discourage the willingness to work, especially at the lower end of the wage scale, due to the higher labour supply elasticity of low income workers. In the EU, employers' social security contributions constitute the largest part of the tax wedge for the single average income worker in about two thirds of the EU countries (17.6% of labour costs for the unweighted EU average in 2009). The second largest component of the tax wedge is income tax (12.4%), followed by employees' social security contributions (9.6%). Compared to the European average, Romania's situation is as follows: indeed, the largest part of the tax wedge is constituted by the employers' social security contributions (3 p.p. higher than the EU average), the second largest component is the employees' social security contributions (22.7 p.p. higher than the EU average), and lastly, the income tax (2.8 p.p. lower than the EU average).
Tab. 5: The composition of tax wedge in 2009, single average income worker
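A minimal sketch of the tax wedge computation defined above (the component shares are the EU unweighted averages quoted in the text; the function itself is a generic illustration, not from the paper):

```python
# Tax wedge = proportional gap between total labour cost and take-home pay.
def tax_wedge(labour_cost, take_home_pay):
    return (labour_cost - take_home_pay) / labour_cost

# Composition as shares of labour cost, EU unweighted average, 2009 (from text):
employer_ssc, income_tax, employee_ssc = 0.176, 0.124, 0.096
print(f"EU average tax wedge: {employer_ssc + income_tax + employee_ssc:.1%}")  # 39.6%
```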
Moreover, a recent study performed by KPMG (2010) places Romania in 4th position in terms of effective employer and employee social security rates on USD 100 000 of gross income, and in 3rd position for USD 300 000 of gross income (after France and Belgium). The study also reveals that these effective rates are the same regardless of the level of gross income, i.e. the effective employer social security rate is 27.2% and the effective employee social security rate is 16.5%. With these figures in mind, we strongly suggest the reduction of social security contributions for employers. A reduction by 3 p.p. of the SSC paid by employers and employees (from a total of 44% to 41% of the contributions afferent to each gross salary) could lead to the creation of 100 000 new work places. On the other hand, the state collects from social contributions approximately 22.9 billion lei over a period of 6 months, at a contribution level of 44% of employees' gross incomes. If the contribution percentage is reduced to 41%, the amount collected by the state decreases to 21.3 billion lei, which means that collections to the budget are reduced by 1.6 billion lei (approximately 380 million Euros), money that could remain at the disposal of companies. This measure would mean a reduction of the employer's cost with wages, but the impact depends on the company size. At an average wage of 2 000 lei in a company with 1 000 employees, the reduction would mean savings of 60 000 lei per month (approximately 14 300 Euros), but in a company with two employees, at the same average wage, the reduction would be insignificant, at 80 lei (19 Euros). Still, for a company with 100 employees who receive the average wage in the economy, with an average gross expense of the employer of 2 500 lei/employee, the savings derived from the 3 p.p. reduction of the employer-paid social security contributions would reach 90 000 lei, meaning 21 500 Euros per year, money with which an employer would be able to support the salary expenses of at least 3 new employees.
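As a rough sketch of the employer-savings arithmetic above (the helper function is illustrative; the wage and headcount figures reproduce the text's examples):

```python
# Monthly employer saving from a 3 p.p. cut in social security contributions,
# computed on the gross wage bill. Figures follow the examples in the text.
def monthly_saving(gross_expense_per_employee, n_employees, cut=0.03):
    return cut * gross_expense_per_employee * n_employees

print(monthly_saving(2000, 1000))      # 60000 lei/month for 1000 employees
print(monthly_saving(2500, 100) * 12)  # 90000 lei/year for 100 employees
```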
A big problem for Romania is the low number of legally employed workers. In fact, Romania has reached its lowest number of employees in the last 50 years: only 4.095 million persons with legal employment contracts at the end of January 2011 2.
The reduction of social contributions is a good measure for stimulating economic growth because, at the current level, social security contributions remain a fiscal burden on the employer. A 3 p.p. reduction of the social security contributions rate paid by the employer would be good news for the private sector, given that this sector has experienced more than 700 000 lay-offs since 2009 (the year in which it was decided to increase the pension contributions paid by the employer from 18.5% 3 to 20.8%, the current level).
Considering the high level of the social contributions rate, we believe that their reduction is necessary for increasing Romania's competitiveness and attractiveness to investors.
Conclusions
Public administration, or, better said, its professional quality and moral responsibility, represents the key institutional factor of the political efficiency of governance in this period, which is difficult in economic, financial and social terms. The reduction of the wages of public sector employees is a strictly accounting issue, which has only a minor effect on improving the condition of the consolidated general budget. As demonstrated in the paper, the impact of the wage reduction, combined with the loss of income from labour taxation, was 0.3% of GDP. Moreover, we performed a simulation of the hypothesis of cancelling the decision to reduce public sector wages while maintaining the 5 p.p. increase of the VAT level, and we demonstrated that, in this case, the deficit of the general consolidated budget would still have fallen within the limit agreed with the IMF. Still, cutting public sector wages may have the adverse effects of salary constraints, especially the reduction of moral responsibility or the decrease of professional competence, which we consider crucial for increasing the efficiency of public administration, including for the increase of the budgetary collections which would support the wages, which, in their turn, support moral responsibility and professional competence.
Corrected by the total consumer price index, public expenditure recorded a real negative growth of 30.67%. At first glance, this appears to be a positive fact, but this reduction is not corroborated by an increase of investments generating economic added value and, therefore, it (namely, the cutting of public expenditure) cannot be considered a measure stimulating economic growth. Moreover, at the EU level, Romania has the lowest percentage of expenditure in GDP (41.0% compared to the EU average of 50.8%, figures according to the ESA95 methodology), but public incomes also have the lowest percentage of GDP (32.4% compared to the EU average of 44.0%, figures according to the ESA95 methodology). Hence, the effect of the measure to diminish public expenditure on increasing incomes is null. That is why we agree with those opinions that support the reorientation of public budget management towards measures for increasing the degree of collection of budgetary incomes (e.g. combating tax fraud and evasion, diminishing the black economy) and for reducing the social contributions shares paid by employers, rather than a financial policy focused on the drastic reduction of personnel expenditure, as has been the case in the recent period.
In countries that experience market pressures, or where IMF or EU programs (or a combination of the two) are in place, the impact on public sector pay comes more from the political responses to the government stimulus measures. These responses are inherently political because they explain the economic mechanisms that make it necessary for the burden of the economic recession to be shared among all societal actors.
"year": 2011,
"sha1": "d55c0e7fb8773bc5c0fb060529a521ed330821db",
"oa_license": null,
"oa_url": "https://www.vse.cz/polek/download.php?jnl=efaj&lang=en&pdf=33.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "892da6b4ed2a30dad02654b8aba2ecc307c9a727",
"s2fieldsofstudy": [
"Economics",
"Political Science"
],
"extfieldsofstudy": [
"Economics"
]
} |
252258069 | pes2o/s2orc | v3-fos-license | Sustainable Hyperautomation in High-Tech Manufacturing Industries: A Case of Linear Electromechanical Actuators
Hyperautomation is a promising but sparingly implemented concept in intelligent manufacturing. One of the reasons for the suboptimal adoption of hyperautomation is the large gap between current theoretical frameworks and practical methodologies and tools that can be applied in a real industrial production scenario. This situation has become much more complicated in high-tech enterprises, which face a particular set of issues in terms of innovation, cost-effectiveness, and supply chain management in today’s globalized environment. This manuscript provides a new conceptual business framework and technological background for achieving sustainable hyperautomation in the manufacturing of linear electromechanical actuators (LEMA), a key component of several cyberphysical actuators. A set of digital tools and innovative concepts, such as intra-enterprise 3-level factory and definitive designs based on unified solutions, which enable mass customization and offer up to 1000 variants of the LEMAs, are introduced to achieve synergistic interaction between different business functions and provide significant cost and technological advantages. To make manufacturing more customizable, a modular design approach is used, and simultaneously, to facilitate mass production, the focus is given on roller screw transmission modules, representing approximately three-fourths of the added value of LEMA. Furthermore, the concept of synergetic forward integration is proposed and explained using an example of robotic resistance spot welding. This framework involves a closed loop of industrial mature digital tools that enables autonomous product design and manufacturing via Responsive R&D (Research and Development) and feedback-driven dynamic interactions with the market and production system. These steps allow intelligent and automatic decision making throughout the digitally connected systems within the company and out of the company through a digital networked connected intra-enterprise world inside the supply chain with minimal human intervention.
in the context of sustainable hyperautomation in the high-tech industry.
Electromechanical actuators (EMAs) are integral components of CPSs used in various industries. With the growing trend towards automation and robotics, the demand for EMAs is expected to grow. However, because of stiff competition in this segment, there has been a substantial reduction in the market price of EMAs. Recent economic crises have adversely affected market dynamics. In the context of decreasing demand, falling market prices, and technological advances, sustainable and intelligent manufacturing of EMAs is only possible by developing a reconfigurable production line for mass customization in small and medium batches, while maintaining technological superiority and production costs comparable to mass production.
Automating manufacturing using integrated DTs and predictive models allows for iterative real-time product and production process optimization. However, technical complexity, innovation challenges, rigidities caused by external and internal systems, and turbulent market dynamics pose a substantial barrier to high-tech industries achieving successful digitalization and automation [20], [21]. Furthermore, although digitalization has received great interest in scholarly research and managerial practices, there is limited understanding of the comprehensive business framework that can be applied to achieve hyperautomation in high-tech manufacturing firms. Indeed, even in industries where CPSs and other digital tools have been implemented, to the best of the authors' knowledge, automation objectives have only been partially realized, and there is no prior study reporting a business framework for hyperautomation in a high-tech firm.
The term ''de-massified production'' was first used in 1980, and in 1993, the concept of mass customization was introduced [22]. In 1995, a description of the structure of a decentralized enterprise appeared [23]. Wang et al. proposed a framework to bridge the gap between mass customization and mass personalization using I4.0 technologies [24]. Specifically targeting intermediate product configurations that are neither generic nor standardized, Song et al. proposed an uncertain decision-making model for mass personalization of production within I4.0 [25]. These authors presented the theoretical foundations and practical implementation of an assembly system for high-tech products, using the principles of standardization and redundancy. Mourtzis et al. presented a web-based support platform for mass customization and personalization [26]. The platform is responsive and allows interaction with customers during the product design phase. The proposed solution was integrated with a decentralized manufacturing platform implemented using web technologies. In a recent study, Lee et al. demonstrated the feasibility of an OrderAssistant system that generates product specifications from customers' voices [27]. To maximize customer satisfaction, the authors used the Kano model and various optimization methods. The resulting characteristics were transferred to a top-level decision support system that allowed for the simultaneous design of a new product configuration and a change in the production cycle.
Recently, significant attention has been paid to the reconfiguration and optimization of assembly production using a late customization approach [28]. Rossit et al. presented a framework for reconfiguring the assembly line sequence in the final stages of production depending on the requirements of the customer [29]. It was suggested that the framework be implemented as an interactive online system for setting […] methods that can be easily combined or transformed into smart distributed scheduling [30], [31]. The emergence of big data and artificial intelligence has brought new insights into innovation in decision support systems (DSS) [32], [33]. The main impact and key aspects of these smart systems are the product lifecycle approach and the use of DTs. For example, an effective DSS can be built using a set of digital tools to describe and model a product and its production processes. This is also termed the DT of production facilities and involves collecting, storing, and using data at all stages of the decision-making cycle. The methodology for calculating metrics of the complexity of the production of customized products has also been recently implemented in a real enterprise that manufactures laser-processing equipment [34]. It was also shown that […]
Recently, Grassi et al. [35] proposed a semi-heterarchical manufacturing planning and control architecture. Based on this architecture, the production management model can dynamically distribute assignments according to various dispatch rules based on the queueing theory. The performance of the model was evaluated for various production scenarios using hybrid modeling systems. Modeling was carried out exclusively for the shop floor, but the authors claimed that the methods under consideration could also be applied to dynamic dispatching at other levels of enterprise management. It is also worth noting that the developed planning algorithms did not consider feedback from the market.
Going beyond what is currently known, this paper presents a comprehensive account of a sustainable hyperautomation approach, describing the application of a novel methodology for achieving hyperautomation in the manufacturing of LEMAs within cyberphysical production systems. The business framework associated with the hyperautomation of LEMA production relies on responsive Research and Development, mass customization, and the use of DTs for Industry 4.0-compliant reconfigurable products, production processes, production control and management, and added-value services for intra- and inter-enterprise businesses. Table 1 compares the properties of frequently reported hyperautomation approaches with the framework provided in this study for sustainable hyperautomation, highlighting the differences and the novel aspects introduced in this manuscript.
At this point, it is important to reinforce the fact that the applicability and particularly the impact of the hyperautomation approach discussed in this paper has been validated based on the real experience with hyperautomation implementation at Diakont premises, a prominent multinational corporation that develops and manufactures a wide variety of high-tech goods in different regions of the world [36].
Following this introductory section, which includes a brief literature review addressing relevant related works, Sections 2, 3, and 4 introduce and develop the novel hyperautomation components. In these sections, after providing the major characteristics of the business framework, the technological background for achieving sustainable hyperautomation in the manufacturing of LEMAs is presented. A set of digital tools and new concepts, such as the intra-enterprise 3-level factory and definitive design-based unified solutions that allow mass customization and provide over 1000 LEMA variations, are also described. These novel techniques promote synergistic interactions across many business functions while providing considerable economic and technological benefits. The complete framework, based on the application of the DIN Specification 91345 RAMI4.0, as it has been codified and effectively implemented in the Diakont industrial system, is described, together with an explanation of the research methodology, in Section 5. Section 6 discusses the impact and mainly favorable effect of the hyperautomation method on production costs and overall return on investment. Finally, […] [51].
The following discussion is centered on a specific example of hyperautomation in the manufacturing of LEMA with Roller Screw Gear (RSG); however, the core concepts of the hyperautomation framework presented in this manuscript have also been employed by the authors in the manufacturing of components of CPS, such as feedback sensors, servo drives, and electric motors.
The market for LEMA has grown in the recent past and is expected to grow further. However, increased competition and economic crises have reduced the market prices of LEMAs to unexpectedly low levels. The prevailing market prices are much lower than the forecasts for different periods. Taking the example of LEMA with roller screw transmission for spot contact welding, Figure 1 illustrates the actual market price, forecast market price, and target production cost [52], [53], [54]. In 2012, the market price was EUR 4500, and the market forecast indicated that the price would decrease by approximately 15% and stabilize in the coming years. However, the market price followed a markedly downward trend compared to the expected price, forcing industries to review their target costs significantly in 2017. In turn, the reduction in the target production cost makes it necessary to revise the product design and production process. In particular, the market price for 2012 allowed the manufacturing of customized products in small and medium batches and offered additional technical advantages to the product, such as longer service life and an integrated system of lubrication replacement. In 2015, the business landscape changed remarkably with the arrival of Asian manufacturers, forcing European and American car manufacturers to reduce their costs. This development led to a further reduction in the market price of LEMA as part of the spot welding equipment [55].
Since price is the primary factor affecting consumer decisions, the presence of additional characteristics such as an increased life cycle and additional features ceases to be a competitive advantage, and a customized product with technological superiority over competitors is the only means to sustain the competition. With a current target cost, which has decreased by 2.5 times, the unified modular design of LEMA using reconfigurable assembly lines and integrated automation and digitalization of the production cycle is a plausible means to remain competitive. The next section describes the […]
The following methodology is based on Diakont's experience in achieving successful hyperautomation for sustainable manufacturing of LEMAs at Diakont premises in Lucignano, AR, Italy.
One of the key aspects of sustainable manufacturing of LEMAs is identifying areas that can offer synergistic advantages. In several scenarios, manufacturing a complete CPS rather than just LEMAs can provide a synergistic advantage in terms of cost and technical quality. For example, the CPS involved in robotic resistance spot welding consists of several components, including an industrial robot, a welding gun with electrodes, and a LEMA that controls the gun during welding. Analysis of the system reveals two tasks to be solved: the first is the delivery of a welding toolset to the welding point using an industrial robot, and the second is the primary technological cycle of welding performed by the welding gun and actuator. While the first task is auxiliary and can be performed by any five- or six-axis industrial robot, the second task is critical and impacts the performance and quality of the entire process. The technological setup that implements the primary welding cycle is currently not independent of the CPS. The control of the welding cycle was assigned to the control system of the robotic arm, which sent a signal to the welding current controller to perform welding and control the actuator that provided the closure of the gun with a given force. The required timing diagram of the force is provided by either the servo drive of the robot or a separate actuator servo drive. Any type of architecture implies certain restrictions caused by the need for motor feedback: the robot controller may not interact with any external servo drive, and the robot servo drive may not interact with any sensor. Thus, it is advisable to create a ''smart tool'' as a separate CPS that would implement the primary technological cycle of welding, an additional ''7th'' axis of an industrial robot.
Such a system consists of an actuator, position and force sensors, a control device implementing the functions of a controller, and a servo drive that will be integrated with the welding controller [56], [57]. The advantages of such a system include independence from the industrial robot that is needed to deliver the tool to the welding point and simplified interaction with the industrial robot. It sends a signal that the necessary position is taken and that the welding can be started, and receives a response, which means that the welding cycle is completed. This decentralized control approach is particularly relevant for upgrading existing welding lines when the robot has already been defined and a pneumatic solution was previously used as a gun actuator. The price advantage of the integrated solution over the currently used analogs is the use of a less expensive ''smart device'' (controller and servo drive) designed specifically for this application, […] allowing quick customization and delivery of LEMAs with desired specifications without adversely affecting the production cost, time, and technical features. Rather than focusing on the composite design of LEMAs, in the hyperautomation approach used at Diakont, the design and engineering stage of integrated R&D begins by identifying the common components of different types of LEMAs. Furthermore, achievable sales and production volumes are regularly updated based on the new information received from the market analysis or from within the company.
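As a purely illustrative sketch of the decentralized handshake described above (the class, signal names, and force-profile values below are hypothetical, not taken from the paper or from Diakont's actual interface):

```python
import time

# Hypothetical sketch of the robot / ''smart tool'' interaction: the smart tool
# (actuator + sensors + controller + servo drive) runs the primary welding
# cycle autonomously; the robot only delivers it and waits for completion.
class SmartWeldingTool:
    def run_weld_cycle(self, force_profile):
        for force, duration in force_profile:   # commanded force timing diagram
            self.apply_force(force)             # closed loop via the force sensor
            time.sleep(duration)
        return "CYCLE_COMPLETE"                 # response consumed by the robot

    def apply_force(self, newtons):
        pass                                    # servo drive command, omitted here

tool = SmartWeldingTool()
# robot.move_to(weld_point)                    # auxiliary task of the 5/6-axis robot
status = tool.run_weld_cycle([(2000, 0.1), (3000, 0.3), (0, 0.05)])
assert status == "CYCLE_COMPLETE"              # robot may move to the next point
```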
Technical analysis of the LEMAs revealed that roller screw transmission is the key element for all types of LEMAs. Therefore, the focus was on developing a range of roller screw designs that cover the technical specifications desired for various applications. As a matter of fact, the way in which the unification of the product/production design proceeds, in connection with the new information received from the market and provided to and from the company engineering department in a digitalized and interconnected management information system, enhances the novelty of the hyperautomation approach.
Once the designs are ready, the next stage of R&D is to determine the optimal manufacturing technology to produce roller screw components with the advantages of flexibility, versatility, and performance over conventional manufacturing technology. The entire range of products that can be manufactured using the common key element is allocated to a single class of devices based on common principles. The modular design concept was implemented using common design and technological solutions (Figure 2).
As a matter of fact, the Responsive R&D addressed above means not only ''responsive to market change'' but also ''interconnected'', because through this R&D not only a product is developed, but also the process technology and the methods for the production organization are designed, supported by the results of the analysis provided by the MCOFP. The concept of Responsive R&D is novel in itself, including in R&D not only the problems and tasks of product design, but also technology, process, factory building, equipment, and processes, as well as support to automation and management decision-making processes.
Three basic unified modules, the roller screw, the rotor, and the stator, make up to 75% of the added value of the product and do not require changes during the development of a new product or product customization. The unified design includes 20 different variants of fastening and connecting elements, 4 dimension types, 6 types of feedback sensors for actuator control, and 6 external options, providing the possibility of creating more than a thousand variants of the final product. Upgrading the product with properties not provided for in the basic universal design can be carried out in the process of minimum customization (refinement) of additional parts and can be implemented in a short time with minimal cost. This approach resulted in a low cost of production owing to the mass production of basic unified modules, and it allowed the use of high-performance manufacturing technologies, such as thread whirling and circular grinding (Figure 3) [61], [62], [63], [64]. The factories […] of domestic markets and provides cost advantages for several components. Figure 3 shows the geographically distributed production and supply chain created by Diakont. One of the key tasks in the concept of ''three-level factories'' is to provide rational inventory management, which would ensure the minimum amount of goods necessary to maintain production under unstable demand conditions.
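As a quick sanity check of the variant count quoted above, multiplying only the configuration dimensions listed in the text:

```python
# Theoretical number of final-product variants implied by the unified design.
fastening_variants = 20   # fastening and connecting elements
dimension_types = 4
feedback_sensors = 6      # feedback sensor types for actuator control
external_options = 6

print(fastening_variants * dimension_types * feedback_sensors * external_options)
# 2880, comfortably above the ''more than a thousand'' stated in the text
```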
The real implementation of the ''three-level factories'' concept, where the production system is considered as a combination of 3 different logic levels, physically combined or not, inside one integrated (digitalized and networked) management and supply chain system, allows, among others, a real-time and visible presence for the customer, shortening and optimizing schedules/terms, and lower costs. This is rather novel and is an essential requirement to be fulfilled by the hyperautomation framework. […] also covers the strategic and management decision-making level (through its ACS (Accounting and Control System), where the models, described below as parts of the DTs, are performed). […]
The iEMS had three structural levels: strategic, operational, and execution (Table 3).
TABLE 3. Structural organization and main tasks of the intelligent enterprise management system.
A feedback-driven interaction between the DT of the production process and the production cost plays a key role in the complete automation of the production process (Figures 5 and 6). The Model of Evaluation of Solutions on the Organization of Production (MESOP) provides a comparative analysis of preferable organizational decisions, covering individual technologies and the overall parameters of the manufacturing system. Depending on the optimal manufacturing technology predicted by MESOP, a Model of Calculation, analysis, and Optimization of the Financial-economic Parameters of production (MCOFP) determines the number of personnel, equipment, and machines required for automation to reach a given sales volume. MCOFP also predicts fixed and variable costs and the average value of production costs using input cost data such as salaries and costs of materials and types of machinery, energy resources, and services obtained from a third party. The calculation is repeated cyclically with a specified increase in sales volume. Finally, based on the results of the model, a declining curve is formed that characterizes the average cost under specified conditions.
Autonomous production control is achieved using an iEMS that integrates the planning and forecasting system (PFS) with […] The end-to-end traceability of products in the manufacturing process is organized such that the PES is the digital shadow of the product instance in the manufacturing lifecycle phase [79]. In manufacturing, updated information can be […]
Following the identification of the primary needs related to the development of a new product and its associated business, it is critical to identify the links between several natural problems, tasks, and parameters. The next step is to quantify and qualitatively analyze the technicality and underlying financial imperatives of these interconnections. This is followed by the development of a collection of models and tools for implementing all the interconnections and computations in a digital environment, while also developing tools for early data gathering and incorporating these tools into the core processes. Solving the research/study problem was an important part of the process; thus, a large set of case data was analyzed, and then, based on the case data, a group of problem points that were more representative were selected and deeply analyzed, with the goal of creating a solution that avoids all representative problems.
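As a minimal sketch of the declining average-cost curve produced by MCOFP (the mechanism follows the text; the cost figures themselves are invented for illustration and are not Diakont data):

```python
# Average unit cost = fixed costs spread over volume + variable cost per unit,
# evaluated at cyclically increasing sales volumes, as MCOFP does.
def average_unit_cost(volume, fixed_costs, variable_cost_per_unit):
    return fixed_costs / volume + variable_cost_per_unit

FIXED = 2_000_000   # assumed: equipment, personnel, facilities per year
VARIABLE = 400      # assumed: materials, energy, services per unit

for volume in (1_000, 2_500, 10_000, 50_000):
    print(volume, round(average_unit_cost(volume, FIXED, VARIABLE), 2))
```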
Indeed, a distinctive part of the research and innovation methodology is the direct application to the entire design and manufacturing process of a critical component in diverse cyberphysical systems, the Linear Electromechanical Actuator (LEMA).
To ensure a larger and higher effect on the existing industrial ecosystem, a major feature of the innovative approach is that the outcomes of development and implementation, as well as the related business framework, are completely aligned with commonly used industrial reference specifications. Following the digital transformation impulse carried out by two major representatives, industrial digitalization and networking initiatives such as the Industrial Internet of Things and Industry 4.0, it was decided to position the hyperautomation infrastructure within the Reference Architecture Model for Industry 4.0 (RAMI4.0) (DIN SPEC 91345:2016-05) [80], [81], [82], [2]. It is important to recall here that RAMI4.0 aims to formally specify industrial assets and asset combinations, positioning them within a 3-dimensional space covering its/their position (1) […]
The hyperautomation approach described in this work has a significant positive impact on production cost and overall returns on investment; however, the benefits of this approach […] in earlier stages) and larger volumes in the long term. Taken together, these factors provide the possibility of achieving sales volumes corresponding to the production volume of 50,000 units of standard products per year. The dependence of production costs on sales volumes is shown in Figure 9 for the various scenarios. These results were obtained using the MCOFP method described in the previous section.
Figure 9 (a) shows the production cost when universal Computer Numerical Control (CNC) machines are used, and Figure 9 (b) shows a scenario in which mass customization is performed with the use of specialized CNC machines. Figure 9 (c) represents the production cost when mass customization using specialized CNC machines is achieved with a high level of automation. Figure 9 (d) shows the production cost when the concept of a 3-level factory is implemented (intra-enterprise), along with the scenario mentioned in Figure 8 (c).
Brief analysis of Figure 9 shows that from an annual production volume bigger than 1,000 units per year (identified as point A), the use of universal CNC machines becomes less preferable than the other three described alternatives. With an annual production of more than 2,500 units, the more feasible scenario is production using […]

[…] within the Supply Chain. This paper has provided the knowledge background (scientific, technical, and business) that is basically necessary for achieving sustainable hyperautomation, with a real industrial application scenario in the manufacturing of cyberphysical actuators. The hyperautomation approach outlined in this study allows efficient interconnections between various control, automation, and business functions, facilitated by the digitalization and networking of different industrial tools. Moreover, by uncovering and automating previously inaccessible data and processes, this approach also shows the unique benefit of creating a set of Digital Twins, provided by these tools and positioned within the real industrial engineering, automation, and management infrastructure inside a real industrial organization. It has also been shown that the promotion of definitive designs based on unified solutions for typical and specialized applications is an effective strategy for achieving mass customization while retaining a sufficiently high production volume. The paper presented a set of industrial-mature digital tools formally positioned within the company infrastructure, according to the DIN SPEC 91345 RAMI4.0. This digitalization approach also allows online monitoring of market changes and provides optimized feedback that can be used for responsive digital modeling of the entire production process and associated businesses, within the company and outside the company within the connected digital world represented by the Supply Chain.
At this point, it is important to reinforce the fact that the applicability, and particularly the impact, of the hyperautomation approach discussed in this paper has been validated based on real experience with hyperautomation implementation at Diakont premises. Moreover, as shown in Table 1, the authors were able to compare the properties of frequently reported hyperautomation approaches with the major characteristics of the framework provided in this study for sustainable hyperautomation, identifying the differences and highlighting the novel aspects introduced in this manuscript. On a holistic level, this paper has illustrated the key aspects of attaining long-term sustainable growth in the manufacturing of high-tech equipment by using mass customization, cutting-edge technology in the production process, and system-wide interconnected digital tools. The use of mathematical models for market forecasting, which are regularly updated by feedback from the market and customers, provides valuable knowledge on potential production volume and product cost estimates, enabling a fine balance in demand and supply. Responsive R&D, which can rapidly adapt to changes in production requirements and product features, is another central feature of the proposed approach. Mass customization is supported by a modular design approach that can provide up to 1000 variants of the product, whereas individual designs are used to produce definitive segments. Hyperautomation using digital twins of product and production processes, forecasting models, and interconnected enterprise management systems provides all-pervasive synergy across the entire business function. Although the present study focused mainly on hyperautomation in the manufac- […]

[…] To find the maximum (or minimum) of the function under the given constraints, we use the simplex method, which allows us to perform calculations rather fast even at a very large dimension of the problem.
This approach, based on a single database of available optional solutions, allows, for example:
• To choose the routes for manufacturing of product components that will provide minimum costs at maximum throughput among other available routes, considering the available production resources and infrastructure, the organization of production processes, and the level of automation;
• To calculate and optimize such parameters as the number of production staff and the number and modes of operation of machine tools, considering the available production infrastructure, ways of organization of production processes, and the level of automation;
• To compare several options for the organization and automation of production processes;
• To check the sustainability of the production system to changes in batch sizes and product customization grade.
To obtain all these solutions, we use a number of optimization problems from linear programming, united by a common set of constraints. Also, the model for evaluating solutions on the organization of production performs the function of checking the balance of production resources under conditions of the multiple-machine service mode of the production operators. In our opinion, such an addition is essential for the organization of automated production in the smart factory, since with a high degree of automation operators are no longer assigned to the machines and act as an additional restriction when scheduling the operations. No scheduling system can provide perfect schedules that would ensure accurate timing of production tasks for machines and personnel, which can lead either to machine downtime while waiting for setup, or to excessive staff and overstated production costs when operators maintain machines in multiple-machine mode. Thus, there is a problem of defining the balance of production resources.
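The following sketch shows how one of the route-selection problems listed above could be posed as a linear program. It is a hedged illustration only: the route costs, capacities, and demand figure are invented placeholder data, and scipy's general-purpose linprog solver is used in place of the authors' own simplex implementation.

```python
# Minimal route-selection LP sketch (illustrative data, not from the paper).
# Decision variables x[i] = units of the component produced via route i.
# Objective: minimize total cost subject to meeting demand within route capacities.
from scipy.optimize import linprog

cost_per_unit = [12.0, 9.5, 11.0]      # cost of each manufacturing route
route_capacity = [400, 250, 300]       # throughput limit of each route (units)
demand = 700                           # total units required

# linprog minimizes c @ x subject to A_ub @ x <= b_ub and bounds on x.
# "Produce at least `demand` units" becomes -(x1 + x2 + x3) <= -demand.
result = linprog(
    c=cost_per_unit,
    A_ub=[[-1.0, -1.0, -1.0]],
    b_ub=[-demand],
    bounds=[(0, cap) for cap in route_capacity],
    method="highs",                    # modern simplex/interior-point backend
)

if result.success:
    print("units per route:", [round(x, 1) for x in result.x])
    print("minimum total cost:", round(result.fun, 2))
```

The solution fills the cheapest routes to capacity first (routes 2 and 3 here) and covers the remainder with the most expensive one, which is exactly the cost-at-maximum-throughput trade-off the list above describes.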
The developed model of checking the balance of production resources for modeling possible losses uses an approach based on Markov's theorem. A Markov process with N discrete states is modeled for the considered production area. One of the states corresponds to the need for intervention of the operator into the process, while all the others correspond to the maintenance of one of the machine units and a combination of machines in the service queue. A system of Kolmogorov's equations is compiled: […]
| 2022-09-15T15:54:25.864Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "cd84a09d2744067e1884392637d59ab3a48debe4",
"oa_license": "CCBY",
"oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/6514899/09885179.pdf",
"oa_status": "GOLD",
"pdf_src": "IEEE",
"pdf_hash": "eab29bea8793a5d95dc713f1c79c3149bbc95822",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
259911459 | pes2o/s2orc | v3-fos-license | A study of UG 2 pillar strength using a new pillar database
Synopsis
A recent experimental pillar extraction project at a UG2 bord-and-pillar mine presented a unique opportunity to compile a new pillar database. Currently, the South African hard rock bord-and-pillar mines are designed using the Hedley and Grant formula with a modified K-value. This empirically derived formula was developed for uranium mines in the Elliot Lake district of Canada. The use of this formula for the design of pillars in South Africa is questionable. Very few pillar failures have nevertheless been observed, and its current calibrations for the various reef types are possibly too conservative. A new UG2 pillar database of 66 pillars, of which seven are classified as failed, was compiled by the authors. This enabled a revised 'first-order' calibration of the K-value for the Hedley and Grant formula. The new estimated value for the UG2 is K = 75 MPa. This gives a pillar strength that is more conservative than the PlatMine formula. This work should nevertheless be considered as only a preliminary calibration as the database was small. Further work is also required to determine whether the exponents in the formula for the width and height parameters are appropriate for UG2 pillars.
Introduction
Empirically derived pillar strength formulae are commonly used in the global coal and hard rock mining industries. The hallmark coal pillar strength formula proposed by Salamon and Munro (1967) has been used in the design of many South African collieries. The success of this empirically derived formula can be attributed to the size of the original database used, and the fact that all the pillars included are from South African mines. The database included 125 pillar cases of which 27 were failed. The updated South African coal pillar database, used by van der Merwe and Mathey (2013) included 86 failed pillar cases and 337 intact pillar cases. As a more recent development, van der Merwe (2019) noted a major shortcoming of the statistical back-analysis of coal pillar strength as it relies on the 'as-mined' pillar dimensions and ignores time-related pillar scaling with subsequent reduction in pillar width. By considering this, an equation for pillar strength which predicts significantly greater pillar strength than the previous statistical analyses was derived. Van der Merwe's paper gives valuable databases of failed and intact cases. An important aspect related to coal is that the pillar shapes and layouts of the bord-and-pillar mines in South Africa do not differ greatly, and this facilitates the development of empirically-derived pillar strength formulae.
In contrast, the Hedley and Grant (1972) pillar strength formula, which is still being used for the design of pillars in the hard rock mining industry of South Africa, was derived based on a data-set of only 28 pillars. This included only three crushed pillars and two partially crushed pillars. The source of this database was quartzite pillars in the Elliot Lake district of Canada and it did not include any South African pillars. The use of this formula in the South African mining industry is therefore questionable. It should be noted that both the Hedley and Grant and Salamon equations assume a power law strength formulation; the motivation for this is given below.
Following the work by Hedley and Grant (1972), there have been several other attempts to develop hard rock pillar strength formulae. These are given by Martin and Maybee (2000) and are listed in Table I. These formulae were developed based on observed pillar failures. Note the small number of pillars in most of the databases used. It is clear that the formulae take the form of either a power- or linear-type equation. These equations have been used to predict the pillar strength for a wide range of pillar shapes and rock mass strengths.
[Table I entries: [3] Hedley and Grant (1972); [4] Von Kimmelmann (1984); [5] Krauland and Soder (1987); [6] Potvin, Hudyma, and Miller (1989); [7] Sjöberg (1992); [8] Lunder and Pakalnis (1997)]

Some discussion on the original adoption of the power-law formula for the coal mining industry is insightful. The general form of the equation is given as

σp = K w^α / h^β

where σp (MPa) is the pillar strength, K (MPa) is the strength of a unit volume of coal, w is the width of the pillar, h is the mining height, and α and β are exponents. The selection of a power-law equation was motivated by Salamon and Munro (1967) as follows: 'The strength of a pillar depends on the strength of the material of which it is composed, its volume and its shape. Presumably, the effect of shape is due to the constraint imposed on the pillar by the roof and floor through friction or cohesion. The volume and shape of square pillars are completely defined by their width (w) and height (h). The most commonly-occurring pillar strength formula in the literature is a simple power function composed of these variables.' The emphasis on square pillars in the quote above is evident and its application to irregular-shaped pillars is uncertain. Work was conducted in the PlatMine research programme to develop local pillar strength formulae for the platinum industry. Watson et al. (2008) compiled a database of 179 Merensky Reef pillars, of which 109 were stable. In 2020, a new UG2 pillar strength formula was developed for the platinum industry based on a larger data-set of 167 UG2 pillars. The conventional crush pillar layouts of the mines (Figure 1), from which these data-sets were compiled, differ from the shallow bord-and-pillar mines in the Bushveld Complex. Most of the pillar width:height ratios in the PlatMine database ranged between 1.5 and 4, and the heights were in the limited range of 1.5 m to 2 m. It should also be noted that the crush pillars in the conventional layouts are typically irregular in size, and this may cause difficulties when calibrating a strength formula for an assumed square pillar.
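To make the power-law form concrete, the sketch below evaluates the Hedley and Grant version of the formula, for which the exponents α = 0.5 and β = 0.75 are commonly quoted, at the two K-values discussed later in the paper (the original K = 35 MPa and the recalibrated K = 75 MPa). The pillar dimensions are arbitrary illustrative assumptions, and the exponents should be checked against the original reference before design use.

```python
# Power-law pillar strength, sigma_p = K * w**alpha / h**beta.
# Hedley and Grant is commonly quoted with alpha = 0.5, beta = 0.75.

def pillar_strength(k_mpa, width_m, height_m, alpha=0.5, beta=0.75):
    """Pillar strength (MPa) for an effective width and mining height in metres."""
    return k_mpa * width_m**alpha / height_m**beta

width, height = 6.0, 2.5  # illustrative bord-and-pillar dimensions (m)
for k in (35.0, 75.0):    # original vs. recalibrated K-value (MPa)
    strength = pillar_strength(k, width, height)
    print(f"K = {k:.0f} MPa -> strength = {strength:.1f} MPa "
          f"(w/h = {width / height:.1f})")
```

For this example pillar the two calibrations differ by the ratio 75/35, roughly doubling the predicted strength, which is why the choice of K dominates the safety-factor calculation for a given layout.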
An experimental pillar extraction project at a mine in the eastern Bushveld Complex offered a unique opportunity to the authors to compile a new database of UG2 pillars in a mechanized bord-and-pillar mine. This database enabled an improved calibration of the Hedley and Grant formula. This can be used in areas of similar geotechnical conditions and layouts to optimize pillar design. As a cautionary note, however, this new calibration should be carefully tested in trial sections with suitable instrumentation and monitoring before it is adopted. The database is limited in size, and it should be expanded in future to verify its applicability.
Observations of pillar condition in an experimental pillar mining area
The experimental pillar mining area is illustrated in Figure 2. The site is described in […]. The mine established this experimental section in an attempt to determine the strength of UG2 chromitite pillars. A central pillar (pillar A in Figure 2) was instrumented, and the surrounding pillars were progressively mined until failure of the central pillar occurred. The central pillar showed signs of limited scaling, while the small neighbouring pillars were severely fractured. A peak average pillar stress (APS) value of 160 MPa acting on the pillar was inferred (although it should be noted that the stress measurements were done in the hangingwall above the central pillar and not directly in the pillar). Further details on the measurements recorded are described in […]. As can be seen in Figure 2, the experiment resulted in pillars of different sizes, and the observations indicated that these are at varying degrees of stability. This is particularly valuable for estimating pillar strength, as the pillars are all in the same area and therefore in the same geotechnical environment. Napier and Malan (2021) simulated this area using a displacement discontinuity code and a limit equilibrium failure model. They found it difficult to reconcile the simulated APS of the central pillar with the peak APS value presented in […]. Napier and Malan (2021) simulated a peak APS in the order of 40-50 MPa for simulations where the central pillar failed, and 60-70 MPa for simulations where the pillar was still intact. The reason for the large discrepancy between the measured peak APS and the simulated APS is not known. An aspect that should be considered with these experiments is that strength variability will be encountered when conducting underground pillar strength experiments. As an example, Figure 3 illustrates the data collected by Cook, Hodgson, and Hojem (1971) in an attempt to verify the Salamon power-law strength formula. These were actual underground strength tests of large coal pillars loaded to failure. The significant scatter of the data is striking. It is reasonable to assume that similar variability will be found for the in-situ testing of hard rock pillars. This is concerning, as a single underground hard rock pillar reduction test may not be enough to verify the applicability of any formula. This emphasizes the need to build large site-specific pillar databases to obtain improved estimates of pillar strength.
Site observations
In June 2021 a number of underground visits were conducted to assess the pillar conditions at the mine. A total of 66 pillars in six different areas (Figure 4) were selected for detailed observations and for populating the database. The pillars selected for the database typically fell into the following categories:
➤ Pillars with dimensions smaller than the design specifications
➤ Pillars in the experimental project area
➤ Pillars with 'anomalous' behaviour
➤ 'Normal' pillars at different depths.
Only limited observations were recorded in the old areas of the mine owing to the following reason. Pillars with 'anomalous' behaviour or with dimensions smaller than the design specifications are rehabilitated by the mine using shotcrete and a double row of resin bolts spaced 1 m × 1 m. Apart from examining the integrity of the shotcrete lining, visual observations of these pillars were not useful for making meaningful statements about pillar stability. The following information was collected for each pillar:
➤ Dimensions of the pillar
➤ Photographs of each side of the pillar
➤ Comments on geological structures in or nearby the pillar
➤ The pillar classification based on Figure 5.
The pillar classification proposed by Esterhuizen et al. (2006) was used to categorize the pillars (Figure 5). They based this classification on the systems developed by Lane, Yanske, and Roberts (1999), Siefert et al. (2003), Krauland and Soder (1987), Lunder (1994), and Pritchard and Hedley (1993). The similarity of the classification systems used for the different databases in the literature enables their use in a larger, consolidated database. Esterhuizen et al. (2006) noted that pillars with a classification of 3 and below are typically made safe with regular scaling procedures and may require occasional rib bolting or screen. Pillars classified as 4 and above are generally barricaded off and require extensive support systems to preserve the integrity of the pillars. Regarding the current study, the classification was modified and pillars 1-2 were defined as stable, 3 as unstable, and 4-5 as failed. This slight modification was used as it was considered a more appropriate ranking of pillar behaviour at the UG2 mine. Pillars classified as failed were deemed to no longer function as stable pillars with an intact core which can carry the required tributary area load. Pillars in the unstable and stable categories used in this study were considered functional pillars that still carry their full load. It should be noted that this is a subjective measure based on the extent of fracturing and visual condition recorded for a pillar. This is a potential flaw in all studies, including the previous PlatMine study, that attempt to classify pillars based on visual observations alone. Measurement techniques to objectively determine if the core of the pillar is still intact, and the magnitude of the load carried, are unfortunately too expensive and time-consuming.
The experimental pillar project area was the only area in the mine where pillars in each of the categories could be found. It should be noted that all the failed and unstable pillars in the database are from the 25 pillars in the experimental pillar project area. A total of 7 of the 25 pillars in the area were classified as failed and are circled in red in Figure 6. An example of a Class 5 failed pillar is shown in Figure 7. A total of 5 of the 25 pillars were classified as unstable and are circled in yellow in Figure 6. Based on the classification system, the centre pillar (Figure 8) was classified as a Class 3 unstable pillar. The remaining pillars in the experimental project area were classified as Class 1-2 stable pillars and are circled in green in Figure 6. An example of a pillar in this class is shown in Figure 9.
The other pillars in the database are classified as either stable or 'geologically disturbed' (Class G) pillars. Class G pillars were recorded as a separate class as the failure of these pillars is driven by multiple joint sets ( Figure 10a) and proximity to reef rolls and potholes. Figure 10b shows the type of scaling observed for these pillars. Figure 11 summarizes the different pillar classifications in the database for various width to height ratios (W/H) and the simulated APS values (described in the following section). The surface topography around the mine is mountainous, making it difficult to determine the precise mining depth for the different areas. The overburden was nevertheless assumed to be flat above each of the modelled sections to simplify the modelling. The depths of the pillars in this database were calculated by determining the centre of mass of the overburden above the modelled areas. Deswick mine planning software was used to calculate the centre of mass using the surface contour mapping and excavation layers (Deswick, 2021). The depth distribution of the pillars in the database is shown in Figure 12.
The effective widths for the pillars were calculated using the 'perimeter rule' method proposed by Wagner (1974). Malan and Napier (2011) highlight the problems associated with the use of this method. The perimeter rule is nevertheless widely adopted in the industry. Further work is currently under way to determine the applicability of the rule (Maritz and Malan, 2023). The pillar database described in this paper should be re-examined in future if improved methods are developed to cater for pillars that are not square.
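The perimeter rule referred to above is commonly stated as effective width = 4A/C, where A is the pillar plan area and C its circumference; the sketch below applies that form to an arbitrary rectangular pillar. The 4A/C expression is the widely quoted form of Wagner's rule, but readers should confirm it against the original reference, and the pillar dimensions here are purely illustrative.

```python
# Effective pillar width by the perimeter rule, w_e = 4 * area / circumference.
# For a square pillar this reduces to the side length; for elongated pillars it
# down-weights the long dimension. Illustrative rectangular pillar only.

def effective_width(area_m2, circumference_m):
    return 4.0 * area_m2 / circumference_m

length, width = 12.0, 5.0                      # plan dimensions (m)
area = length * width
circumference = 2.0 * (length + width)

w_e = effective_width(area, circumference)
print(f"effective width = {w_e:.2f} m (vs. minimum dimension {width:.1f} m)")
```

For this 12 m × 5 m pillar the rule gives an effective width of about 7.1 m, larger than the 5 m minimum dimension, which is exactly the behaviour for non-square pillars that the cited critique by Malan and Napier (2011) examines.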
The distribution of the effective width to height ratios of the pillars is shown in Figure 13. It is noteworthy that all pillars in the database with an effective width to height ratio smaller than approximately 1.5 were classified as either failed or unstable.
Numerical simulations to estimate pillar stress
A database of pillar behaviour is of value only if the stress acting on each pillar can be estimated. For the earlier studies on pillar strength, such as Salamon and Munro (1967) and Hedley and Grant (1972), the pillar stress was estimated using tributary area theory (TAT). TAT is a conservative approach as it assumes a regular layout, with all pillars of equal size, and with the layout continuing to infinity in all directions. The effect of large pillars and abutments is not considered (see Napier and Malan, 2011). A more accurate approach was used for this study by simulating the average pillar stress (APS) using numerical modelling techniques. The TEXAN code used in this study is a displacement discontinuity boundary element code that was specifically developed to simulate bord-and-pillar layouts. It incorporates the use of triangular boundary elements to enable an accurate representation of irregular-shaped pillars and layouts (Napier and Malan, 2007; Esterhuyse and Malan, 2018). The code can explicitly simulate small pillars and the crushing of the pillars using a limit equilibrium model. The use of TEXAN to simulate small crush pillars is described in du Plessis and Malan (2018). Owing to the restrictions on the number of elements that can be practically solved in TEXAN (270 000 elements in the version used by the authors), smaller areas were simulated in detail for this study rather than the entire mine. To simplify the digitizing of the outlines and the meshing procedure, the pillar outlines were approximated by using straight line segments.
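For comparison with the numerically simulated APS values, the conservative TAT estimate mentioned above can be written for a regular square layout as APS = σv(w + b)²/w², with pillar width w and bord width b. In the sketch below the overburden density follows the 3100 kg/m³ value quoted later in the paper, while the depth and layout dimensions are illustrative assumptions.

```python
# Tributary area theory (TAT) estimate of average pillar stress for a regular
# square bord-and-pillar layout: APS = sigma_v * (w + b)**2 / w**2.
# Illustrative geometry; density follows the 3100 kg/m^3 quoted in the text.

G = 9.81            # gravitational acceleration (m/s^2)
RHO = 3100.0        # overburden density (kg/m^3)

def tat_aps_mpa(depth_m, pillar_width_m, bord_width_m):
    sigma_v = RHO * G * depth_m / 1.0e6          # virgin vertical stress (MPa)
    extraction_factor = ((pillar_width_m + bord_width_m) / pillar_width_m) ** 2
    return sigma_v * extraction_factor

aps = tat_aps_mpa(depth_m=120.0, pillar_width_m=6.0, bord_width_m=6.0)
print(f"TAT average pillar stress ~ {aps:.1f} MPa")
```

Because TAT ignores abutments and oversized pillars, an estimate of this kind will generally exceed the APS computed by a boundary element code such as TEXAN for the same pillar, which is why the authors prefer the numerical values for calibration.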
The mined areas were covered using a triangular mesh. In terms of element sizes, the centroids of adjacent triangular elements are spaced approximately 1 m apart, but this varied from area to area. Where necessary, elements were modelled with the centroids spaced approximately 0.5 m apart to accurately simulate the APS values of the smaller pillars. This spacing is referred to as the 'element size' in this paper, but this is not strictly correct as the meshes consisted of triangular elements. The pillars of interest also had to be meshed for the APS calculations. In displacement discontinuity codes, any area not covered by elements is considered as solid material and therefore not all the pillars had to be meshed to get an accurate solution. Also note that element size can affect the simulated pillar APS values; this is described in Napier and Malan (2011).
The six areas of the mine visited (Figure 3) were simulated using TEXAN. The depths of the different areas are given in Table II. The overburden density was assumed to be 3100 kg/m³. This requires further verification. The other model parameters were Young's modulus = 70 GPa and Poisson's ratio = 0.2. The elastic parameters were only estimated values for the particular rock mass, but are nevertheless considered acceptable. Young's modulus does not affect pillar APS values unless total closure occurs. The pillars were simulated as 'rigid' pillars that were not allowed to deform. Figure 14 illustrates the mesh used to simulate one of the areas, namely the experimental pillar project area. Approximately 230 000 elements were used to simulate the 500 m × 500 m layout. The APS values for all the pillars in the database were calculated using these simulations. Figure 15 illustrates the mesh used for one particular pillar within this model to illustrate the element size.
Rock strength
A geotechnical testing programme on the rock types in the stratigraphy of the UG2 reef was conducted by the mine. […] As Bieniawski and van Heerden (1975) noted, it is difficult to determine the actual strength of the rock mass material in the pillars. The seemingly large variation in UCS values may make it difficult to calibrate pillar strength formulae. The possible variation in pillar strength in different areas of a mine has been largely ignored to date. This needs to be studied in future, and it is also recommended that additional laboratory testing be done on the UG2 material.
Pillar strength estimation
The Class G pillars were not included in the studies to estimate the pillar strength. The pillar data-set was also simplified by adopting the three categories shown in Figure 16. An analysis of the data using statistical methods was attempted, but this was not successful. The maximum likelihood estimation (MLE) used by Salamon and Munro (1967) and the overlap reduction technique used by van der Merwe (2003) were not suitable for this study owing to the limited size of the database. Pillar failure also occurred in one area only where the mining height was constant. Furthermore, without variability in the height for the failed pillars, there is no accurate statistical method to determine the appropriate exponent for the height parameter. The pillar data-set was initially evaluated using the common hard-rock pillar strength formulae (Table IV). Figure 17 illustrates the formulae with adjusted K-values by using the new pillar database. Ideally, pillars in an unstable condition should be close to the failure envelopes with the red (failed) pillars above the line and green (stable) pillars below it. It is clear from the figure that the original Hedley and Grant calibration with K = 35 MPa is too conservative for this particular mine. Notably the Hedley and Grant (1972), Bieniawski and van Heerden (1975), and Obert and Duval (1967) formulae with an adjusted K = 65 MPa provide a good approximation of the pillar strength. This K-value was an arbitrary value of approximately two-thirds of the UCS of the pillar material.
A preliminary calibration of the Hedley and Grant (1972) formula was done by the authors by adjusting the K-value to obtain a reasonable fit to the data. The Hedley and Grant (1972) formula is well established in the design of hard rock pillars in South Africa and, as no suitable alternative was available to the authors, this formula was recalibrated using this new data-set. Future work will need to be conducted to verify the applicability of the exponents for the width and height parameters in this formula.
The estimated value of K was 75 MPa. The fitted curve is illustrated in Figure 18. This calibration seems to give a reasonable failure envelope to separate the failed and stable pillars with all the intact pillars below the envelope. Note that this line is only a first trial-and-error estimate done by hand. Additional data will be required to verify the most appropriate failure envelope as there is one failed pillar data-point far below the line. This may be an outlier, but further work needs to be done to refine this calibration with a larger database. Of particular significance, however, is that this calibration is substantially more conservative than the PlatMine formula and all the failed pillars are below the PlatMine strength envelope. This is one of the important findings of this study.
Conclusions
The compilation of a new UG2 pillar database for a bord-and-pillar mine in the eastern Bushveld Complex will assist with the development of locally derived and calibrated pillar strength formulae. This study illustrated that care should be exercised when using the traditional statistical methods employed to evaluate pillar databases. Owing to the limited variability in mining height in the database, it is difficult to determine the effect of pillar height on the strength of the pillar and to calibrate the exponent of the height parameter in a power-law strength equation. Analysis of the database indicates that the Hedley and Grant formula with a K-value of 35 MPa is too conservative for this particular mine. The 'first-order' calibration indicates that a value of K = 75 MPa may be more appropriate. Additional data will nevertheless be required to verify this value. Further work is also required to determine if the Hedley and Grant formula provides an accurate reflection of the changes in pillar strength for pillars at different widths and heights. Of interest is that this calibration is substantially more conservative than the PlatMine formula, and all the failed pillars in the new database are below the PlatMine strength envelope. As a further cautionary note, this new calibration should be tested in trial sections with suitable instrumentation and monitoring before it is adopted at any mine. | 2023-07-16T15:13:15.140Z | 2023-07-13T00:00:00.000 | {
"year": 2023,
"sha1": "769ae897f5a493b7be40e293fd65ec185b66a34d",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.org.za/pdf/jsaimm/v123n5/09.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "24ef3c3670d82b080d770fd0d6e9dd21f0e23a80",
"s2fieldsofstudy": [
"Engineering",
"Geology"
],
"extfieldsofstudy": []
} |
256257861 | pes2o/s2orc | v3-fos-license | Primer terminal ribonucleotide alters the active site dynamics of DNA polymerase η and reduces DNA synthesis fidelity
DNA polymerases catalyze DNA synthesis with high efficiency, which is essential for all life. Extensive kinetic and structural efforts have been executed in exploring mechanisms of DNA polymerases, surrounding their kinetic pathway, catalytic mechanisms, and factors that dictate polymerase fidelity. Recent time-resolved crystallography studies on DNA polymerase η (Pol η) and β have revealed essential transient events during the DNA synthesis reaction, such as mechanisms of primer deprotonation, separated roles of the three metal ions, and conformational changes that disfavor incorporation of the incorrect substrate. DNA-embedded ribonucleotides (rNs) are the most common lesion on DNA and a major threat to genome integrity. While kinetics of rN incorporation has been explored and structural studies have revealed that DNA polymerases have a steric gate that destabilizes ribonucleotide triphosphate binding, the mechanism of extension upon rN addition remains poorly characterized. Using steady-state kinetics, static and time-resolved X-ray crystallography with Pol η as a model system, we showed that the extra hydroxyl group on the primer terminus does alter the dynamics of the polymerase active site as well as the catalysis and fidelity of DNA synthesis. During rN extension, Pol η error incorporation efficiency increases significantly across different sequence contexts. Finally, our systematic structural studies suggest that the rN at the primer end improves primer alignment and reduces barriers in C2′-endo to C3′-endo sugar conformational change. Overall, our work provides further mechanistic insights into the effects of rN incorporation on DNA synthesis.
DNA polymerases catalyze the essential process of DNA synthesis during DNA replication and DNA repair. Although distinct in their sequences and structures and categorized into seven enzymatic families, DNA polymerases contain similar active sites, follow similar kinetic pathways, and employ similar catalytic mechanisms (1, 2). They adopt a right-hand architecture with the active site in the palm domain, the thumb domain that interacts with the primer:template DNA duplex, and the finger domain that interacts with the incoming nucleotide (Fig. 1A). Conserved acidic residues line the active site, and once DNA and the incoming nucleotide bind, replicative polymerases (A-, B-, C-, D-, and RT-family) and some X-family polymerases (β and λ) exhibit large-scale finger domain conformational changes, while Y-family polymerases and some X-family polymerases do not. Following finger domain movement, multiple metal ions are recruited in promoting DNA synthesis (Fig. 1C) (1, 3-6). The B-site metal ion (Me²⁺B) is associated with the triphosphate motif of the incoming deoxynucleotide triphosphate (dNTP) and stabilizes its binding; the A-site metal ion (Me²⁺A) lies between the primer end and the incoming dNTP and aligns the primer 3′-OH with the substrate α-phosphate to promote primer 3′-OH deprotonation and nucleophilic attack; following the binding of dNTP and the Me²⁺A and Me²⁺B, the C-site metal ion (Me²⁺C) binds between the α- and β-phosphates on the other side of the active site and drives α-β-phosphate bond breakage. Following product formation in A-, B-, and some X-family DNA polymerases (7-14), the primer terminus sugar pucker remains in a C3′-endo conformation to avoid steric clashes. For Y-family polymerases like DNA polymerase η (Pol η) (15), the primer terminus sugar pucker changes from a C3′-endo conformation to a C2′-endo conformation to avoid steric clashes with the nonbridging oxygen on the incoming nucleotide. Concurrently, the metal ions and pyrophosphate dissociate from the polymerase active site, and the newly synthesized primer end translocates out of the nucleotide insertion site for the next round of dNTP incorporation.
Genomic DNA constantly faces endogenous and environmental assaults. Although there exist proficient repair pathways, polymerases inevitably face DNA lesions while traveling along DNA. Such lesions may create bulky obstacles or alter base-pairing, pi-stacking, and the DNA backbone, promoting polymerase stalling and replication stress (16). Eukaryotic cells contain repair and translesion polymerases that can bypass various lesions (2). Different translesion mechanisms are used to bypass different lesions with varying chemical properties (2). DNA-embedded ribonucleotides (rNs) are the most common lesion and pose major threats to genome integrity (17, 18). A vast majority of rNs are incorporated during DNA replication by DNA polymerases δ and ε (19, 20). In addition, rNs are incorporated by Pol α as primers during Okazaki fragment formation (21). During nonhomologous end joining or base excision repair, rN incorporation by X-family polymerases facilitates downstream ligation (22, 23). Having an OH group on the 2′ carbon (Fig. 2A), DNA-embedded rNs can lead to nicks on the phosphate backbone and the accumulation of nonsynonymous mutations (20, 24). Although rN incorporation does not significantly affect the processivity of Pol η, Pol α, Pol δ, and Pol ε (25, 26), rNs, if left incorporated on the nuclear or mitochondrial template strand, can lead to error-prone impairment and stalling of the replication and transcription machinery (26-29). Because of their lethal nature, the cell has evolved two specific pathways to remove DNA-embedded rNs. The first involves ribonuclease H2 (RNaseH2), which nicks at the rN site for error-free ribonucleotide excision repair (30, 31). The second involves topoisomerase I, which can lead to small DNA deletions (24, 32-34). In addition, mismatch repair has been implicated as an alternative pathway in rN removal (35), but the mechanism of rN recognition is unclear.
The structural basis of rN incorporation by DNA polymerases is well understood. A steric gate for discriminating the 2′-OH group of rNs was first proposed in 1995 and 1997 (36, 37). In A-family (38), B-family (39, 40), and Y-family polymerases (41), a steric gate formed by tryptophan and phenylalanine residues sterically clashes with the hydroxyl group on the 2′ carbon of ribonucleotide triphosphates (rNTPs) and prevents rNTP binding. As a consequence, the A-family Klenow fragment polymerase, B-family RB69, and Y-family Pol η discriminate against rNTP incorporation by 3400-fold (42), 6400-fold (39), and 770- to 3400-fold (25), respectively. X-family polymerases such as DNA polymerase μ, β, and λ contain only a peptide backbone in place of a steric gate. With a tyrosine backbone in place of this steric gate, Pol β and Pol λ discriminate against rNTPs by 2000- to 8200-fold (43) and 4000-fold (44), respectively. On the other hand, Pol μ, which contains a glycine backbone instead, prefers to incorporate dNTP over rNTP by only 2-fold (45). During DNA replication, if a rN does get incorporated, DNA synthesis extending from the rN can still occur because the rN contains a functional 3′-OH group. Recently, it was revealed that the catalytic rate of DNA polymerase ε drops 3300-fold during rN extension compared to deoxynucleotide (dN) extension (46), hinting at the involvement of translesion polymerases in rN extension. Kinetic analysis of Pol β revealed insignificant changes in catalytic efficiency (kcat/KM) between dN and rN extension (43). Structural studies of DNA polymerase λ showed minimal differences when the 3′-end of the primer was a dN versus a rN during correct nucleotide incorporation (47). Despite such efforts, whether and how the 2′-OH at the primer end affects the structure and dynamics of the active site, as well as polymerase catalysis and fidelity, have not been fully explored.
Pol η is a Y-family translesion polymerase responsible for bypassing cyclobutane pyrimidine dimers (48, 49). Additionally, Pol η has been implicated in translesion DNA synthesis against a variety of lesions (2). People with mutations in the Pol η gene develop a predisposition for skin cancer and xeroderma pigmentosum (50, 51). Furthermore, Pol η widely participates during lagging strand synthesis (52), has the ability to incorporate rNs, and exhibits reverse transcriptase activity on RNA:DNA duplexes, although the biological role of the latter remains unclear (53). Pol η has been used as a model system for investigating mechanisms of polymerase catalysis. Kinetic studies have revealed that Pol η follows a similar kinetic pathway to other polymerases (54). The reaction process of Pol η was captured at atomic resolution with recent time-resolved crystallography (3-5, 55-57). By tracking the DNA synthesis by Pol η in crystallo, we have shown that the Me²⁺C binds between the α- and β-phosphates on the opposite side of Me²⁺A and Me²⁺B and plays an essential role in driving α-β-phosphate bond breakage. For Pol η, the primer terminus sugar pucker changes from a C3′-endo conformation to a C2′-endo conformation to avoid steric clashes with the nonbridging oxygen on the incoming nucleotide (15). In addition, primer alignment by Me²⁺A is perturbed during misincorporation and contributes to intrinsic polymerase fidelity (5). Moreover, structural snapshots of Pol η bypassing various lesions such as cyclobutane pyrimidine dimers, 8,5′-cyclo-2′-deoxyadenosine, phenanthriplatin, and cisplatin were captured for illustrating mechanisms of translesion synthesis (49, 58-60). Bypassing a cyclobutane pyrimidine dimer and cisplatin was promoted by finger domain movement that helped minimize DNA changes in pi-stacking and alignment. On the other hand, perturbations in primer-substrate alignment prevented Pol η-mediated bypass of 8,5′-cyclo-2′-deoxyadenosine and phenanthriplatin on the DNA template. Thus, we sought to use the Pol η system to investigate the consequences of rN extension at atomic resolution.
Here, we present biochemical and structural studies of Pol η extending rNs. Steady-state kinetic data on correct and incorrect single-rN extension suggest that Pol η can extend rNs with high efficiency. Interestingly, the rN primer end significantly decreases substrate discrimination. Corresponding crystal structures of Pol η complexed with both Mg²⁺ and Mn²⁺ and single rN-primed DNA substrate suggest that having a rN at the primer terminus stabilizes it in a productive conformation for nucleophilic attack. Furthermore, we compare the misincorporation process extending from ribose uridine (rU)- and deoxyribose thymine (dT)-primed DNA substrate with time-resolved crystallography. The results further confirm that the decreased Pol η fidelity during rN extension is due to the stabilization of the active aligned conformation during nucleophilic attack.
Kinetics and misincorporation efficiencies of rN extension by pol η
During the DNA synthesis reaction, the primer 3′-OH aligns with the α-phosphate to initiate the nucleophilic attack. It was recently revealed that primer 3′-OH alignment promoted by Me²⁺A is the key step in substrate discrimination (5, 61). The distance between […]

[Figure 2 caption residue: … the respective catalytic efficiencies represents a ratio of the efficiencies and is a measure of discrimination. Data are generated from Table S1. dGTP, deoxyribose guanine triphosphate; Pol η, DNA polymerase η.]
Interestingly, Gregory et al. captured the rN primer terminus in the aligned conformation in the absence of the Me²⁺A (57), suggesting that the 2′-OH group alters the conformation of the sugar ring and possibly facilitates the nucleophilic attack. We thus hypothesized that a rN at the 3′-primer end may promote primer 3′-OH alignment and stimulate incorrect nucleotide incorporation. Steady-state kinetic assays of Pol η with native and single-rN-primed DNA substrate were conducted to detect changes in misincorporation efficiencies. The correct nucleotide incorporation efficiency during rN extension was 1.3- to 2-fold lower than during dN extension (Fig. 2B and Table S1). However, misincorporation efficiency from a rN primer was enhanced by over 10-fold compared to dN extension. Previous studies suggested that the misincorporation efficiencies of Pol η are sequence dependent and decrease when extending from deoxyribose adenine (dA) and dT, also known as the WA (W = A or T) motif (62). We thus also examined DNA substrates with different terminal nucleotides. As previously reported, Pol η catalyzed DNA synthesis with different misincorporation efficiencies on substrates with different primer termini, with deoxyribose cytosine (dC) as the highest and dA/dT as the lowest. For all sequence contexts, substrate discrimination dropped over 10-fold with a rN primer end. Notably, for rU extension, correct incorporation was only twice as efficient as misincorporation during rN extension.
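As a quick illustration of how the discrimination values above are derived from steady-state parameters, the snippet below computes the catalytic efficiency (kcat/KM) for a correct and an incorrect incorporation and takes their ratio. The kcat and KM numbers are placeholders chosen for illustration, not values from Table S1.

```python
# Substrate discrimination from steady-state kinetics (placeholder numbers).
# Efficiency = kcat / KM; discrimination = efficiency(correct) / efficiency(incorrect).

def efficiency(kcat_per_s, km_um):
    return kcat_per_s / km_um          # units: uM^-1 s^-1

correct = efficiency(kcat_per_s=1.2, km_um=5.0)      # e.g., dATP opposite dT
incorrect = efficiency(kcat_per_s=0.08, km_um=40.0)  # e.g., dGTP opposite dT

discrimination = correct / incorrect
print(f"correct efficiency   = {correct:.3f} uM^-1 s^-1")
print(f"incorrect efficiency = {incorrect:.4f} uM^-1 s^-1")
print(f"discrimination (fold) = {discrimination:.0f}")
```

A more than 10-fold drop in this ratio on a rN-primed substrate, as reported above, can come from either an increased incorrect-substrate efficiency or a decreased correct-substrate efficiency; the data in this paper attribute most of it to stimulated misincorporation.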
Misincorporation efficiencies of S113A pol η

Previous crystal structures of Pol η showed that the primer is coordinated in the down aligned conformation by serine 113 in the active site before Me²⁺A binding (Fig. 3A) (3, 57). This widely conserved serine among Y-family polymerases is important for primer alignment. We investigated whether a S113A Pol η mutant would perturb primer alignment and alter polymerase extension. With a dN primer end, the S113A mutant catalyzed DNA synthesis with 6-fold lower KM and 3-fold lower kcat compared to the WT (Table S2), consistent with previous studies (3). Misincorporation of deoxyribose guanine triphosphate (dGTP) over a dT template affected both KM and kcat, resulting in a 63-fold drop in catalytic efficiency compared to correct dATP incorporation over dT (Table S2). A ribose adenine (rA)-ended primer increased the catalytic efficiency of correct dATP incorporation around 3-fold but stimulated incorrect dGTP incorporation around 16-fold, resulting in a 5-fold change in misincorporation efficiency. The stimulation of the misincorporation by the rA-ended primer is mainly due to the elevated kcat (Fig. 3B and Table S2). Our kinetic studies were consistent with previous studies, where rA at the primer terminus overrides the effect of S113 to help align the primer for nucleophilic attack. The results further confirmed the significant role of primer alignment in fidelity control and the conformation selection of the rN primer end.
Binary complex of pol η with dN and rN ended primer
To test how the rN primer end would affect primer alignment, we first obtained crystals of Pol η in the absence of bound incoming nucleotide. We captured the binary state of Pol η complexed with dT-ended DNA at 1.75 Å resolution and rU-ended DNA at 2.1 Å resolution (Fig. 4). Both structures were similar to the previous ternary complex (RMSD 0.2 and 0.3, respectively) and binary complex (RMSD 0.3 and 0.4, respectively), confirming that there are no significant conformational changes during incoming nucleotide binding (63). The dT and rU structures are almost identical. In addition, the sugar ring conformations were the same. Compared to the ternary complex, the sugar ring of both primer termini in the binary complex is in a C2′-endo geometry. The 3′-OH group exists in the misaligned up conformation, lying 4.4 Å and 3.5 Å from the substrate phosphate relative to the aligned 3′-OH group and stabilized by R61 through hydrogen bonds in the dT and rU structures, respectively. These binary structures suggested that the primer 3′-end with either dN or rN is not aligned in the absence of the incoming dNTP and Me²⁺.

[Figure 3 caption: Mechanisms of S113A polymerase η catalysis and misincorporation. A, WT Pol η ground state with the dT primer terminus close to S113 (PDB ID 4ECQ). The primer 3′-OH is slightly too low or close to S113 to be in an aligned conformation. The angle of the primer 3′ oxygen, substrate phosphorus, and its bridging oxygen is 160°. The dT primer terminus close to S113 (light blue) in the S113A Pol η ground state (PDB ID 7M7Q) moves 1.3 Å toward the misaligned conformation when an alanine is substituted in place of S113. B, S113A Pol η correct and incorrect substrate incorporation efficiency during dN or rN extension in the presence of Mg²⁺ based on steady-state kinetics. The fold-change between correct and incorrect substrate incorporation efficiencies represents a measurement of substrate discrimination. The bars represent the mean of triplicate measurements of the catalytic efficiencies (kcat/KM) for incorporation of dATP (blue) and dGTP (red) opposite dT. Data are generated from Table S2. dGTP, deoxyribose guanine triphosphate; dN, deoxynucleotide; Pol η, DNA polymerase η; rN, ribonucleotide; dT, deoxyribose thymine.]
Structures of pol η misincorporation complex with rN at the primer terminus
We further investigated how a rN primer end affects primer alignment by capturing the ternary misincorporation complex. To prevent catalysis, we prepared ternary Pol η structures with rN-ended primers complexed with 2′-deoxyguanosine-5′-[(α,β)-imido]triphosphate (dGMPNPP). We determined structures of Pol η with rA, rU, rC, and ribose guanine (rG) at the primer terminus. In the rA structure, the primer is 100% aligned even with the incorrect substrate, in contrast to only 25% in the dA structure (Fig. 5, A and B) (5). It is interesting to note that in the rA-primed structure, the sugar pucker of the rA base was already in a C3′-endo conformation, as opposed to that of the dA structure, which was in a C2′-endo conformation (Figs. 5A and 6). Similarly, in the rU structure, 100% of the rU primer existed in a C3′-endo aligned down conformation (Fig. 5, C and D). In contrast, the primer termini in the rC and rG structures existed in the misaligned up conformation, similar to the structures with dC and deoxyribose guanine (dG) (Fig. 7, A and B and Fig. S1) (62). The decreased efficiency of bypass mediated by perturbations in primer terminus alignment has also been observed during 8,5′-cyclo-2′-deoxyadenosine and phenanthriplatin bypass (49, 58, 59). Interestingly, all of our rN-primed structures showed minimal changes in base-pairing and planar geometries, similar to what has been observed during Pol η-mediated cyclobutane pyrimidine dimer and cisplatin bypass (Figs. S2 and S3) (49, 60). In contrast, during phenanthriplatin bypass, weaker pi-stacking interactions between a template phenanthriplatin and the incoming nucleotide were suggested to explain the 6-fold drop in nucleotide binding.
We suspect that the primer end is in dynamic equilibrium between the misaligned and aligned conformations, as observed in previous in crystallo studies (5, 61). The occupancy of the aligned conformation might be too low in the rC and rG structures to be refined at the current resolution. Because Mn²⁺ has been shown to improve primer alignment, we captured the same rC- and rG-ended Pol η mismatch structures with Mn²⁺. The rG-primed structure with Mn²⁺ showed increased presence of the primer terminus in the down aligned conformation (from 0% to 30%) (Fig. 7C and Fig. S1). The sugar pucker of the down conformation was also in the C3′-endo configuration. In contrast, for the structures with rC at the primer terminus, the primer remained in the up misaligned conformation, consistent with the kinetic studies showing dC/rC-primed DNA with the highest incorrect nucleotide discrimination (Fig. 7D and Fig. S1).
In crystallo rN extension of pol η
Previous studies have shown that nonhydrolyzable nucleotide analogs such as dNMPNPPs may impede primer alignment (5). Thus, we visualized the misincorporation process of dGTP across dT in the presence of Mn²⁺ from a single-uridine-primed DNA substrate. We crystallized Pol η complexed with dGTP, Ca²⁺, and K⁺ to prevent the reaction. In this ground state, 30% of the rU primer termini existed in the aligned down conformation, 3.6 Å away from the target α-phosphate (Fig. 8A). dGTP formed a wobble base-pair with the template dT. Similar to dT extension (5), 70% of R61 in the ground state was already flipped away from the triphosphate motif of the incoming nucleotide to stabilize the wobble dG-dT base pair.
To track the reaction process, we determined six structures of Pol η soaked in 10 mM Mn²⁺ for 30 to 300 s (Table S3). Mn²⁺ was chosen over Mg²⁺ due to the higher occupancies of the incoming nucleotide and its better signal in X-ray diffraction. The resolutions of these structures are similar across the different soaking times, ranging from 2.05 to 2.2 Å. After 30 s of soaking in 10 mM Mn²⁺, 70% of the Me²⁺A site was already saturated with Mn²⁺, and 75% of the rU primer had moved down into the aligned conformation, residing 3.6 Å away from the target α-phosphate (Fig. 8B). In comparison, after 30 s of soaking with a dT-ended primer, 65% of the Me²⁺A site was saturated with Mn²⁺, and 40% of the thymine primer was aligned for misincorporation. The Me²⁺A during rU extension was assigned with 70% occupancy and was in an optimal octahedral geometry. Despite the improvement in alignment, the 3′-OH of rU resided 2.6 Å away from the Me²⁺A, 0.5 Å further compared to that during correct incorporation. After 180 s of soaking, clear electron density for the newly formed bond was visible and was assigned at 50% occupancy between the rU 3′-OH and the target α-phosphate (Fig. 8, C and D). Unlike during dT extension, in which the Me²⁺C appeared before product formation, the Me²⁺C here appeared simultaneously with product formation.

[Figure caption residue: All electron density maps apply to the molecule colored in pink. The 2Fo−Fc map for everything including Me²⁺A and Me²⁺B, dGMPNPP, and catalytic residues and S113 (blue) was contoured at 2 σ. The Fo−Fc omit map for the primer terminus (green) was contoured at 3.5 σ in A and B, and 2.7 σ in C and D. rU, ribose uridine; dN, deoxynucleotide; Pol η, DNA polymerase η; rN, ribonucleotide; dT, deoxyribose thymine.]
Discussion
Many efforts have investigated the kinetic and structural mechanisms surrounding polymerase fidelity (64-73). It was proposed that exonuclease proofreading and the finger domains' conformational changes (64, 65) play significant roles in substrate discrimination. However, polymerases that lack proofreading exonuclease and finger domain conformational changes still incorporate correct bases versus incorrect bases more efficiently than what can be provided by Watson-Crick base pairing melting energies, which have been estimated to be only 0.3 kcal/mol (2, 66, 69, 74-76). More recently, studies of Pol β and Pol η have shown that primer alignment contributes to the intrinsic polymerase incorrect nucleotide discrimination and is perturbed during misincorporation (5, 61). Here, we show that during rN extension, misincorporation efficiency increases over 10-fold (Fig. 2B). Systematic structural studies confirmed that this rise in misincorporation is likely due to improved primer alignment in the presence of the incorrect incoming nucleotide (Figs. 5, 7 and 8). Pol η misincorporation is sequence dependent, with dA/dT at the primer end showing higher misincorporation and dC with lower misincorporation (62). The wobble base pair during rN extension looked identical to the previously reported dN extension mismatch structures, suggesting similar pi-stacking interactions (62). Consistent with the trend of misincorporation, the rA and rU primer ends are better aligned compared to rC (Figs. 5 and 7). These multiple levels of correlation between primer end alignment and misincorporation highlight the critical role of primer alignment in polymerase substrate discrimination control. All of our binary, ternary, and in crystallo reaction structures were determined in the same space group with similar parameters, and the observed difference in primer alignment was not likely influenced by the crystal lattice.
During Pol η-promoted DNA synthesis, the primer terminus overcomes a C2′-endo to C3′-endo barrier (3) to avoid steric clashes with the nonbridging oxygen of the incoming nucleotide. Our structures suggest that an extra 2′-OH on the sugar, as in an rN, affects the sugar pucker conformation: the rN primer termini adopt the C3′-endo form more readily in the aligned conformation before product formation (Fig. 6). This also implies a lower barrier to product formation than during DNA extension. Minimal steric clashes of rNs at the penultimate primer (P-2) position in A-family Pol I, B-family RB69, X-family Pol β, and Y-family Pol η suggest that the C3′-endo conformation is preferred as the product form during rN incorporation (Fig. S4). The combination of improved primer alignment and a weaker C2′-endo to C3′-endo sugar-pucker barrier may explain the altered substrate discrimination by Pol η. Many nucleoside analog drugs carry modifications on the 3′ and 2′ carbons of the sugar ring and inhibit DNA synthesis at the extension step (77). Cytarabine, which resembles dC but has a 2′-OH in the β orientation, is an effective chain terminator and a drug for leukemia treatment (77-80). Elevated Pol η expression has been found to help bypass cytarabine (81, 82). Structural studies show that cytarabine remains C2′-endo at the primer terminus even during correct nucleotide incorporation (83). This altered sugar-ring conformation might raise the barrier to C2′-endo to C3′-endo conversion and thereby inhibit polymerase extension (83, 84). Similarly, nucleoside analog drugs such as entecavir and galidesivir contain C2′-modifications and possibly exert their inhibitory effects through similar mechanisms.
Our study of rN extension suggests that the 2′-OH significantly affects polymerase misincorporation. Because Me2+A-mediated primer alignment is a required step in DNA synthesis, we hypothesize that the elevated incorporation error rate observed here may be a universal property of all polymerases. However, polymerases from different families may have evolved specific structural features to discriminate against the 2′-OH at the primer end. Although Pol η and the X-family polymerases can extend past rN primers with high efficiency, Pol ε efficiency decreases over 3300-fold during rN extension (46). Further biochemical and structural studies of different polymerases are needed to clarify the effect of the 2′-OH group on catalysis, misincorporation, and primer extension. The reduced efficiency of Pol ε in extending rN primers might indicate the involvement of translesion polymerases in extending such primers (5, 61).
Experimental procedures Protein expression and purification
Wildtype human polymerase η (Pol η, residues 1-432) was cloned into a modified pET28p vector with an N-terminal 6-histidine tag and a PreScission Protease cleavage site as described (56). For protein expression, the Pol η plasmid was transformed into BL21(DE3) Escherichia coli cells. When the absorbance of the culture reached 0.8, isopropyl β-D-1-thiogalactopyranoside was added to a final concentration of 1 μM. After 20 h of induction at 16 °C, the cells were collected by centrifugation and resuspended in a buffer containing 20 mM Tris (pH 7.5), 1 M NaCl, 20 mM imidazole, and 5 mM β-mercaptoethanol. After sonication, Pol η was loaded onto a HisTrap HP column (GE Healthcare) pre-equilibrated with the same buffer. The column was washed with 300 ml of buffer to remove nonspecifically bound proteins, and Pol η was eluted with a buffer containing 20 mM Tris (pH 7.5), 1 M NaCl, 300 mM imidazole, and 3 mM dithiothreitol (DTT). The eluate was incubated with PreScission Protease to cleave the N-terminal 6-histidine tag. Pol η was then buffer-exchanged and desalted into 20 mM 2-(N-morpholino)ethanesulfonic acid (MES) (pH 6.0), 250 mM KCl, 10% glycerol, 0.1 mM ethylenediaminetetraacetic acid, and 3 mM DTT and loaded onto a MonoS 10/100 column (GE Healthcare). The protein was eluted with an increasing KCl gradient. Finally, Pol η was polished on a Superdex 200 10/300 GL column (GE Healthcare) in a buffer containing 20 mM Tris (pH 7.5), 450 mM KCl, and 3 mM DTT.
DNA synthesis assay
Nucleotide incorporation activity was assayed as follows. The reaction mixture contained 1.3 to 180 nM Pol η (WT or S113A), 5 μM DNA, 0 to 400 μM dNTP (either dATP or dGTP), 100 mM KCl, 50 mM Tris (pH 7.5), 5 mM MgCl2, 3 mM DTT, 0.1 mg/ml bovine serum albumin, and 4% glycerol. Incorporation assays used the DNA template and 5′-fluorescein-labeled primer listed in Table 1. Reactions were conducted at 37 °C for 5 min and stopped by adding formamide quench buffer to final concentrations of 40% formamide, 50 mM ethylenediaminetetraacetic acid (pH 8.0), 0.1 mg/ml xylene cyanol, and 0.1 mg/ml bromophenol blue. After heating to 97 °C for 5 min and immediate placement on ice, reaction products were resolved on 22.5% polyacrylamide urea gels. The gels were visualized with a Sapphire Biomolecular Imager and quantified using its built-in software. Quantification of kcat, KM, and Vmax, along with curve fitting and graphic representation, was performed in GraphPad Prism. Source data for the urea gels are provided as a Source Data file.
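The paper fits kinetics in GraphPad Prism; as a hedged equivalent, the sketch below shows how Vmax, KM, and kcat could be recovered from gel-quantified rates with SciPy, assuming Michaelis-Menten behavior. All concentrations and rates here are placeholders, not data from the assay.

import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, v_max, k_m):
    return v_max * s / (k_m + s)

s = np.array([5, 10, 25, 50, 100, 200, 400], dtype=float)  # dNTP, uM (placeholder)
v = np.array([0.8, 1.5, 2.9, 4.1, 5.2, 5.9, 6.3])          # initial rate, nM/s (placeholder)

(v_max, k_m), _ = curve_fit(michaelis_menten, s, v, p0=(v.max(), 50.0))
e_total = 10.0               # enzyme concentration, nM (placeholder)
k_cat = v_max / e_total      # kcat = Vmax / [E]total
print(f"Vmax = {v_max:.2f} nM/s, KM = {k_m:.1f} uM, kcat = {k_cat:.3f} s^-1")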
Crystallization
Pol η was concentrated to 300 μM in a buffer containing 20 mM Tris (pH 7.5), 0.45 M KCl, and 3 mM DTT. DNA, dGTP or dGMPNPP, Ca2+, and low-salt buffer [20 mM Tris (pH 7.5) and 3 mM DTT] were then added to this polymerase solution at a molar ratio of 1:1.2:1:1 for Pol η, DNA, dGTP or dGMPNPP, and Ca2+, bringing the Pol η concentration to 100 μM. After the solution was kept on ice for 10 min, additional dGTP or dGMPNPP was added to a final concentration of 0.5 mM. The DNA template and primer used for crystallization are listed in Table 2. All crystals were obtained within 4 days at room temperature using the hanging-drop vapor-diffusion method against a reservoir solution containing 0.1 M MES (pH 6.0) and 9 to 15% (w/v) PEG2K-MME.
Chemical reaction in crystallo
The crystals were first transferred into and incubated in a pre-reaction buffer containing 0.1 M MES (pH 7.0, titrated with KOH), 100 μM dGTP, and 20% (w/v) PEG2K-MME for 30 min. The chemical reaction was initiated by transferring the crystals into a reaction buffer containing 0.1 M MES (pH 7.0), 20% (w/v) PEG2K-MME, and 10 mM MnCl2. After incubation for the desired time, the crystals were quickly dipped in a cryo-solution supplemented with 20% (w/v) glycerol and flash-cooled in liquid nitrogen.
Data collection and refinement
Diffraction data were collected at 100 K on LS-CAT beamlines 21-ID-D, 21-ID-F, and 21-ID-G at the Advanced Photon Source (Argonne National Laboratory). Data were indexed in space group P61, scaled, and reduced using XDS (85). Isomorphous Pol η structures with Mg2+ (PDB: ) were used as initial models for refinement using PHENIX (86) and COOT (87). Initial occupancies were assigned for the substrate, reaction product, PPi, Me2+A, Me2+B, and Me2+C of the ternary ground state, following the previous protocol (4). Once no significant Fo-Fc peaks remained and each atom's B factor was roughly similar to that of its ligand, we assigned occupancies for the same regions in the intermediate timepoints (Figs. S5-S7). Occupancies were assigned to the misaligned and aligned conformations of the primer termini until no significant Fo-Fc peaks remained. For structures in which the two primer conformations were at equilibrium (neither 100% misaligned nor 100% aligned), occupancies were assigned until the Fo-Fc peaks for both conformations (while they still remained) no longer increased. In addition, for structures with residual positive Fo-Fc peaks around the Me2+ binding sites or primer termini, the assigned occupancy was left unchanged when a 10% change in occupancy (e.g., 100% to 90%) failed to significantly change the intensity of the Fo-Fc peaks. Source data for the electron densities in r.m.s. density units are provided as a Source Data file. Each structure was refined against the highest-resolution data collected, which ranged from 1.75 to 2.2 Å. Software used in this project was compiled and configured by SBGrid (88). Data collection and refinement statistics are summarized in Table S3, A-C. All structural figures were drawn using PyMOL (http://www.pymol.org).
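The occupancy-assignment logic described above can be summarized as a schematic sketch (our paraphrase, not the authors' software). Here peak_height is a hypothetical callback standing in for the external step of re-refining at a trial occupancy in PHENIX and measuring the residual Fo-Fc peak intensity.

def assign_occupancy(peak_height, start=1.0, step=0.10, tol=0.05):
    # Lower the trial occupancy in 10% steps; stop when a further 10% change
    # no longer significantly alters the residual Fo-Fc peak intensity.
    occ = start
    while occ > step:
        if abs(peak_height(occ - step) - peak_height(occ)) < tol:
            break            # peak insensitive to a 10% change: freeze occupancy
        occ -= step
    return occ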
Supporting information-This article contains supporting information.
Table 1
Kinetic assay DNA sequences | 2023-01-26T16:01:08.282Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "2a02d0f638a57318071631b4df2b8461493fc216",
"oa_license": "CCBYNCND",
"oa_url": "http://www.jbc.org/article/S0021925823000704/pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "f71814a895cacbc72d6d51fe1987a9d8c80db37b",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
52053301 | pes2o/s2orc | v3-fos-license | Adenosine deaminase activity in type 2 diabetes mellitus: does it have any role?
Background Diabetes mellitus is a group of metabolic disorders of carbohydrate metabolism in which glucose is underused, producing hyperglycemia. Diabetic patients are prone to opportunistic infection, so serum ADA levels are important in these patients as a screening test for tuberculosis and autoimmune diseases. The present study was therefore conducted to estimate serum ADA activity, glycated hemoglobin (HbA1c), and fasting and postprandial glucose levels in patients with T2DM and to correlate the serum ADA level with HbA1c and the fasting and postprandial glucose levels. Methods This was a hospital-based cross-sectional study done at BPKIHS, Dharan, Nepal. A total of 204 diagnosed patients with T2DM (102 males and 102 females) and 102 healthy controls were enrolled. Diabetic patients were categorized as uncontrolled or controlled on the basis of HbA1c (HbA1c > 7% = uncontrolled diabetes; HbA1c < 7% = controlled diabetes). Results Serum ADA levels (U/L) were significantly raised in uncontrolled diabetic patients (49.24 ± 16.89) compared with the controlled group (35.74 ± 16.78) and healthy controls (10.55 ± 2.20), p value < 0.001. A significant positive correlation was obtained between serum ADA and HbA1c, fasting plasma glucose, and postprandial glucose, respectively. Conclusion There is a significant increase in serum ADA activity in DM with increasing HbA1c levels, which may play an important role in predicting the glycemic and immunological status of these patients.
Background
Diabetes mellitus (DM) refers to a group of common metabolic disorders that share the phenotype of hyperglycemia [1]. An estimated 143 million people worldwide suffer from diabetes [2], almost five times the estimate of ten years ago, and this number may double by 2030 [3]. Data from Southeast Asia indicate that more than 436,000 people are affected by type 2 diabetes mellitus (T2DM), with the number projected to rise to 1,328,000 by 2030 [4]. The chronic hyperglycemia of diabetes is associated with long-term damage, dysfunction, and failure of various organs, especially the eyes, kidneys, nerves, heart, and blood vessels. Autoimmune destruction of the β-cells of the pancreas with consequent insulin deficiency, and abnormalities that result in insulin resistance, are the processes involved in the development of diabetes [5]. T2DM has also long been characterized as a complex metabolic syndrome with multifactorial etiology. The disease involves abnormal metabolism of all major biomolecules, i.e., carbohydrates, fats, and proteins, collectively leading to increased blood glucose and lipid levels [6,7].
Insulin resistance is associated with low-grade, tissue-specific inflammatory responses induced by various pro-inflammatory and oxidative stress mediators, notably cytokines such as interleukin-1 beta, interleukin-6, and tumor necrosis factor-alpha, along with numerous adipocytokines and chemokines, epigenetic factors, and other transcriptional and metabolic pathways. Moreover, chronic exposure to pro-inflammatory mediators stimulates the activation of cytokine signaling proteins, which ultimately block insulin signaling receptors in the β-cells of pancreatic islets [7-9]. Adenosine deaminase (ADA) is a polymorphic enzyme that catalyzes the irreversible deamination of adenosine to inosine and has an important role in regulating adenosine concentration. Inosine and 2′-deoxyinosine are converted to the metabolic products hypoxanthine and xanthine and subsequently to uric acid [10]. ADA is considered a marker for the assessment of cell-mediated immunity [11]. ADA has been suggested to be an important enzyme for modulating the bioactivity of insulin [12], but its clinical significance in T2DM has not yet been proven. Serum ADA is increased in states of oxidative stress and cell membrane damage [13].
ADA has previously been reported to be a marker of insulin function [10,13], but its correlation with glycemic status in diabetic patients has not been studied extensively in our country. Although some reports on ADA levels in diabetic subjects are available, they are inconclusive and controversial. Since a relationship exists between ADA and cell-mediated immunity [14], this study was undertaken as a preliminary study to determine serum ADA activity and highlight its importance in the immunopathogenesis of T2DM. Thus, the present study was conducted to estimate serum ADA activity, glycated hemoglobin (HbA1c), and fasting and postprandial glucose levels in patients with T2DM and to correlate the serum ADA level with HbA1c and the fasting and postprandial glucose levels.
Study design
A hospital-based comparative cross-sectional study conducted in the Department of Internal Medicine and the Department of Biochemistry at B. P. Koirala Institute of Health Sciences (BPKIHS), Dharan.
Sample size
Two hundred and four patients diagnosed with T2DM were enrolled; 102 healthy individuals served as controls.
Sampling technique
Diabetic patients were selected from the medicine OPD by convenience sampling, and the biochemical parameters were assessed in the Department of Biochemistry, BPKIHS. Controls were recruited from the routine laboratory (Department of Biochemistry).
Inclusion criteria
Newly diagnosed and follow-up cases of T2DM visiting medicine OPD.
Exclusion criteria
Patients with any other chronic disease or with any complications due to diabetes were excluded.
Blood was collected in EDTA and serum vials from the study population for fasting and postprandial blood glucose. Serum was separated and stored at −20 °C until the tests were performed. The serum samples were used for the analysis of biochemical parameters. Serum ADA was measured by the manual method described by Giusti and Galanti (1984) [15]. ADA activity is expressed in enzyme units per litre (U/L).
The HbA1c determination is based on the turbidimetric inhibition immunoassay (TINIA) for hemolyzed whole blood.
Glycohemoglobin (HbA1c) in the sample reacts with anti-HbA1c antibody to form soluble antigen-antibody complexes. Because only one specific anti-HbA1c antibody binding site is present on the HbA1c molecule, insoluble complex formation does not take place at this stage. Addition of R2 (polyhapten reagent) starts the second reaction: the polyhaptens react with excess anti-HbA1c antibodies to form an insoluble antibody-polyhapten complex, which is measured turbidimetrically. Fasting plasma glucose (FPG) and postprandial plasma glucose (PPG) were measured by the hexokinase method (Cobas c311 autoanalyser) and are expressed in mg/dl. DM was classified as controlled or uncontrolled on the basis of the HbA1c level (<7% = controlled; >7% = uncontrolled). Data were collected and entered using Microsoft Excel and analyzed using the Statistical Package for the Social Sciences (SPSS) version 11.5. Data are expressed as figures, percentages, means, and standard deviations. The independent-samples t-test was used to compare baseline and biochemical data between healthy controls and patients with T2DM. Analysis of variance (ANOVA) was used to compare ADA levels among healthy controls and controlled and uncontrolled T2DM patients. Spearman's correlation was applied to correlate serum ADA levels with markers of glycemic control, viz., HbA1c, FPG, and PPG.
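A minimal sketch of the reported analysis pipeline using SciPy in place of SPSS 11.5 is shown below. The arrays are synthetic placeholders shaped like the study groups (102 healthy controls, 204 T2DM patients split into controlled and uncontrolled); they are not the actual patient data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ada_healthy = rng.normal(10.6, 2.2, 102)        # serum ADA, U/L (placeholder)
ada_controlled = rng.normal(35.7, 16.8, 100)    # placeholder
ada_uncontrolled = rng.normal(49.2, 16.9, 104)  # placeholder
ada_t2dm = np.concatenate([ada_controlled, ada_uncontrolled])
hba1c = rng.normal(8.0, 1.5, 204)               # %, placeholder

t, p_t = stats.ttest_ind(ada_healthy, ada_t2dm)                      # controls vs T2DM
f, p_f = stats.f_oneway(ada_healthy, ada_controlled, ada_uncontrolled)  # three-group ANOVA
rho, p_rho = stats.spearmanr(ada_t2dm, hba1c)                        # ADA vs HbA1c
print(f"t-test p={p_t:.3g}; ANOVA p={p_f:.3g}; Spearman rho={rho:.2f}, p={p_rho:.3g}")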
Results
The study population comprised a total of 204 participants diagnosed with T2DM and 102 healthy controls. Demographic data are presented in Table 1. Mean serum ADA was significantly higher in diabetic patients than in healthy controls, as shown in Table 2, and was significantly higher in uncontrolled than in controlled T2DM, as shown in Table 3. Spearman's correlation of FPG, PPG, and HbA1c with serum ADA showed a significant positive correlation for all three glycemic parameters, supporting the use of ADA as an alternative marker of glycemic control in diabetic patients. In particular, the significant positive correlation of serum ADA with HbA1c, a good marker of long-term glycemic control, highlights the potential of ADA as a marker of glycemic control, as depicted in Table 4.
Discussion
T2DM is the predominant form of diabetes worldwide, accounting for 90% of cases globally [17]. It is a multifactorial systemic disease in which hereditary and environmental causes are the major attributable factors; these lead to insulin resistance and defective insulin secretion by pancreatic beta-cells. Immunological disturbances of cell-mediated origin are believed to originate from T-lymphocyte dysfunction. In vitro studies have shown that in T2DM, inappropriate immune responses may result from defects in the action of insulin, which is required for T-lymphocyte function [18]. ADA plays a crucial role in lymphocyte proliferation and differentiation and shows its highest activity in T-lymphocytes [17,18]. The present study shows a significant elevation of ADA levels (U/L) in uncontrolled diabetic subjects (49.24 ± 16.89) compared with the controlled group (35.74 ± 16.78). The high plasma ADA activity might be due to abnormal T-lymphocyte responses or proliferation and may point toward a mechanism involving its release into the circulation [19,20]. Hence, we suggest that increased ADA activity in diabetic individuals could be due to altered insulin-related T-lymphocyte function or to increased immunological dysfunction. Previously, Chang and Shaio demonstrated that impaired cell-mediated immunity was associated with abnormal lymphocyte proliferation [21].
This study shows that ADA is raised in patients with diabetes mellitus, with values higher in uncontrolled than in controlled T2DM. Thus, altered serum ADA levels may help in predicting immunological dysfunction in diabetic individuals. The significant positive correlation between serum ADA levels and HbA1c suggests that the serum ADA level can also be used as a biomarker of glycemic control in diabetic patients. Studies have shown a direct correlation between the expression and activity of ADA and the severity of inflammation, and T2DM is associated with chronic hyperglycemia and an ongoing low-grade systemic inflammation [22]. The role of ADA in cellular immunity was first identified in patients with severe combined immunodeficiency (SCID) [23,24]. High activity of this enzyme has been considered a reflection of the immunological disturbances observed in tuberculosis [25], infectious mononucleosis [26], jaundice [27], leukemia [28], and other conditions [29-31].
Conclusion
Assessment of the serum ADA level is cost-effective, and efficient use of this biomarker may help establish the enzyme as a good marker for assessing cell-mediated immunity in diabetic individuals. Thus, we conclude that elevated ADA activity may be an important indicator of glycemic and immunological status in patients with T2DM.
Funding
The author did not receive funding from any national and international organization to conduct the study.
Availability of data and materials
The datasets generated and/or analyzed during the current study are not publicly available due to the unavailability of disclosure of patient's identification but are available from the corresponding author on reasonable request.
Author's contribution AN designed the study, collected the history and blood samples from the study participants after the inclusion criteria were met, performed the required laboratory tests on the blood samples, measured the analyte, carried out the statistical analysis of the data, and wrote the paper. ST contributed to the implementation of the idea by designing the study and was actively involved in the statistical analysis and report writing. SK contributed to sample collection, was actively involved in analysis of the analytes, and helped with correction of the manuscript. ML is the guide of the principal author during her postgraduate study and contributed as a supervisor and mentor by helping design the study and write the article. NB is one of the mentors of AN; he helped edit the manuscript and provided substantial help with the final drafting of the article. RM actively contributed to sample collection and proper diagnosis and also helped in the statistical analysis. All of the authors mentioned above contributed to the completion of the study. All authors have read and approved the final manuscript.
Ethics approval and consent to participate This study was approved and received ethical clearance from the Institutional Review Committee (IRC, BPKIHS); the ethical clearance number is IRC/636/015. All study participants were enrolled only after written informed consent was obtained.
Consent for publication
Not Applicable. | 2018-08-20T17:22:33.436Z | 2018-08-20T00:00:00.000 | {
"year": 2018,
"sha1": "d55c6666cd2d32001405128c6633a5f55ee7f4d2",
"oa_license": "CCBY",
"oa_url": "https://bmcendocrdisord.biomedcentral.com/track/pdf/10.1186/s12902-018-0284-9",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d55c6666cd2d32001405128c6633a5f55ee7f4d2",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
212666742 | pes2o/s2orc | v3-fos-license | Defining the substrate for ventricular tachycardia ablation: The impact of rhythm at the time of mapping
Background Voltage mapping is critical to define substrate during ablation. In ventricular tachycardia, abnormal potentials may be targets. However, the wavefront of activation could affect local signal characteristics; this may be particularly true when comparing sinus rhythm with paced rhythms. We sought to determine how the activation wavefront affects electrogram characteristics. Methods Patients with ischemic cardiomyopathy and ventricular tachycardia, without fascicular or bundle branch block, were included. Point-by-point mapping was performed; at each site, one point was acquired during an atrial paced rhythm and one during a right ventricular paced rhythm. Signals were adjudicated after ablation to define late potentials and fractionated potentials and to quantify local voltage. Areas of abnormal voltage (defined as <1.5 mV) were also determined. Results Nine patients were included (age 61.3 ± 9.2 years, 56% male, mean LVEF 34.9 ± 8.6%). The LV endocardium was mapped with an average of 375 ± 53 points/rhythm. Late potentials were more frequent during right ventricular pacing (51 ± 21 versus 32 ± 15, p < 0.01), while overall scar area was larger during atrial pacing (22 ± 11% vs 13 ± 7%, p < 0.05). In 1/9 patients, abnormal potentials seen during a right ventricular paced rhythm but not during an atrial paced rhythm were ablated, resulting in non-inducibility. Conclusion The rhythm in which mapping is performed affects electrogram characteristics. Whether one rhythm is preferable for mapping remains to be determined; however, defining local signals during normal conduction as well as during varied paced rhythms may increase the likelihood of elucidating the arrhythmogenic substrate.
Introduction
Catheter ablation is a widely accepted therapy for ventricular tachycardia (VT) refractory to drugs in structurally abnormal hearts [1,2]. Entrainment and activation mapping have been considered the gold standard to identify areas containing critical isthmuses for the VT circuit. However, in the presence of hemodynamic instability and multiple or non-inducible VTs, substrate-based approaches may be an acceptable alternative [3e5].
Substrate-guided ablation involves targeting areas of local abnormal ventricular activation (LAVA), i.e., late potentials and regions of fragmented signals suggestive of local conduction slowing, or targeting low-voltage areas as traditionally defined by pre-specified cutoff values. Previous reports suggest that targeting the scar region and/or complete elimination of LAVA is associated with improved VT-free survival [3,6]. However, the anatomical location of scar within the ventricular myocardium has previously been described to affect signal characteristics [7]. It is further known that the direction of the activation wavefront may alter the appearance of local electrograms. These rhythm-associated changes could include the voltage amplitude (and thereby the classification as scar versus border zone versus normal tissue) and local signal properties (including differentiation as a fractionated, late, or split potential). Such changes could, in turn, affect the identification of areas of interest for a substrate-based ablation strategy.
In this study, we sought to rigorously define how LAVA characteristics, scar region and areas of abnormal voltage are affected during an atrial paced rhythm, during which ventricular activation utilizes the His-Purkinje system, versus a right ventricular paced rhythm.
Study population
This study enrolled 9 patients with ischemic cardiomyopathy undergoing VT ablation using a 3D electroanatomic mapping system. All patients had recurrent sustained VT refractory to antiarrhythmic drugs, resulting in implantable cardioverter-defibrillator therapy. Written informed consent was obtained from all patients. The Mayo Clinic Institutional Review Board approved the study.
Electrophysiological study
All antiarrhythmic drugs were discontinued at least 5 half-lives prior to the ablation if there was no sustained VT (one month in the case of amiodarone). ICD therapies were turned off prior to the start of the procedure. A 5 French (Fr.) quadripolar catheter was placed in the right ventricular apex and a 7 Fr. decapolar catheter was placed in the coronary sinus. The use of other diagnostic catheters was at the discretion of the provider. The left ventricular endocardium was accessed via either a retrograde aortic or a transseptal approach. Epicardial mapping or ablation was performed only if clinically indicated during the study.
Electroanatomic mapping
Mapping was performed using the CARTO 3 (Biosense Webster, Diamond Bar, CA) mapping system. A detailed point-by-point map was created using an ablation catheter (THERMOCOOL®, Biosense Webster). At each location, one point was acquired during pacing from an atrial catheter and another during right ventricular pacing at the same rate. All bipolar signals were recorded between the distal electrode pair, filtered from 30 to 400 Hz, and displayed at 100 mm/s. Low-voltage ventricular myocardium was defined as a peak-to-peak bipolar voltage of <1.5 mV, and a voltage of <0.5 mV with inability to capture at high output was defined as scar. Fractionated signals were defined as sharp, high-frequency local signals of low amplitude showing multiple components. Late signals were defined as sharp, high-frequency local ventricular signals of any amplitude occurring after the onset of the QRS, separated from the far-field ventricular signal by an isoelectric segment.
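A minimal sketch of the voltage classification rule stated above, as it could be applied to exported mapping points, is shown below; the two-field point format is our own assumption, not the CARTO export schema.

def classify_point(bipolar_mv, captures_at_high_output=True):
    # <0.5 mV with inability to capture at high output -> scar;
    # <1.5 mV otherwise -> low-voltage myocardium; >=1.5 mV -> normal
    if bipolar_mv < 0.5 and not captures_at_high_output:
        return "scar"
    if bipolar_mv < 1.5:
        return "low_voltage"
    return "normal"

points = [(0.3, False), (0.3, True), (0.9, True), (2.1, True)]
print([classify_point(v, cap) for v, cap in points])
# ['scar', 'low_voltage', 'low_voltage', 'normal']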
A complete LV map was made in both rhythms and all points were retrospectively reviewed and adjudicated by two independent operators. Maps were set to a fill threshold of 10 mm.
Radiofrequency ablation
The ablation procedure was performed independently of the results of the study. Activation and entrainment mapping were used to guide ablation where feasible. When a substrate-based approach was used, the areas targeted for ablation were at the sole discretion of the physician. All patients underwent pre- and post-ablation programmed ventricular stimulation using drive trains of 600 ms and 400 ms with single, double, and triple extrastimuli from two different sites.
Statistical analysis
All continuous variables are expressed as mean ± SD and were compared using the Student t-test. Categorical variables are expressed as absolute numbers and percentages. All tests were two-tailed, with p < 0.05 considered significant.
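As an illustration, the per-patient comparison of late-potential counts between rhythms could be run as follows. The paper reports a Student t-test; because each patient contributes a count in both rhythms, a paired test is shown here as one reasonable reading. The nine counts per rhythm are synthetic placeholders, not study data.

import numpy as np
from scipy import stats

lp_rv_paced = np.array([55, 40, 70, 30, 62, 48, 75, 35, 44])  # placeholder counts
lp_atrial = np.array([35, 25, 45, 20, 38, 30, 50, 22, 26])    # placeholder counts

t, p = stats.ttest_rel(lp_rv_paced, lp_atrial)  # paired t-test across 9 patients
print(f"paired t = {t:.2f}, p = {p:.4f}")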
Demographics
Baseline characteristics of the patients are described in detail in Table 1. The average age was 61.3 ± 9.2 years, with 56% male. The mean LVEF was 34.9 ± 8.6%. The LV endocardium was mapped with an average of 375 ± 53 points/rhythm for each patient.
Effect of rhythm on electrogram characterization and bipolar voltage
Of all the electrograms reviewed, late potentials were more frequent during RV pacing (Table 2). There was no net effect on the number of fractionated potentials seen. Low-voltage points were more frequent during atrial pacing. Fig. 1 shows an example of a late potential during atrial pacing that fuses during ventricular pacing. Fig. 2 shows an example of the change in voltage distribution and scar burden with a change in rhythm in one patient.
Outcomes
Amongst the 9 patients, 3 had at least 1 mappable VT and all had at least 1 unmappable VT. Thus, substrate-based ablation and ablation guided by pace mapping within the scar region were performed in all patients. Five patients were ablated based on substrate identified during an atrial paced rhythm and four based on substrate identified during a right ventricular paced rhythm. Programmed stimulation performed after ablation demonstrated non-inducibility in 3/5 atrial paced patients and 4/4 ventricular paced patients. In the 2 atrial paced patients who were still inducible post-ablation, review of the ventricular paced map demonstrated a corresponding region with apparent late potentials not seen in the atrial paced map, extending 2.5 mm beyond the margins where ablation had been performed (Fig. 3). Pace-mapping here exhibited a morphology similar to the induced VT, and further ablation resulted in non-inducibility in 1 patient, though the other remained inducible for a faster, non-clinical VT that was not targeted. There were no adverse events during the study or as part of the ablation. Over 1 ± 0.5 years of follow-up, there was no VT recurrence amongst 7 patients, 4 of whom remained on antiarrhythmic drugs (1 amiodarone, 2 sotalol) and 3 of whom discontinued antiarrhythmic drugs. The 2 patients with recurrence underwent redo ablation at 6 and 9 months, respectively (1 was inducible at the end of the procedure); notably, 1 had been ablated based on the atrial paced rhythm and the other based on a ventricular paced rhythm.
[Fig. 5 legend: Impact of normal His-Purkinje conduction versus RV pacing on electrogram appearance in a region of substrate. Demonstrated is one potential mechanism for why electrograms may appear different depending on the wavefront of activation. In Fig. 5, A-B, two fascicular wavefronts enter the substrate zone and approach the cathode from two angles, one through normal tissue and the other via the scar (5B); as a result, the wavefronts collide in the area of scar simultaneously, producing a single signal. In Fig. 5, C-F, a single wavefront during RV pacing produces one electrogram as it reaches the cathode (5D) and then slowly traverses the scar zone, eventually passing the bipolar recording electrode again and producing a second (or late) potential.]
Discussion
The main findings of this study are that the vector of ventricular depolarization alters the timing, amplitude, and quality of the signal during mapping. Furthermore, in one patient, late potentials appearing during RV pacing outside the scar area noted during atrial pacing were a target for additional ablation, resulting in non-inducibility. These findings support the critical importance of the wavefront in discriminating local signals during cardiac mapping and ablation. Furthermore, as noted in Fig. 1, we demonstrate one case where a late potential visible during native conduction was not obvious during right ventricular pacing. This stands to reason: transseptal activation of the left ventricle during right ventricular pacing may similarly mask certain potentials because of the activation path through a region of scar. Specifically, relative differences in the path of activation and entry into the scar zone during wavefront propagation from two different sites may alter the potential for unmasking signals suggestive of arrhythmogenicity. This principle partly underlies the rationale for using at least two stimulation sites during induction protocols for reentry-based arrhythmias. Furthermore, the distribution of scar may modulate the effects of the activation wavefront, with a greater effect on the manifestation of late potentials during RV pacing for lateral scars than for septal scars, though this did not bear out in subgroup analyses of our study group.
Surviving myocardial cells within scar are a critical part of the VT circuit [8]. Sinus or atrial paced rhythm conducts via the fast His-Purkinje system, allowing synchronous activation of the LV in healthy and diseased myocardium [9,10]. During RV apical pacing, LV activation typically occurs transseptally, though the native conduction system could be engaged as well; for example, part of the left ventricle may be activated via retrograde activation of the right bundle branch, resulting in different vectors of depolarization. Further, the orientation of the electrical wavefront, parallel or perpendicular to the surviving myocardial cells, may alter the temporal spacing of the recorded signals [11] (Fig. 4). Thus, a late signal may become earlier in relation to the QRS complex depending on the propagation wavefront (Fig. 5). This may occur with either native-conduction-based ventricular activation or ventricular pacing.
In our study, there was a significant increase in LAVA points during RV pacing. This may be due to the lack of involvement of the conduction system, allowing slower conduction and better temporal resolution of the local signal. Moreover, this separation may affect the amplitude of the local signal, changing the voltage in the area and altering which potentials are perceived as "low voltage", as seen in our study. Intervening areas of functional and anatomical block from scar may further change the activation wavefront, as has been described previously [12]. The lateness of LAVA has previously been shown to depend on the location of the scar [7]. This delay will also differ with the direction of the wavefront, as areas activated early during sinus rhythm may become late during RV pacing, affecting LAVA characteristics; this may be one reason for the differences in the presented data. It is possible that consistent appearance of LAVA, irrespective of rhythm, may be a more specific marker of high-yield areas to target during catheter ablation. However, inferences based on this small study are speculative at best and require more data. Furthermore, because our study focused on acquiring the same precise site during both pacing trains, many points were discarded from the analysis; with a relatively small number of points comprising the overall map and a high degree of interpolation between them, scar area could not be extrapolated reliably.
Limitations
This is a proof-of-concept study and is best interpreted in the context of its limitations. It did not account for disease within the conduction system, which may vary from patient to patient. We exclusively used the CARTO mapping system, and our findings may not apply to other mapping systems because of differences in proprietary filters and algorithms. We did not assess the impact of different pacing cycle lengths, which may also alter local conduction velocity. Moreover, the clinical benefit of either approach in terms of improved VT ablation outcomes is yet to be determined. Further, we performed point-to-point mapping to ensure a dependable comparison between RV pacing and atrial pacing at each point; the limited number of points obtained may limit extrapolation to cases of higher-density mapping, where less interpolation is needed. Finally, given the small number of patients, larger studies are required to determine the consistency of these findings and their potential role in defining ablation strategies when choosing the rhythm in which to map.
Conclusion
In conclusion, we demonstrate here that the rhythm at the time of mapping has a significant impact on the definition of the arrhythmic substrate of interest during ablation of VT. Future larger studies evaluating the role of each technique in overall ablation outcomes (such as whether one rhythm is optimal over another) are needed.
Funding
No funding was used for this study.
Ethical approval
The study was approved by the Mayo Clinic Institutional Review Board.
Informed consent
Informed consent was obtained from all included patients.
Declaration of competing interest
None of the authors have any conflicts of interest to disclose. | 2020-03-12T10:13:14.098Z | 2020-03-07T00:00:00.000 | {
"year": 2020,
"sha1": "7ecb1d7a9749a05efbba78b96b6775af413a49de",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.ipej.2020.03.005",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "cb53350110bcd3e4f7aaf4e9406623cbad5ef420",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |