| id | source | version | text | added | created | metadata |
|---|---|---|---|---|---|---|
| 234034949 | pes2o/s2orc | v3-fos-license |
Diversity and Threats of Avifauna in Cheleleka Wetland, Central Rift Valley of Ethiopia
Article history Received: 27 November 2020 Accepted: 16 December 2020 Published Online: 30 December 2020 This study was conducted in Cheleleka Wetland, Central Rift Valley of Ethiopia, to assess the species diversity and threats of avifauna from August to February 2019. Data were analysed using Simpson's index and the Shannon-Wiener index as biodiversity indices. One-way ANOVA was applied to analyse the effect of season on the composition and abundance of species. Questionnaire surveys, key informant interviews and focus group discussions were also used to determine the threats to avifauna in the study area. The results indicated that 49 avian species belonging to 21 families and 10 orders were recorded during both the wet and dry seasons. The Shannon-Wiener diversity index showed that the highest bird species diversity (H'=3.42) was recorded during the wet season. Overgrazing, agricultural expansion, settlement and sand extraction were the major threats to avifauna in the wetland. The results suggest the need to conserve the avifauna by conserving their habitats and raising awareness among local people, which would help reduce threats to biodiversity.
Introduction
Avian species play a significant role in enriching the biological diversity of wetlands. Wetland habitats are considered among the most productive environments in the globe [14,22]. They are home to a wide range of biodiversity, including assemblages of birds, reptiles, amphibians, mammals, fish and invertebrate species [2,25]. These habitats are also vital stores of plant genetic material [8,9]. This is reflected in their recognition by the Ramsar Convention in 1971 as havens for waterfowl [12]. Owing to their immense biodiversity and ecological features, wetlands are also destinations for recreational and ecotourism opportunities [10,23]. Where wetland habitats are developed as ecotourism sites, they provide enormous benefits for ecotourism activities [27].
The IUCN Red List shows that rates of extinction are worsening among species, from those restricted to small islands to those at inland sites [7]. This damage is largely due to birds' low tolerance of even minor ecosystem disruption, which is related to pollution [19], habitat type and bird distribution [3], wetland patch size [17], farming systems and town expansion within the wetland ecosystem [20], and habitat destruction [26].
These anthropogenic factors, acting at the landscape scale, have structured the diversity and abundance of bird species owing to their highly specific habitat requirements [15]. Understanding overall bird responses to disturbance also requires assessing the different disturbance consequences on a seasonal basis, because the impacts of environmental factors are many and vary along seasonal trends.
Ethiopia harbors 864 avian species, of which 19 are endemic, 35 are globally threatened, 1 is introduced, and a further 13 are shared only with Eritrea [29]. Although these findings were recorded from parts of Ethiopia, they suggest that some birds may be under threat or at risk of extinction, given the currently undocumented but observed ecological disturbances in wetlands. Environmental variation and land-use activities within Cheleleka wetland, such as urbanization, the conversion of shrub, woodland and bushland into cultivated land, and the change of Lake Cheleleka into a swamp, have been identified as the major changes [28], and these can disturb bird species diversity and habitat preference.
Systematic studies on bird ecology, richness and abundance are inadequate in Cheleleka wetland. There is a vital need to collect appropriate information on the diversity of the waterbird communities in order to fill gaps in the country's overall bird list from this habitat type. The preparation of a species list is essential to the study of the avifauna of an area, because a list indicates species diversity in a general sense [4,5]. Thus, the absence of scientific exploration makes it impossible to determine the current state of bird diversity and habitat preference in the study area. The results of this study will provide biodiversity managers with first-hand information on the types of anthropogenic disturbance, on how these disturbances could change bird abundance in the future, and on the selection of proper management methods for improving the sustainability of bird abundance.
Description of the Study Area
Cheleleka wetland comprises parts of Oromia Regional State and the Southern Nations, Nationalities and Peoples' Regional State (Figure 1). It is located on the upper side of Lake Hawassa, at the exit of the Tikur Wuha River. Geographically, Cheleleka wetland lies between 07°00'13''-07°06'37''N and 38°30'51''-38°34'44''E. It is located around 265 km from Addis Ababa, the capital city of Ethiopia, at altitudes ranging from 1670 to 2000 m a.s.l., and covers a total area of 56.6 km². The major vegetation types found in the studied wetland are Typha (cattail), which is emergent and herbaceous, and Nymphaea odorata (water lily), which is of the floating-leaved type. Mean annual temperature is around 19 °C. Rainfall is relatively high (around 1250 mm annually) in Cheleleka wetland and the surrounding highlands.
Materials and Methods
Ornithological data were collected from 6:30 a.m. to 10:00 a.m. in the morning and from 3:00 p.m. to 6:00 p.m. in the afternoon, when bird activity was highest, and on days with suitable weather conditions [11]. The avian population was assessed using the total count method [21]. In this method, representative wetlands were identified and the birds in these areas were counted. Weekly visits to the site were made for six months, covering both the wet and dry seasons, with an average of two counting weeks per month, for a total of around 80 recording hours. During the counting of birds, the start and end geographical coordinates of each block were saved in a Garmin 72 GPS unit to ensure that the same blocks were repeated during the dry season. The date, including starting and finishing time, bird species, number of individuals and survey site were recorded. Bird identification was carried out based on morphological features and calls [34] and using field guides [34,37], and observations were assisted by Nikon (8x40 mm) binoculars. On each sampling transect line and in each counting session, a species heard without being seen was recorded only once, to avoid overestimating species abundance due to repeated vocalizations by the same individual [37]. Finally, a checklist of birds was prepared on the basis of their scientific names, common names and IUCN status, as per [7] and [34].
Primary and secondary data collection methods, including key informant interviews, focus group discussions and questionnaire surveys, were used to identify threats to bird species. Personal observation was also used to gather information related to threats to bird species within the wetland habitat. The questionnaire contained closed-ended items requiring respondents to indicate their level of agreement with a particular item, such as "yes" or "no"; "increasing", "decreasing" or "unchanged"; or a scale where 1 = disagree, 2 = neutral and 3 = agree, depending on the question, as used by [35]. Detailed interviews were conducted using structured and semi-structured questions. Participants for the detailed interviews were selected purposively based on their tasks, knowledge and relevance to the subject under study.
Three focus group discussions (FGDs) were conducted. Participants were selected purposively based on their duties, knowledge and relevance to the problems under study. The first two FGDs were held with experts (4 from agriculture, 2 tourism experts, 3 natural resources management experts, 3 plant science experts, 2 animal science experts, and 4 wildlife experts). The third FGD was carried out with local community members (2 religious leaders, 4 members from diverse community groups and 4 village administrators).
Deliberate, methodical and careful observation and recording of information on threats to birds was carried out using observation checklists. A camera was used to take pictures of bird species and anthropogenic practices in and around the wetland.
Data Analysis
Statistical Product and Service Solutions (SPSS) version 20 software was used for the statistical analysis. Species diversity was calculated using Simpson's Index (Simpson, 1949) and the Shannon-Wiener Index (Shannon and Wiener, 1949) for both the wet and dry seasons.
The Shannon-Wiener index was calculated as H' = -Σ (pi ln pi) and Simpson's index as D = Σ pi², where H' = Shannon-Wiener index, S = the number of species observed, pi = the proportion of the total sample belonging to the ith species, ln = natural logarithm, and D = Simpson's index. The collected data were presented using descriptive statistics. Numerical results were presented and described through tables, bar charts and pie charts. In addition, the results of the surveys were combined and compared with those of the detailed interviews, field observations, focus group discussions and document analysis.
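A minimal sketch of how the Shannon-Wiener and Simpson's indices can be computed from per-species abundance counts is shown below (in C; the example counts are hypothetical and are not data from this study):

```c
#include <math.h>
#include <stdio.h>

/* Compute Shannon-Wiener H' = -sum(p_i * ln p_i), evenness E = H'/ln(S),
 * and Simpson's index D = sum(p_i^2) from per-species abundance counts.
 * The counts below are hypothetical, for illustration only. */
int main(void) {
    double counts[] = {120, 85, 60, 43, 30, 12, 7, 3};
    int S = (int)(sizeof(counts) / sizeof(counts[0]));   /* number of species */
    double total = 0.0;
    for (int i = 0; i < S; i++) total += counts[i];

    double H = 0.0, D = 0.0;
    for (int i = 0; i < S; i++) {
        double p = counts[i] / total;    /* proportion of the total sample */
        if (p > 0.0) H -= p * log(p);
        D += p * p;
    }
    double E = H / log((double)S);       /* evenness */

    printf("S = %d, H' = %.2f, E = %.2f, D = %.3f\n", S, H, E, D);
    return 0;
}
```

Compiling with the math library (e.g. `cc diversity.c -lm`) and running prints H', E and D for the example counts.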
Species Richness
A total of 3500 individual birds belonging to 49 species, 21 families and 10 orders were recorded from the study area. Among the 10 orders, Ciconiiformes dominated with 14 species, followed by Passeriformes and Anseriformes with 9 species each. The fewest species were recorded in the orders Accipitriformes, Charadriiformes, Columbiformes and Piciformes, with one species each (Table 1). Of the species recorded in the study area, the Wattled Ibis (Bostrychia carunculata) is endemic to Ethiopia and Eritrea (Table 1).
The species composition of birds during the wet and dry seasons was not significantly different (ANOVA, p = 0.23), but there was a significant difference in the abundance of bird species (t = -1.13, P < 0.05). The analysis of migratory status revealed that, out of 49 species, 17 were Palaearctic migrants (34.69%) and 1 was an intra-African migrant (2.04%). The remaining 31 bird species (63.26%) were residents (Figure 2).
According to IUCN status (2018), 46 species were of least concern and one species, the Black-tailed Godwit (Limosa limosa), was near threatened. Two species of unrecognized status were also recorded during the study period (Figure 3).
Species Diversity, Evenness and Dominance
The Shannon-Wiener diversity index showed that the highest bird species diversity (H' = 3.42) was recorded during the wet season, while the lowest avian species diversity was recorded during the dry season. The highest evenness of species distribution was recorded during the wet season (E = 0.89), and the highest dominance index was recorded during the dry season (0.04) (Table 2).
Threats to the Avifauna in Cheleleka Wetland
According to residents, farmers and local communities living in and around the wetland, the main threats to bird species are grazing, urbanization, agricultural expansion, habitat fragmentation, accessibility and resource extraction (Figure 4). Most respondents agreed that overgrazing of the wetland (86%), agricultural expansion (85.6%), human settlement (75.8%), sand extraction (45.4%) and habitat fragmentation were major threats, whereas 39% and 22% of respondents disagreed with the presence of wetland shrinkage and of killing and hunting of bird species, respectively. Interviews with respondents from the surrounding district communities and offices indicated that the greatest threats to bird diversity were grazing (20.29%), wetland degradation and fragmentation (15.71%) and agricultural expansion (14.29%). Settlement, sand extraction, district administration problems, pollution and invasive species also contributed substantially to the threats to bird species (Table 3). Direct field observations likewise revealed many human-induced threats to birds, both direct and indirect (Figure 5). Settlement, agricultural expansion, direct human disturbance through sand extraction, overgrazing by livestock, and habitat fragmentation were the most critical direct threats to the Cheleleka wetland, which in turn affect biodiversity protection in the habitat. Various development activities, such as roads, agriculture and settlements, have also created edges. The destruction is illustrated in Figure 5, which shows areas that have been converted into agricultural fields and new human settlements.
Discussion
Species Diversity
The significant seasonal variation of species diversity in Cheleleka wetland might be due to the seasonal availability of food for different bird species and of nesting sites in the area. Other studies have also shown that seasonal variation in rainfall and food resources leads to seasonal variation in bird diversity [6]. The diversity of bird species is influenced by the structure of the vegetation that forms a major component of their habitats. The lowest abundance and richness of species were recorded during the dry season in Cheleleka wetland. This may be due to human disturbance and livestock grazing in the wetland; sand extraction was also observed during field visits. Overgrazing is associated with decreased vegetation density, which leads to the decline and loss of bird species diversity in the wetland [36]. This affects the number of birds that depend on such habitats. The impacts of habitat loss and grazing on cover, nesting grounds and food availability create a dangerous situation for the survival of the avifauna [18,24].
Threats to Avian Species
Various biodiversity habitats in Ethiopia are exposed to habitat loss and degradation [13]. As the human population grows, agricultural expansion into the wetland increases and additional land adjacent to the wetland habitat is converted to farmland; this puts pressure on the bird species that inhabit it. Agricultural practices near wildlife habitats and rural and urban expansion have led to the decline and modification of habitats, resulting in losses of biodiversity. The results of this study highlight some of the threats to the wetland habitat that directly affect bird species. High demand for natural resources leads to land-use change and hence to loss of genetic diversity, species decline and ecosystem change, including unintended population changes, disease outbreaks and habitat fragmentation, ultimately resulting in biodiversity loss [1].
As the population grows, demand for space and resource consumption increases, with impacts on wildlife ecology [38]. Similarly, bird species in Cheleleka wetland have declined and the wetland habitat is threatened in several ways. Interviews with local communities also identified many threats in Cheleleka wetland.
According to this study, the major threats to bird species in the study area were habitat disturbance through overgrazing, agricultural expansion around the wetland, settlement and sand extraction. This finding is in agreement with [33]. Anthropogenic activities affect ecosystem structure and function, specifically the spatial and temporal distribution of wildlife [32]. This is particularly true for the Cheleleka wetland, which has become increasingly narrow and a point of contact between people and wildlife. Threats to bird species have increased through livestock grazing, settlement and agricultural expansion. These and other activities have resulted in disturbance and reduced species diversity through habitat destruction and high competition for foraging in the area. According to [30], the main problem facing biodiversity areas today is the growth of human settlement on adjacent lands and the illegal harvesting of natural resources within the areas. In Cheleleka wetland, too, settlements are expanding in and around the area, which may threaten the wetland and its bird populations. Habitat fragmentation and overexploitation affect biodiversity sustainability [16], in agreement with the present study. [31] reported that habitat loss is one of the major causes of wildlife loss. Improper disposal of garbage and effluent discharge from the Hawassa Textile Factory into Cheleleka wetland were also observed in the present study area, causing pollution of the habitat. These factors are considered threats to the avifauna in the Cheleleka wetland; thus, strong conservation measures are needed.
Conclusion and Recommendations
The study area comprised resident, endemic, migratory and globally threatened bird species. The presence of a high number of these species suggests that Cheleleka wetland is a key bird conservation habitat. The seasonal variation in avian species and number of individuals in the study area was related to differences in resource availability in the wetland. During the wet season, the highest species richness and abundance were recorded in the study area. Overall, the study area harbours diverse bird species; however, disturbances to the wetland were identified. Overgrazing, human settlement, agricultural expansion, sand extraction and habitat fragmentation were the major threats to avian species. Therefore, conservation measures involving the local community are needed to protect the biological diversity of the wetland habitat.
Conflict of Interests
The author(s) has not declared any conflict of interests.
| 2021-05-10T00:04:03.469Z | 2021-02-01T00:00:00.000 |
{
"year": 2021,
"sha1": "d50c27bc7810915afd84323f9affc793be987814",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.30564/re.v2i4.2621",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ed457d36dde2aa68a8aa38d729227443fccaa61f",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Geography"
]
}
| 250293747 | pes2o/s2orc | v3-fos-license |
Telehealth Encourages Patients with Diabetes in Racial and Ethnic Minority Groups to Return for in-Person Ophthalmic Care During the COVID-19 Pandemic
Purpose The COVID-19 pandemic had a disproportionate impact on patients from racial and/or ethnic minority groups, causing many to delay healthcare. This study evaluates the role telehealth visits played in helping patients with diabetes mellitus (DM) return for subsequent, in-person eye examinations after the outbreak of COVID-19. Methods This retrospective, cross-sectional study analyzed 8147 patients with DM who had completed an outpatient ophthalmology and/or optometry visit in 2019 and who were due for return evaluation after the outbreak of COVID-19 in 2020. Factors associated with return for subsequent, in-person eye examination were assessed. Results The mean age of patients was 68.8 (±13.0) years, and 42% were women. 7.4% of patients identified as Asian; 2.9% as Black; 3.4% as Hispanic or Latin American; 0.92%, as more than one race; 1.78%, as other races; and 80.7% as White. Patients from racial and/or ethnic minority groups completed fewer in-person eye examinations after the outbreak of COVID-19 compared with White patients (35.6% versus 44.5%, χ2=36.172, P<0.001). However, both groups accessed telehealth services at a similar rate during this period (21.1% versus 21.9%, χ2=0.417, P=0.518). Importantly, patients who received telehealth services returned for subsequent, in-person eye examinations at substantially higher rates, regardless of race (51.0% and 46.6%, respectively, χ2=1.840, P=0.175). This offset the otherwise lower rate of return experienced by patients from racial and/or ethnic minority groups compared with White patients among the group of patients who did not receive any telehealth services (32.7% versus 42.7%, χ2=36.582, P<0.001). The impact of telehealth on the likelihood of in-person return remained significant after taking into account age, gender, race, language, residence, severity of diabetic retinopathy (DR), and vision in a multivariate model. Conclusion Telehealth initiatives benefited patients from racial and/or ethnic minority groups by reducing disparities in access to eye care experienced during the COVID-19 pandemic.
Introduction
More than 34 million Americans have diabetes mellitus (DM), comprising 13% of the total adult US population. 1 DM disproportionately affects individuals belonging to racial and/or ethnic minorities in the US, and rates of diabetes-associated complications, including diabetic retinopathy (DR), are growing fastest among members of those groups. [2][3][4][5][6][7] Historically-marginalized populations have also faced barriers in accessing regular diabetic eye care, even when eye examinations are covered by health insurance. [8][9][10][11][12] Importantly, gaps in care can delay the detection and treatment of DR, which are crucial for preventing vision loss and associated disability. [13][14][15][16] The pandemic had a disproportionately greater impact on patients from racial and/or ethnic minority groups, significantly reducing their ability to access eye care. 17,18 Well-equipped offices, especially those associated with academic medical centers, rapidly transitioned to providing eye care through telehealth visits as an important method to care for vulnerable patients after the outbreak of COVID-19. 17,19 These visits protected both patients and providers from the risks associated with exposure to COVID-19, 20 and allowed providers to deliver health education and outreach as a temporizing measure while in-person care was limited by stay-at-home advisories and strict social distancing standards. 17,19 This study evaluates the impact of telehealth visits on the likelihood that patients with DM would return for recommended in-person eye examinations in the first year of the COVID-19 pandemic. In this patient population, gaps in care can delay the detection and treatment of DR, which is crucial for preventing vision loss and associated disability. 8,9,21
Methods
The research followed the tenets of the Declaration of Helsinki and was approved as a quality-improvement initiative by the institutional review board of the Lahey Hospital, Burlington, MA. The requirement for informed consent was waived because of the retrospective nature of the study. Included in the study were those patients with DM who had been seen in the outpatient ophthalmology clinic in 2019 and had not returned for an eye examination in the period after the recognized outbreak of COVID-19, between March 15 and December 31, 2020. 22,23 Telehealth was utilized to deliver eye care to any patient who could not be seen in clinic because of prevailing public health conditions. Responsibility for telehealth visits was assigned to both ophthalmologists and optometrists, who provided coverage on a rotating basis and often without reference to prior relationships with the patients or severity of disease. Services delivered by telehealth, as well as the criteria and timing for patient recall, were at the discretion of the treating provider. These services included a check of symptoms, refilling of any medications, and assurance of future follow-up. Deceased patients were excluded. We extracted from the electronic medical record patient demographics (age, gender, race/ethnicity, and primary language spoken), clinical characteristics (visual acuity, type of DM, severity of DR, and hemoglobin A1c [%]) and ophthalmology appointment data for each patient by means of a customized reporting tool. 24 Type of DM and DR were defined by ICD-10-CM codes, with the stage of DR based on the more severely affected eye: mild non-proliferative diabetic retinopathy (NPDR; E10.32/E11.32), moderate NPDR (E10.33/E11.33), severe NPDR (E10.34/E11.34), and proliferative diabetic retinopathy (PDR; E10.35/E11.35). Distance to the nearest eye clinic was computed by using an Excel VBA program to access Microsoft Maps, which calculated the distance between each patient's home zip code and the clinic zip code.
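As an illustration of the coding scheme described above, the sketch below (C) maps the listed ICD-10-CM codes to DR stage labels; the helper name and any example codes beyond those listed in the text are illustrative only and are not part of the study's reporting tool.

```c
#include <stdio.h>
#include <string.h>

/* Map the ICD-10-CM codes listed in the Methods to a DR stage label.
 * Only the codes named in the text are handled; anything else returns "other/none". */
static const char *dr_stage(const char *icd10) {
    if (!strcmp(icd10, "E10.32") || !strcmp(icd10, "E11.32")) return "mild NPDR";
    if (!strcmp(icd10, "E10.33") || !strcmp(icd10, "E11.33")) return "moderate NPDR";
    if (!strcmp(icd10, "E10.34") || !strcmp(icd10, "E11.34")) return "severe NPDR";
    if (!strcmp(icd10, "E10.35") || !strcmp(icd10, "E11.35")) return "PDR";
    return "other/none";
}

int main(void) {
    const char *examples[] = { "E11.33", "E10.35", "E11.9" };  /* last code is a made-up example */
    for (int i = 0; i < 3; i++)
        printf("%s -> %s\n", examples[i], dr_stage(examples[i]));
    return 0;
}
```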
Statistical Analyses
Visual acuity was converted to the logarithm of the minimum angle of resolution (LogMAR) for analyses. Categorical variables are presented as percentages and compared using the two-sided chi-square test. Data for continuous variables are recorded as mean ± standard deviation (SD) and compared by using the two-sided Student's t-test. Binary logistic regression analyses were used to identify demographic, clinical, and sociomedical factors associated with follow-up. For the logistic regression of multiple variables, we used a generalized linear model to determine the association between the variables included in the model and in-person return. Odds ratios (ORs) and 95% confidence intervals (CIs) were calculated for each variable. The Z-ratio was calculated for the significance of the difference in OR between groups. Stepwise multiple regression was performed to understand which of the identified factors predicted in-person return when combined into a single model. All tests were two-sided, and P-values below 0.05 were considered statistically significant (SPSS® Statistics version 27.0, IBM Corp., Armonk, NY).
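Although the analyses were performed in SPSS, the core calculations, a Pearson chi-square test on a 2x2 table and an odds ratio with a 95% confidence interval, can be sketched as follows (C; the counts are hypothetical placeholders rather than study data):

```c
#include <math.h>
#include <stdio.h>

/* Two-sided test of a 2x2 table via the Pearson chi-square statistic (df = 1),
 * plus an odds ratio with a 95% CI from the log-OR normal approximation.
 * All counts are hypothetical placeholders, not data from this study. */
int main(void) {
    double a = 90.0, b = 85.0;     /* group 1: returned / did not return */
    double c = 260.0, d = 385.0;   /* group 2: returned / did not return */
    double n = a + b + c + d;

    double chi2 = n * pow(a * d - b * c, 2.0) /
                  ((a + b) * (c + d) * (a + c) * (b + d));
    printf("chi-square = %.3f (df = 1; significant at alpha = 0.05 if > 3.841)\n", chi2);

    double odds_ratio = (a * d) / (b * c);
    double se_log_or  = sqrt(1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d);
    double ci_low  = exp(log(odds_ratio) - 1.96 * se_log_or);
    double ci_high = exp(log(odds_ratio) + 1.96 * se_log_or);
    printf("OR = %.2f (95%% CI %.2f to %.2f)\n", odds_ratio, ci_low, ci_high);
    return 0;
}
```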
Results
In 2019, 9977 patients with DM were seen in the eye clinic. Of these, our study examined the 8147 patients who had yet to return for an eye examination by the time the COVID-19 state of emergency was declared on March 15, 2020. [23] The mean age of these individuals was 68.8 years (SD ± 13.0 years) and 42.1% were female. Most patients identified as White, non-Hispanic (80.73%; hereinafter referred to as White); 7.39% identified as Asian; 2.87%, as Black or African American (hereinafter referred to as Black); 3.41%, as Hispanic or Latin American; 0.92%, as more than one race; 1.78%, as other races (including American Indian or Alaska Native, Native Hawaiian, or other Pacific Islander); and 2.90% of patients had unreported information regarding race and/or ethnicity. Additional demographic and sociomedical characteristics are shown in Table 1. In the study cohort, 1739 patients (21.3%) completed a telehealth visit prior to an in-person, follow-up visit to the eye clinic. Most telehealth visits were conducted by telephone, with only 11 encounters recorded as being video-based (0.63%). 55% of telehealth services were provided by ophthalmologists. Importantly, the rate at which White patients completed telehealth (21.9%) was not significantly different from Asian patients (19.9%, χ2=1.21, P=0.271), Black
Factors Associated with Likelihood of Return for in-Person Eye Care
Patients who had completed a telehealth appointment, most of which took place in the first 90 days after the outbreak of COVID-19 (Figure 1), were 25% more likely to return for a subsequent eye examination (50.2% versus 40.3%, χ2=55.332, P<0.001). Interestingly, patients who had a telehealth encounter with a physician (54% of all telehealth encounters) had a greater than 50% higher rate of return compared with those who did not receive telehealth services (64.8% versus 39.4%, χ2=222.85, P<0.001). By contrast, patients who had a telehealth visit with an optometrist returned at a significantly different rate (P=0.003). Notably, the rate of return for the subset of patients who received telehealth in 2020 was similar to the rate at which patients seen in 2018 returned over the same months in 2019, prior to the pandemic (data not shown). Significantly, patients who were less likely to complete an in-person return visit included those who did not speak English as their primary language.
Figure 1: The seven-day average number of confirmed cases recorded by the Department of Public Health is shown (grey line). In-person weekly visits to the eye clinic (blue bars) dramatically decreased after the recognized outbreak of COVID-19 and the declared state of emergency on March 15. The first local peak in COVID-19 cases occurred on April 20, 2020 (2299 cases). This coincided with the largest number of weekly telehealth visits (orange bars). During the month of July, telehealth visits were nearly equal to in-clinic visits (cross-hatched bars). Patients began to return for in-person eye examinations during the summer and fall, when local case counts were declining and businesses, including eye care practices, were allowed to re-open.
Finally, stepwise multiple regression was used to develop a multivariate model of the factors that influenced in-person return among patients with DM. Telehealth, age, gender, race, primary language spoken, residence out-of-state, presence of retinopathy, and vision in the worse- and better-seeing eyes were included in this model as significant stepwise predictors (Table 3). The other demographic and clinical variables included in Table 2 were excluded from our multiple regression model owing to their lack of unique predictive value regarding likelihood of in-person return. Utilizing this model, we examined race and/or ethnicity as a factor that could influence the likelihood of in-person return. On its own, telehealth had a similar impact on the likelihood that patients who were non-White and/or Hispanic would return for in-person eye examination, compared with patients who were White. Analysis of individual racial and/or ethnic groups with such a model is limited by sample size but showed similar results for the impact of telehealth on likelihood of in-person return (data not shown).
Discussion
The sudden outbreak of COVID-19 caused a precipitous decline in return visits among patients with DM, compared with the same period one year prior. This decline was most pronounced for patients from racial and/or ethnic minority groups in our study. By contrast, prior to the emergence of COVID-19, the rate of return was similar for White patients compared with those identifying as other races and/or ethnicities at our medical center. Not surprisingly, COVID-19 acted as a barrier to care, exacerbating historic inequities, 17,18,21,[25][26][27][28] especially given that individuals from racial and/or ethnic minority groups were disproportionately affected by COVID-19 infections. 19 In view of the rapid transition required to make services possible for our patients during the COVID-19 pandemic, the finding that telehealth increased the rate at which diabetic patients returned for in-person eye examinations is encouraging, especially with regard to those who had DR and worse vision. It is also encouraging that telehealth partially offset the steeper decline in the rate of return for patients from racial and/or ethnic minority groups. Although the Affordable Care Act improved access to eye examination among the US working-age population with DM, disparities among racial and/or ethnic groups persist. 29,30 Additional steps are needed to close the gaps. Our study suggests telehealth could be part of a comprehensive strategy to engage patients and help them follow through with recommended eye examinations. 8,9 A recent study at an urban medical center found that during the first year of the COVID-19 pandemic in the United States, historically-marginalized populations were less likely to receive ophthalmic care, including care delivered through telehealth services. 17 Though our study spanned the same period marked by the outbreak of COVID-19 in the state of Massachusetts, 31,32 it differed in that we focused on the outcomes for established patients with DM who had had recent eye examinations. Our practice is also based in a suburban, rather than an urban, setting. Disparities in care typically identified along lines defined by gender, race/ethnicity, and social class are often most visible in urban settings because of additional barriers to accessing care. 33 Finally, the utilization rate of telehealth was higher for the patients in our study, compared with the patients in their report. Most of our telehealth encounters were phone-based, rather than performed by video as in their practice. This might account for the greater uptake among the patients we looked at, many of whom were older or may not have had access to smart-phone or computer-based video services. Whether these factors account for why telehealth was delivered to a larger proportion of total patients and with relative equality across racial and/or ethnic groups cannot be assessed. However, our study shares the critical finding that historically-marginalized populations were far less likely to return for in-person care. This indicates that barriers to accessing care in America continue to cross geographic, social, and economic lines.
During the first year of the COVID-19 pandemic, ophthalmologists and optometrists in our cooperative group practice successfully worked together to deliver nearly equal proportions of eye care, whether by telehealth or in person. However, telehealth was not universally correlated with an increase in in-person return. This suggests that telehealth was not simply delivered to patients who were more engaged in their eye care and therefore more likely to schedule return visits. Our multivariate analysis also excludes disease severity as accounting for this difference, and telehealth visits showed a similar impact for patients in key demographics, such as those with PDR, who had an increased rate of return regardless of provider type. On the other hand, patients who lacked a history of DR or had better vision returned less frequently for in-person care during the period of the COVID-19 public health emergency. Future studies should be performed to determine whether differences exist in the way ophthalmologists, who are physicians, as compared with optometrists, communicate with patients or recommend timing for a return visit. This may be important for helping to standardize the delivery of telehealth. Such a detailed analysis of telehealth encounter documentation is beyond the scope of this project.
Limitations
The limitations of the present study include its retrospective nature and derivation from a suburban population based at an academic medical center. Repeating this study in a population with an even greater share of patients from racial and/or ethnic minority groups is likely to further our understanding of the utility of telehealth. This is especially important because the number of patients from diverse backgrounds continues to increase 34,35 and comprises the fastest growing segments of patients receiving care at our medical center (data not shown). We did not control for other comorbid eye diseases, treatment history (eg, prior intravitreal injections of medications, laser, or surgery), or provider-recommended follow-up interval. Nor did we directly take into account other barriers to care, such as transportation, level of education, or specifics related to socioeconomic or employment status, all of which could influence the ability of patients to follow up. The subset of patients who received telehealth services did so largely during the early part of the COVID-19 outbreak. Changes made to state-specific guidelines, as well as the expansion of coverage for telehealth services for those insured by Medicare, may have affected patient eligibility or likelihood to accept telehealth visits. 33,36 The actions taken by providers were also left to individual clinical judgment and were not based on a standardized set of telehealth guidelines. Our analysis based on data derived from billing records also limits our ability to assess who was offered telehealth but declined, who was unreachable, or who was never scheduled for such services. Finally, the short-term nature of our evaluation, under the very specific conditions of the COVID-19 pandemic, limits our ability to draw conclusions about how telehealth will affect the longitudinal risk of patients failing to return or becoming lost to follow-up. Future directions for this research should ideally include a longer study period and assess DR complications and visual outcomes in patients engaged by telehealth.
Conclusion
Telehealth delivered during the COVID-19 pandemic increased the rate at which patients with DM returned for in-person eye examinations. Patients from historically-marginalized groups, many of whom experienced inequalities in access to eye care exacerbated by the outbreak of COVID-19, showed an even more favorable rate of return after completing telehealth visits compared with White patients. Future studies should seek to determine whether telehealth delivered as a part of ordinary eye care can help close gaps and thereby improve health outcomes for our patients.
Ethical Approval
This case series was conducted in accordance with the Declaration of Helsinki. The collection and evaluation of all protected patient health information was performed in a HIPAA (Health Insurance Portability and Accountability Act)-compliant manner.
Statement of Informed Consent
This study received institutional review board approval. A waiver of informed consent was obtained for the research and publication of this article.
| 2022-07-06T15:06:08.717Z | 2022-07-01T00:00:00.000 |
{
"year": 2022,
"sha1": "c9ffb13daa753179bcc1a27c30211e76c7507e2e",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=81898",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6c43a553fdb905f79adfe6e32ca058c7e416ce28",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
}
| 252089210 | pes2o/s2orc | v3-fos-license |
Write Me and I'll Tell You Secrets -- Write-After-Write Effects On Intel CPUs
There is a long history of side channels in the memory hierarchy of modern CPUs. Especially the cache side channel is widely used in the context of transient execution attacks and covert channels. Therefore, many secure cache architectures have been proposed. Most of these architectures aim to make the construction of eviction sets infeasible by randomizing the address-to-cache mapping. In this paper, we investigate the peculiarities of write instructions in recent CPUs. We identify Write+Write, a new side channel on Intel CPUs that leaks whether two addresses contend for the same cache set. We show how Write+Write can be used for rapid construction of eviction sets on current cache architectures. Moreover, we replicate the Write+Write effect in gem5 and demonstrate on the example of ScatterCache how it can be exploited to efficiently attack state-of-the-art cache randomization schemes. In addition to the Write+Write side channel, we show how Write-After-Write effects can be leveraged to efficiently synchronize covert channel communication across CPU cores. This yields the potential for much more stealthy covert channel communication than before.
Introduction
It is no secret that the microarchitecture of recent CPUs is riddled with side channels that can often be exploited in ways that threaten the security of the whole system. Many of these side channels are the result of obvious and elementary CPU components that pave the way to achieving the performance levels to which we have become so accustomed. Among others, this includes caches and prefetchers. The way these components are intended to work generates timing differences that can be observed by user-level processes. On the other hand, there are more subtle aspects of CPU internals and their implementation that lead to measurable timing differences without being essential for CPU performance or security. This, for example, includes Intel's ring interconnect implementation for last level caches (LLCs) [35] or store-to-load forwarding [43].
Due to the vast performance discrepancy between the CPU core and the memory subsystem, the read and write paths are subject to immense optimization efforts by CPU developers. Each saved or predicted interaction with the memory can save hundreds of clock cycles. Though the performance benefit of such optimizations stands without question, ongoing research has found many ways to exploit them to bypass essential security foundations. Early work in this area demonstrated how cache timing can be used to reconstruct secret keys of AES [2,34]. Over time, these attacks developed towards well-known attack primitives like PRIME+PROBE [25,34,52] and FLUSH+RELOAD [62]. With these primitives, cache attacks evolved to be very efficient, and further CPU components like the TLB moved into focus [12]. In 2018, the disclosure of Meltdown [24] and Spectre [23] shifted the momentum and severity of microarchitectural attacks. The following avalanche of transient execution attacks changed the understanding of the hardware as a trust anchor for secure system development; see generally [8]. The class of transient execution attacks goes beyond control flow speculation, i.e. by the branch predictor. For example, the MDS attacks [7,54] exploit speculative data forwarding of read and write operations.
During transient execution attacks, leaked data is usually recovered via a covert channel: the attacker transmits data from the (speculative) victim context to their own process using timing peculiarities of CPU internals. Covert channels can also be used to communicate between co-located VMs in cloud environments [41]. Due to the simplicity and reliability of cache covert channels, FLUSH+RELOAD, PRIME+PROBE and derivatives [14,39] are commonly used in this context. The bandwidth of such covert channels has been shown to be more than sufficient to transmit large chunks of data [31]. However, synchronization across cores remains an issue and is frequently circumvented by using self-clocking signals [50] with massive oversampling on the receiver end and multiple accesses on the sender side [31,58].
Contributions. In this paper, we present WRITE+WRITE, a new write-based side channel on Intel CPUs that leaks whether two physical addresses collide in a specific range. This side channel is especially worrisome in the face of current developments in cache side channel countermeasures. We replicate the behavior in gem5 [27] and demonstrate an improved attack against state-of-the-art cache randomization on the example of ScatterCache [57]. Our attack requires further design constraints to be considered when implementing randomized caches. Second, we show how WRITE+WRITE affects traditional cache architectures and leverage the side channel for bottom-up construction of cache eviction sets. In doing so, we break the current speed records in eviction set construction. Third, we present a new, write-based technique to synchronize processes across CPU cores. We show how this technique can be applied to establish a common clock signal for covert channels. Using our synchronization approach, each signal only needs to be transmitted once, which greatly reduces the monitoring surface for detection mechanisms.
A version of this paper was sent to Intel for responsible disclosure prior to submission to RAID'22. Proof-of-concept code is available on GitHub 1 .
Organization of this Paper. The following section introduces background on caches, cache side and covert channels, as well as some internals of recent x86 CPUs. In Section 3, we introduce the WRITE+WRITE side channel and the foundation for our synchronization technique. We then present a WRITE+WRITE-based algorithm for rapid eviction set construction in Section 4.1. In Section 4.2, we adapt the algorithm for randomized caches and attack a gem5 implementation of ScatterCache. Third, we demonstrate the Write-After-Write-based cross-core synchronization for covert channel communication in Section 4.3. We discuss mitigation techniques and related work in Section 5 and Section 6, respectively. Finally, we conclude in Section 7.
Background
In this section, we introduce some background on caches, covert channels, and the x86 microarchitecture.
Caches
The speed at which modern processors execute instructions greatly exceeds the speed of read and write operations from and to the memory. Since many programs rely on frequent memory accesses, this would normally cause a large number of stall cycles, waiting for the requested data to be fetched. Hence, apart from deeply embedded devices, virtually all current processors feature at least one level of cache.
Figure 1: Exemplary architecture of a physically indexed, two-way set-associative cache. The index bits of the physical address are used to determine the set (red). The replacement policy chooses which entry is replaced on a miss access.
Caches are small and fast memory modules located in close physical proximity to the CPU. Frequently used data is stored in the cache to accelerate memory operations and hide the latency of the main memory. Most desktop-level processors feature three levels of cache. The L1 cache is the smallest and fastest cache, followed by the slightly larger and slower L2 cache. Both L1 and L2 caches are typically duplicated for each physical CPU core. The last-level cache (LLC) is the largest level of cache and usually shared among cores. A coherency protocol is implemented to keep the data consistent across all caches and the main memory; for details on recent Intel CPUs, see [32]. Furthermore, the LLC is usually inclusive, which means that all entries of the L1 and L2 caches are also stored in the LLC. This brings performance benefits in multi-core systems: if an L2 cache miss occurs, inclusiveness ensures that if the data is cached in any other core's private cache, it is also cached in the LLC. Non-inclusive LLCs need to query other cores' private caches or maintain a directory [61] to make sure that these do not hold a modified copy of the requested data.
Cache Internals & Addressing. Since low latency is a key design goal of caches, it is not practical to search the whole cache on every access. To accelerate the lookup, caches are usually implemented as set-associative structures. Each entry (cache line) holds 64 bytes of data alongside a tag, which is used to uniquely identify the cached address, and some flags, including valid and dirty. As depicted in Figure 1, the physical address is divided into tag, set and offset bits. The offset is used to select a 64-bit word from the cache line to be returned on read access. The set bits select the cache set (corresponding to the table row in each cache way in Figure 1). The remainder of the address (i.e. the tag) is stored alongside the data and, together with the implicitly stored set index, uniquely identifies the physical address.
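To make the decomposition concrete, the following sketch splits a physical address into offset, set index and tag for 64-byte lines; the number of set bits is an illustrative parameter, not a value taken from a specific CPU.

```c
#include <stdint.h>
#include <stdio.h>

#define LINE_BITS 6          /* 64-byte cache lines -> 6 offset bits */
#define SET_BITS  10         /* illustrative: 1024 sets              */

/* Decompose a physical address into offset, set index and tag. */
static void decompose(uint64_t paddr) {
    uint64_t offset = paddr & ((1ULL << LINE_BITS) - 1);
    uint64_t set    = (paddr >> LINE_BITS) & ((1ULL << SET_BITS) - 1);
    uint64_t tag    = paddr >> (LINE_BITS + SET_BITS);
    printf("addr=0x%llx offset=0x%llx set=%llu tag=0x%llx\n",
           (unsigned long long)paddr, (unsigned long long)offset,
           (unsigned long long)set, (unsigned long long)tag);
}

int main(void) {
    decompose(0x1234567890ULL);              /* arbitrary example address */
    decompose(0x1234567890ULL + 64 * 1024);  /* differs only in tag bits, so same set */
    return 0;
}
```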
When a memory address is accessed, a cache lookup occurs: in each cache way, the tag stored at the index determined by the set bits of the address is compared to the tag of the accessed address. If the tags match in one cache way, a hit occurs and the data is returned at the specified offset. If the tags do not match in any cache way, a cache miss occurs and the data is requested from the next device in the memory hierarchy. When the request is served, the replacement policy selects one of the set entries to be replaced with the new data. Often, this policy is (pseudo-)least-recently-used ((P)LRU), which replaces the entry that has not been used for the longest time. Writes are handled analogously, although a distinction is made between write-back and write-through caches. Write-back caches store a modified version of the data until the entry gets evicted from the cache, while write-through caches immediately forward the modification to the memory-side port. Recent LLCs are usually configured as write-back.
In addition to that, many processors use cache slices which can be imagined as load-balanced, parallel instantiations of caches to reduce the workload on each slice and increase the overall bandwidth of the cache. Each physical address is uniquely mapped to a single slice. Recent Intel processors implement complex cache indexing which derives the cache slice by using a recently revealed function that operates on "potentially all" address bits [19]. In [15], this complex addressing function was first reverse engineered manually, followed by [30] which utilizes a generic method based on hardware performance counters to reverse engineer the function for several Intel processors. Both works report a simple xor-based function to obtain the slice for each address.
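As a simplified illustration of such an xor-based mapping (the masks below are invented for demonstration and are not the reverse-engineered functions of any particular CPU), a slice index can be derived by xor-folding selected physical address bits:

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative xor-based slice hash for a 4-slice LLC.
 * The bit masks are made up for demonstration; the real per-model functions
 * reported in the literature use different, model-specific masks. */
static unsigned slice_hash(uint64_t paddr) {
    const uint64_t mask0 = 0x5540ULL;   /* hypothetical: bits 6, 8, 10, 12, 14 */
    const uint64_t mask1 = 0xAA80ULL;   /* hypothetical: bits 7, 9, 11, 13, 15 */
    unsigned b0 = __builtin_parityll(paddr & mask0);   /* xor of selected bits */
    unsigned b1 = __builtin_parityll(paddr & mask1);
    return (b1 << 1) | b0;
}

int main(void) {
    for (uint64_t a = 0; a < 8; a++)
        printf("addr 0x%llx -> slice %u\n",
               (unsigned long long)(a << 6), slice_hash(a << 6));
    return 0;
}
```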
Cache Side Channels. The design goal of caches is to accelerate slow memory accesses -hence, the fact that timing measurements can reveal whether or not some data was cached is conceptually unavoidable. An attacker can measure the latency of a memory access and therefore determine whether the accessed data was cached before the access. This effect has been exploited for numerous attacks including key-recovery on cryptographic schemes [2,11], bypassing ASLR [13,15], covert channels in shared cloud environments [31], and in the context of speculative execution attacks [8]. The latter -most notably by the disclosure of Spectre [23] and Meltdown [24] -hugely amplified the interest and awareness of cache side channels. The two most common attack vectors are FLUSH+RELOAD [62] and PRIME+PROBE [25,34,52]. FLUSH+RELOAD relies on shared memory between the attacker and the victim as well as the clflush instruction. This instruction takes a memory address as a parameter and flushes the corresponding data from all cache levels. If no data was cached for that address, the instruction has no effect. PRIME+PROBE on the other hand does not rely on shared memory or the clflush instruction. Instead, the attack makes use of eviction sets to flush the victim entry from the cache. An eviction set is a set of w addresses that map to the same cache set, where w equals the associativity of the cache. Any entries that are stored in that cache set prior to accessing the eviction set will be replaced by the eviction set. Since the eviction set addresses then occupy all entries of the set, the attacker can trigger the victim process and measure if the victim accessed that set by probing the eviction set addresses for a cache miss. If a cache miss occurs during the probing phase, the attacker learns that the victim accessed the cache set. Since the attacker does not have full control over the physical address, they can only partially control the set bits of the address, namely those that overlap with the page offset of the virtual address. However, the attacker can choose a large initial set of addresses that acts as an eviction set by sheer size, and then reduce this set to a minimal eviction set using algorithms proposed in [46,55].
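A sketch of such a reduction, in the spirit of the group-testing algorithms cited above [46,55], is shown below; `evicts()` is a placeholder for the actual prime-and-time measurement and would need a real timing implementation.

```c
#include <stdlib.h>
#include <string.h>

/* Placeholder for the actual timing test: prime the cache with set[0..n),
 * access the victim, and report whether the victim was evicted (cache miss). */
extern int evicts(void **set, size_t n, void *victim);

/* Reduce a large conflicting set towards w addresses (w = associativity):
 * split into w+1 groups and drop any group whose removal still leaves an
 * evicting set. This is a simplified sketch, not an optimized implementation. */
size_t reduce_eviction_set(void **set, size_t n, void *victim, size_t w) {
    void **scratch = malloc(n * sizeof(void *));
    if (!scratch) return n;
    while (n > w) {
        size_t groups = w + 1;
        int reduced = 0;
        for (size_t g = 0; g < groups; g++) {
            size_t lo = g * n / groups, hi = (g + 1) * n / groups, m = 0;
            for (size_t i = 0; i < n; i++)            /* copy all but group g */
                if (i < lo || i >= hi) scratch[m++] = set[i];
            if (m < n && evicts(scratch, m, victim)) {
                memcpy(set, scratch, m * sizeof(void *));   /* keep the smaller set */
                n = m;
                reduced = 1;
                break;
            }
        }
        if (!reduced) break;   /* no group can be dropped; stop */
    }
    free(scratch);
    return n;   /* size of the (hopefully minimal) eviction set */
}
```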
Cache Covert Channels. There are a large number of possibilities to transmit data from one process to another without an observer noticing. However, the timing behavior of caches is used disproportionately often in the context of microarchitectural attacks and covert channels, since it allows fast and fine-grained transmission. Often, FLUSH+RELOAD [62] is used for covert channels. To this end, the receiver first makes sure that the address shared between sender and receiver is not cached, using clflush. Note that this shared address may be read-only. Next, the sender encodes one bit of the message by either accessing the shared address or not. The receiver then measures the latency of an access to the shared address; only if the access results in a cache hit did the sender access the address. The side channel used is interchangeable with any other side channel, e.g., FLUSH+FLUSH [14] or PRIME+PROBE.
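The receiver side of such a FLUSH+RELOAD channel can be sketched as follows (C with x86 intrinsics; the hit/miss threshold is a hypothetical value that must be calibrated on the target machine, and the local buffer here only stands in for memory actually shared with a sender):

```c
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>   /* _mm_clflush, _mm_mfence, __rdtscp */

#define HIT_THRESHOLD 120   /* cycles; hypothetical, must be calibrated per machine */

static uint8_t shared_page[4096];   /* stand-in for memory shared with the sender */

/* Time one read access to addr in cycles. */
static uint64_t access_time(uint8_t *addr) {
    unsigned aux;
    _mm_mfence();
    uint64_t start = __rdtscp(&aux);
    uint8_t v = *(volatile uint8_t *)addr;   /* the probed read */
    (void)v;
    uint64_t end = __rdtscp(&aux);
    _mm_mfence();
    return end - start;
}

/* Receiver side of one FLUSH+RELOAD symbol: flush, wait for the sender's slot,
 * then reload and decide hit (sender accessed the line -> 1) or miss (-> 0). */
static int receive_bit(uint8_t *addr) {
    _mm_clflush(addr);
    _mm_mfence();
    /* ... wait for the agreed transmission window ... */
    return access_time(addr) < HIT_THRESHOLD;
}

int main(void) {
    uint8_t *line = &shared_page[0];
    _mm_clflush(line);
    _mm_mfence();
    printf("cold access: %llu cycles\n", (unsigned long long)access_time(line));
    printf("warm access: %llu cycles\n", (unsigned long long)access_time(line));
    printf("received bit: %d\n", receive_bit(line));
    return 0;
}
```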
This process requires synchronization between sender and receiver, which is not trivial. Usually, each symbol is repeated for a fixed timeframe and the sender and receiver perform their actions asynchronously. Due to the repetition, the average latency reveals whether a zero or a one was transmitted. To decode the incoming data stream, self-clocking signals like Manchester encoding are often used [50].
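As an illustration of such a self-clocking encoding, the sketch below Manchester-encodes a byte stream into half-bit symbols; the chosen polarity (0 maps to low/high, 1 to high/low) is one of two equivalent conventions, and in a real channel each half-bit would be mapped onto a cache access or its absence.

```c
#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

/* Manchester-encode len bytes: every data bit becomes two half-bit symbols,
 * so each bit contains a transition the receiver can use to recover the clock.
 * Convention here: 0 -> (0,1), 1 -> (1,0). */
static size_t manchester_encode(const uint8_t *data, size_t len, uint8_t *out) {
    size_t n = 0;
    for (size_t i = 0; i < len; i++) {
        for (int b = 7; b >= 0; b--) {
            int bit = (data[i] >> b) & 1;
            out[n++] = bit ? 1 : 0;   /* first half-bit  */
            out[n++] = bit ? 0 : 1;   /* second half-bit */
        }
    }
    return n;   /* number of half-bit symbols = 16 * len */
}

int main(void) {
    uint8_t msg[] = { 0xA5 };
    uint8_t symbols[16];
    size_t n = manchester_encode(msg, sizeof(msg), symbols);
    for (size_t i = 0; i < n; i++) printf("%u", symbols[i]);
    printf("\n");   /* 0xA5 = 10100101 -> 10 01 10 01 01 10 01 10 */
    return 0;
}
```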
Randomized Caches. In an effort to prevent the efficient construction of eviction sets, a variety of randomized cache architectures have been proposed [40,49,57]. These schemes randomize the address-to-cache-set mapping, such that the attacker cannot easily construct eviction sets, even if they have full control over the physical address. One physical address can map to different indices in different cache ways. This allows addresses to partially collide in one cache way but not the others and hence, weakens the properties of eviction sets. It has been shown that finding fully congruent eviction sets is not feasible in reasonable time [38,57], i.e., it is not feasible to obtain sets of addresses that collide with the victim address in every cache way. Purnal et al. generalize the design proposals of randomized caches and present the PRIME+PRUNE+PROBE attack which is a generic attack on randomized caches based on probabilistic eviction sets [38]. Probabilistic eviction sets contain addresses that are known to collide in at least one cache way with the victim address. If the probabilistic eviction set contains enough of such addresses, the attacker has a high probability of occupying all possible entries of the victim address. By changing the randomization function frequently, attacks based on PRIME+PRUNE+PROBE can be prevented, albeit with some performance overhead. More recent proposals [42,51] combine randomization with further measures to prevent PRIME+PRUNE+PROBE attacks by design. Both schemes aim to hide the effects of victim cache accesses by freeing entries in the cache before conflicts occur.
The x86 Microarchitecture
We now discuss some microarchitectural aspects of recent x86 processors, focusing on Intel processors, although the general information holds for AMD processors as well. Since most of the internals of these processors are not public, we rely on prior reverse engineering efforts and the sparse public documentation.
Store Architecture. Every fetched instruction is converted from the visible x86 instruction into one or more µOPs and inserted into the pipeline. Once a write µOP is executed, the write is forwarded to the store buffer (SB). On the Skylake microarchitecture, the SB can hold up to 56 entries [28]. Then, the L1 cache is queried. If the request results in a cache hit and the respective cache line is in modified or exclusive state (i.e. the line is owned by the cache), the data is written into the L1 cache. Otherwise, a request for ownership (RFO) is issued and a line fill buffer (LFB) is allocated to track the outstanding write. On Sandy Bridge processors, there are 10 LFBs [16], although unofficial sources report 12 LFBs for more recent CPU generations. According to the documentation, the SB entry remains active until after the store instruction retires, i.e. the SB entry only retires after the L1 cache line is filled [16].
Serializing Instructions vs. Ordering Instructions. The x86 ISA offers a set of serializing instructions and ordering instructions that can be used to ensure the intended order of instructions and therefore prevent unwanted effects of out-of-order execution and speculation [18, Sec. 8.3]. The ordering instructions are sfence, lfence and mfence, which are accessible from userspace. The store fence (sfence) instruction ensures that all write instructions prior to the fence become globally visible before those after the fence [17, P. 4-599]. The load fence (lfence) does the same for load instructions [17, P. 3-529], and the memory fence (mfence) combines both fences to ensure that all loads and stores before the fence become globally visible before any load or store after the fence [17, P. 4-22].
In contrast to these memory-ordering instructions, serializing instructions enforce that all modifications to the processor state made by instructions preceding the serializing instruction are completed before the next instruction is fetched. This is a very strong form of serialization, since new instructions can only enter the pipeline after all prior tasks are finished. Importantly, serializing instructions also drain the SB of any outstanding write operations before the next instructions are fetched. On Intel processors there are three non-privileged serializing instructions, namely cpuid, iret and rsm [18, Sec. 8.3]. While the latter two perform actions that would cause significant side effects for the following program execution, cpuid only affects the values of the registers eax, ebx, ecx and edx. This makes it a suitable candidate for serializing instructions in any non-privileged program. According to the AMD documentation, on AMD processors the mfence instruction is also a fully serializing instruction [1, P. 206].
Observations on Write-After-Write
In this section we first provide details on the WRITE+WRITE side channel. We give a brief summary of the side channel, reverse engineer the exact collision criteria and reason about the origins of the side-channel leakage. We then take a look at the channel noise, which yields our second observation, namely the clock pattern in the write latency. Finally, we discuss the findings and identify affected CPUs.
WRITE+WRITE Side Channel
The WRITE+WRITE side channel exploits differences in the timing behavior of two write operations based on features of the physical address. In a nutshell, we observe that if a write operation is issued to a given address, a subsequent write to some addresses is slower than a subsequent write to some other addresses. In particular, we found that if the physical address of the first and the second write share some of the lower address bits, the second write will be slower than if they do not share those bits. We reverse engineer the exact bits of the address matching function in Section 3.2. In the following, we refer to addresses that match by this function as colliding addresses.
A minimal proof-of-concept pseudocode is shown in Listing 1. We inserted additional instructions that enforce the execution order during the measurement and export the timestamp of the rdtscp instruction. For now, the goal is to find out whether a candidate address collides with a given target address and therefore causes a slower write access during the measurement. To test this, the target address is first flushed from the cache using the unprivileged clflush instruction of x86. The flushing can be done at any time during the attack as long as it is made sure that the data is not cached when accessed during the measurement. Then, a write operation is issued to the candidate address. The write is followed by a cpuid instruction, which is crucial for the success of WRITE+WRITE. It makes sure that the first write instruction is retired before the timing measurement begins. Note that WRITE+WRITE does not work if an ordering instruction like mfence is used instead. Unlike ordering instructions, cpuid crucially also drains the internal store buffer. In the final step, the latency of a write operation to the uncached target address is measured. The distribution of the measured write latency for an address that is known to collide with the target (solid) and one that is known to not collide with the target (dashed) on an Intel Xeon E-2224G (Coffee Lake) is shown in Figure 2. For this, we repeatedly measured the write latency to the target address with a random candidate address, directly followed by the measurement with a candidate that collides with the target. From the figure it is clear that the distributions can be distinguished easily.
Listing 1: AT&T-syntax pseudo-code assembly for the WRITE+WRITE PoC.
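The assembly body of Listing 1 is not reproduced here; the following C sketch (with GCC-style inline assembly) illustrates the measurement sequence described above. The helper names write_write_probe and cpuid_serialize are ours, and the exact instruction sequence of the original listing may differ.

    #include <stdint.h>
    #include <x86intrin.h>   /* _mm_clflush, __rdtscp */

    static inline void cpuid_serialize(void)
    {
        uint32_t a = 0, b, c = 0, d;
        __asm__ __volatile__("cpuid" : "+a"(a), "=b"(b), "+c"(c), "=d"(d) :: "memory");
    }

    /* Returns the latency (in TSC cycles) of a write to `target` that
     * directly follows a retired write to `candidate`. */
    static uint64_t write_write_probe(volatile uint64_t *target,
                                      volatile uint64_t *candidate)
    {
        uint32_t aux;
        uint64_t start, end;

        _mm_clflush((const void *)target);   /* make sure the target is uncached   */
        *candidate = 0;                       /* first write (to the candidate)     */
        cpuid_serialize();                    /* retire the write, drain the SB     */

        start = __rdtscp(&aux);               /* timestamp before the probed write  */
        *target = 0;                          /* second write (to the target)       */
        cpuid_serialize();                    /* wait until the write has completed */
        end = __rdtscp(&aux);

        return end - start;                   /* higher latency: likely a collision */
    }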
Collision Criteria
We now focus on reverse engineering the criteria under which two addresses collide, and therefore influence the write latency of each other. For this, we again use the Coffee Lake Intel Xeon E-2224G; however, we later verified that the observations hold for all tested Intel CPUs listed in Table 1. For the reverse engineering, we ran tests where we fix the target address and perform WRITE+WRITE on each address of a large array to find those colliding. We gather those addresses that led to an increased latency for further analysis. Using the libtea framework [10], we analyzed several properties of the addresses and found that the physical address of each analyzed address matches the target in the 10-bit range between bit 6 and 15, as shown in Figure 3. We verified this by allocating and testing possible physical addresses that only differ in the bit-range of interest and found that none of these candidates influenced each other. This rules out the possibility that the function combines some parts using a more complex technique (as is, for example, the case for the cache-slice selection). We repeated this process multiple times to account for false positives and the influence of noise.

Figure 3: The bits used for the lower address match are highlighted in blue. The address is divided into the L3 cache addressing parts (offset, set and tag fields; bit positions 0, 6, 15 and 63).
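As a small illustration of the matching criterion described above, the following sketch treats two physical addresses as colliding exactly when they agree in bits 6 to 15; the function name and types are our own.

    #include <stdbool.h>
    #include <stdint.h>

    /* Two physical addresses collide for WRITE+WRITE if they agree in the
     * 10 bits directly above the 64-byte line offset (bits 6..15). */
    static bool ww_collides(uint64_t paddr_a, uint64_t paddr_b)
    {
        const uint64_t mask = ((1ULL << 10) - 1) << 6;   /* 0xFFC0 */
        return (paddr_a & mask) == (paddr_b & mask);
    }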
We further attempted to mount WRITE+WRITE across multiple processor cores and hyperthreads. To this end, we tried a synchronized and a non-synchronized variant. Both variants split the WRITE+WRITE code into two threads, one flushing the target address and measuring the access time to it, the other repeatedly writing to the candidate address. The synchronized variant utilizes mutexes to ensure the correct order of instructions; the non-synchronized variant performs the operations in a loop without synchronization. We did not find clear indications that WRITE+WRITE can be exploited across hyperthreads or CPU cores. We therefore conclude that either the addressing function implements some additional context-awareness (e.g., by matching the id of the originating core), the noise level makes it immensely hard to observe the effect, or the observed hardware structure is not shared among cores / hyperthreads. The load and store buffers are believed to be partitioned in recent CPUs [22]. Hence, it is likely that the hardware that causes the WRITE+WRITE leakage is also partitioned.
The measurable timing difference of WRITE+WRITE is either an artifact of a false dependency check within the CPU core, or a conflicting use of some hardware resource that processes the write instructions. All our tested CPUs use exactly the 10 bits identified with WRITE+WRITE for L2 and L3 cache indexing. Hence, we suspect that the simultaneous write access to the two addresses causes a collision in the set addressing process, which yields the measurable timing difference. To support this hypothesis, we attempted to swap the write instructions for non-temporal writes, i.e., writes that do not affect the cache. After that, WRITE+WRITE no longer works. We believe that the process of set allocation is similar in the L2 and L3 cache. Since the store buffers are partitioned in recent Intel CPUs [22], we suspect that the hardware for allocating the cache sets is also partitioned and that the structural issue that causes the measurable timing delay is to be found within this logic. Since the LLC allows for cross-core attacks, we focus on the implications for LLC set contention in the following. We show how WRITE+WRITE can be used for efficient LLC attacks in Section 4.1.

Figure 4: For each execution, one address that is known to collide with the target (solid) is compared to an address that does not collide (dashed).

Figure 4 shows two non-successive executions of the PoC code. For each of these distributions, the target and the candidate address are repeatedly measured in alternating order. It is clear that while the distribution itself appears to be similar for each run, the ideal threshold that distinguishes colliding addresses from those that do not collide varies drastically. As a result, it is not possible to make a decision based on a single measurement or even a single distribution. However, for measurements that are taken in close succession, like the alternating measurements that make up the colliding and non-colliding distributions, a distinction is simple. Hence, the channel is only stable over short temporal periods. This distinguishes the WRITE+WRITE side channel from many other CPU side channels like FLUSH+RELOAD and PRIME+PROBE, where a threshold can be established which reliably separates the two distributions. The reason for the temporal instability of the channel can be found in the average write latency to any address.
Dealing with Noise
Listing 2: AT&T-syntax pseudo-code to measure the write latency.

    cpuid
    rdtscp
    movq rdx, ([address])
    cpuid
    rdtscp
We measure the write latency using the code in Listing 2 in a loop. The initial cpuid instruction ensures no unfinished write instructions are in the pipeline at the beginning of the measurement. The second cpuid ensures that the measurement is only stopped when the write is completed.

Figure 5: Moving average of the write instruction latency on the Xeon E-2224G.

Figure 5 depicts the moving average of the resulting latency measurement. Surprisingly, the graph resembles a rather sharp clock signal. Later in this paper we show how this can be leveraged for cross-core synchronization. We suspect that the observed behavior is an artifact of a CPU-internal state machine. Although the high and low levels of the signal appear to be stable in the figure, we found that they can slowly change over time, which might be due to the dynamic frequency adaption of the CPU.
To filter the noise and still be able to distinguish colliding addresses, it is therefore required to take a comparative approach. In other words, the results of a measurement are only valuable in comparison to another measurement taken in close succession. Since the addresses collide on 10 bits, the probability of a collision for a randomly chosen address is 2^-10. Hence, by choosing a random address to compare the measurement against, the attacker has a high probability of successfully gathering the addresses that collide with the target. Since some of the bits can even be influenced by the virtual address, the attacker can also make sure that the random address does not collide with the target. While it is sufficient to test multiple iterations of accessing the target address combined with the candidate address, directly followed by multiple iterations of accessing the target address combined with the random address (resulting essentially in Figure 2), we find that a better way of distinguishing the two addresses is to toggle between accessing the target and the candidate address every second iteration. This results in an access pattern of T-T-C-C-... which avoids most of the prefetcher effects that are otherwise present. The measurements are summed for the candidate and the random address respectively, such that afterwards the mean latency of both addresses can be computed. By subtracting the mean latency of the iterations with the random address from the mean latency of the iterations with the candidate address, we can test whether the distributions have a large difference in their mean value and hence conclude whether the candidate collides with the target address. If the two addresses do not collide, then the distribution resulting from WRITE+WRITE with the random address is similar to the distribution of WRITE+WRITE with the candidate address, resulting in a small difference in means. We found that a threshold of 10 clock cycles difference in means after 30 iterations with each address gives a reliable indication of whether the two addresses collide on each tested CPU. The pseudocode is given in Listing 3. It is beneficial to write the code directly in assembly, using conditional moves instead of branch instructions. This prevents unwanted effects from the branch predictor and mis-speculation.

Table 1: List of tested CPUs for Write-After-Write effects.

    CPU                  Architecture    W+W    Clk
    Intel Xeon E-2224G   Coffee Lake     ✓      ✓
    Intel Xeon W-3223    Cascade Lake    ✓      ✓
    Intel i5-8259U       Coffee Lake     ✓      ✓
    Intel i5-8265U       Whiskey Lake    ✓      ✓
    Intel i7-7600U       Kaby Lake       ✓      ✓
    AMD Ryzen5 5600H     Zen3            ✗      ✗
Listing 3: C-flavored pseudo code for same-process testing if two addresses collide with WRITE+WRITE.
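The body of Listing 3 is not reproduced here; the following C sketch illustrates one possible reading of the test described above, with 30 measurements per address and a 10-cycle threshold on the difference in means. The helper ww_probe (the measurement primitive from the PoC) and the exact interleaving are assumptions on our part; the text additionally recommends writing the code in assembly with conditional moves, which this sketch does not reproduce.

    #include <stdint.h>

    #define REPS      30    /* measurements per address (from the text)        */
    #define THRESHOLD 10    /* difference in mean latency, in cycles           */

    /* Assumed primitive: latency of a write to `target` after a write to the
     * second argument, as in the WRITE+WRITE PoC. */
    extern uint64_t ww_probe(volatile uint64_t *target, volatile uint64_t *other);

    static int addresses_collide(volatile uint64_t *target,
                                 volatile uint64_t *candidate,
                                 volatile uint64_t *reference /* known non-colliding */)
    {
        uint64_t sum_cand = 0, sum_ref = 0;

        /* Interleave the two series in pairs so that slow drifts of the
         * baseline write latency affect both of them equally. */
        for (int i = 0; i < 2 * REPS; i++) {
            if ((i >> 1) & 1)
                sum_cand += ww_probe(target, candidate);
            else
                sum_ref += ww_probe(target, reference);
        }

        int64_t diff = (int64_t)(sum_cand / REPS) - (int64_t)(sum_ref / REPS);
        return diff > THRESHOLD;              /* nonzero iff candidate collides */
    }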
Discussion
We tested our implementation of the WRITE+WRITE side channel on various different CPUs, which are listed in Table 1. All tested Intel CPUs showed the behavior described above. We therefore assume that most recent Intel CPUs will be vulnerable to Write-After-Write effects. We adapted WRITE+WRITE to an AMD CPU but were unable to identify similar effects and therefore have no indication that other AMD CPUs are affected. ARM and RISC-V also feature serializing instructions and may therefore show behavior similar to the Write-After-Write clock; this could be the subject of future work. In contrast to other well-known side-channel attacks on modern microprocessors like FLUSH+RELOAD and PRIME+PROBE, the WRITE+WRITE channel relies on a write operation and does not work if an address is only read. Furthermore, it is not possible to mount WRITE+WRITE attacks across process boundaries. Therefore, WRITE+WRITE cannot directly be used to leak data from other processes. However, in combination with the aforementioned side channels, WRITE+WRITE and the clock synchronization can be useful tools in the hands of an attacker. We show how WRITE+WRITE can be used to construct eviction sets for traditional caches (Section 4.1) and on side-channel hardened architectures like ScatterCache [57] (Section 4.2). Finally, we show how the hidden clock signal can be used to synchronize covert channels (Section 4.3).
Exploiting Write-After-Write
In this section we demonstrate how WRITE+WRITE can be exploited to rapidly create eviction sets. We start by attacking traditional caches on real CPUs. Then we use a gem5 implementation of ScatterCache [57] to show how WRITE+WRITE would affect randomized caches. Finally, we use the Write-After-Write clock for cross-core synchronization of covert channels.
Unless stated otherwise, all CPUs run an unmodified version of Ubuntu 20.04. We did not disable any security / performance features or isolate cores. We use the Intel Xeon E-2224G as our main evaluation platform.
WRITE+WRITE for Rapid Cache Attacks
PRIME+PROBE [25,34,52] is one of the most widely used cache attacks. For this attack, the attacker needs to be able to efficiently construct eviction sets, allowing them to reliably observe accesses to the victim address. The state-of-the-art algorithms [46,55] obtain such eviction sets using a top-down approach. They start from a large set of addresses that includes an eviction set by chance and then filter out all addresses that are not required for a minimal eviction set. Using WRITE+WRITE, it is possible to construct eviction sets using a bottom-up approach that iteratively adds addresses to the eviction set without any privileges. As we will show, this approach is much faster than the top-down approach. Moreover, behavioral detection mechanisms can likely be bypassed since the methodology is drastically different from current algorithms and therefore, the fingerprint for detection changes.
In the following, we first define the attacker model, then describe our methodology and evaluate the performance and reliability on various processors.
Attacker Model. The attacker's goal is to create an eviction set for a known target address. We assume that this address is either directly accessible, or that the attacker has access to an address that contends for the same LLC cache set as the target address; i.e. the index bits after the offset of the physical addresses match. If the address is not directly accessible, the attacker can obtain a colliding address by priming the cache and then observing which address is evicted after triggering the victim process (basically one iteration of PRIME+PRUNE+PROBE [38]). We do not require the attacker to know any physical addresses or the mapping of virtual to physical addresses. Furthermore, we do not make use of huge pages, which are not always available. From a microarchitectural perspective, we assume that the CPU is vulnerable to WRITE+WRITE as described in Section 3.
Methodology.
As described in our analysis in Section 3.2, WRITE+WRITE allows the attacker to test whether two virtual addresses map to the same cache set. This does not inherently result in addresses that collide in the LLC since the L2 cache also introduces WRITE+WRITE leakage. Though the L2 cache uses the same index bits on our evaluation CPUs, it is not partitioned into slices. Hence, with WRITE+WRITE, the attacker can identify physical addresses that collide in the cache set index but not necessarily the cache slice. For attacks on the LLC, the attacker needs to sort out addresses that do not map to the target slice. For CPUs with complex cache indexing, an undocumented hash function is used to map the address to a cache slice. Only if both the slice and the set index of two addresses collide can the two addresses potentially evict each other. The slice addressing function has been reverse engineered in [15,20,30]. Our approach does not require any knowledge about this function.
The algorithm to construct eviction sets using WRITE+WRITE is shown in Algorithm 1. To this end, we first allocate a sufficiently large memory area. The algorithm takes as input a pointer to the target address, a pointer to the memory area of size mem_size, and the number of repetitions for each candidate. Since some of the cache set bits can be directly controlled from the virtual address space, we align the lower 12 bits of the first virtual address to be tested with the lower 12 bits of the victim's virtual address. As discussed in Section 3.3, the results are only meaningful when compared to another measurement taken in close succession. Previously we mentioned that the target address can be tested alternating with the candidate address and a random address. In that case, the timing difference is reliably measured if the candidate address collides, and the attacker needs to make sure that the random address maps to a different cache set using some bits of the virtual address. However, we found that for performance reasons, a better approach is to test two candidate addresses (i.e., both 2^12-byte aligned to the target) in parallel and compare their timings instead of using a random address. This way, a timing difference will be observed if one of the addresses collides with the target, but not if none or both do. Since it is relatively unlikely that two successive candidate addresses map to the same cache index, two strategies are possible: The coverage-optimized variant aims to retrieve the most conflicts from a memory range. In that case, the two candidate addresses are base + i + 2^12 and base + i + 2·2^12, and i is increased by 2^12 in each step. This way, each address is tested twice, which reduces the probability of missing a conflict. The performance-optimized version increases i by 2·2^12 in each iteration, which does not detect if both candidate addresses collide with the target but instead doubles the execution speed. In the following, we use the performance variant, as shown in the algorithm.
To reduce the error rate, each pair of addresses is tested multiple times and the results are averaged. If the absolute difference in means of the two candidate addresses is larger than a threshold, one of the candidate addresses collides with the target. The sign of the difference indicates which of the candidate addresses collides. When a collision is observed, the address is added to the preliminary eviction set. The attacker frequently tests if the eviction set is functional by measuring whether it evicts the target address. As long as the eviction set is not functional, the attacker continues searching for WRITE+WRITE collisions. When the preliminary eviction set is functional, the attacker can choose to remove false positives and addresses that map to different slices by removing one address at a time from the set and testing whether the remaining addresses still form an eviction set. This step is not strictly necessary since the initial set is also functional but depending on the use-case, it may be important for the attacker to obtain a minimal eviction set.
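For illustration, a minimal sketch of such a functionality check could look as follows; the helper names and the miss threshold are placeholders rather than the exact values used in our measurements.

    #include <stddef.h>
    #include <stdint.h>
    #include <x86intrin.h>

    #define MISS_THRESHOLD 150   /* cycles; platform dependent, assumed value */

    static uint64_t timed_load(volatile uint8_t *p)
    {
        uint32_t aux;
        uint64_t t0 = __rdtscp(&aux);
        (void)*p;                            /* load the probed address        */
        _mm_lfence();                        /* wait for the load to complete  */
        uint64_t t1 = __rdtscp(&aux);
        return t1 - t0;
    }

    /* Load the target, traverse the preliminary eviction set and check
     * whether the target's reload is a cache miss afterwards. */
    static int evicts_target(volatile uint8_t *target,
                             volatile uint8_t **evset, size_t n)
    {
        (void)*target;
        for (size_t i = 0; i < n; i++)
            (void)*evset[i];
        return timed_load(target) > MISS_THRESHOLD;
    }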
Algorithm 1 WRITE+WRITE-based eviction set construction for traditional caches.
Input: *target, *mem, mem_size, rep
ev ← ∅
start ← align(mem, target)
for i ← start to mem + mem_size step 2·2^12 do
    candidate_0 ← i + 2^12
    candidate_1 ← i + 2·2^12
    sum_0 ← 0; sum_1 ← 0
    for r ← 1 to rep do
        decision ← r mod 2
        if decision == 0 then
            sum_0 += w+w(target, candidate_0)
        else
            sum_1 += w+w(target, candidate_1)
        end if
    end for
    avg_0 ← sum_0 / (rep/2)
    avg_1 ← sum_1 / (rep/2)
    if avg_0 − avg_1 > TH then
        ev ← ev ∪ {candidate_0}
    else if avg_1 − avg_0 > TH then
        ev ← ev ∪ {candidate_1}
    end if
    if test_evset(ev, target) then
        break
    end if
end for
ev ← reduce(ev)    (Optional)
return ev

Performance and Reliability. In the following we evaluate the WRITE+WRITE-based eviction set assembly on different target CPUs. The previously mentioned Xeon W-3223 CPU is the only one in our test sample that implements a non-inclusive LLC. We therefore do not consider the Xeon W-3223 for the eviction set use case. In Figure 6, the confidence level of a WRITE+WRITE-based observation after 2n measurements is depicted. This includes n measurements with the first candidate address that are compared to n measurements with the second candidate address. The True-Positive-Rate (TPR) rises sharply after the first few repetitions and then converges towards 1. The characteristic is similar for all CPUs with the exception of small outliers. These may be due to scheduler interruptions or system noise during the measurement. For the construction of eviction sets it is important that WRITE+WRITE reliably detects colliding addresses. Therefore, we now investigate the coverage, i.e. how many addresses from the memory area collide with the target and how many collisions are found using WRITE+WRITE. We use the libtea framework [10] to compare the addresses that map to the same cache set to those returned by WRITE+WRITE. Each measurement for WRITE+WRITE is repeated 30 times in order to achieve a good TPR. We found that in this configuration, WRITE+WRITE detects about 90% of the colliding addresses. Table 2 shows the performance for eviction set construction on all tested CPUs, including the reduction to a minimal eviction set, and compares it to the currently fastest algorithm for eviction set construction by Song et al. [46]. We found that the optimal performance for WRITE+WRITE-based eviction set construction can be achieved using a tradeoff between WRITE+WRITE repetitions and the amount of false-positive classifications for collisions. As shown in Figure 6, the TPR of a collision classification becomes very high for more than 30 repetitions of WRITE+WRITE. However, our experiments revealed that the runtime of the eviction set construction algorithm is minimal with about 10 to 15 repetitions. This leads to more false positives in the preliminary eviction set, which increases the runtime of the reduction to a minimal eviction set, but reduces the time to probe for conflicts in the first part of the algorithm.
We executed the code by Song et al. on our evaluation CPUs to get a clearer picture of the performance difference. For all tested CPUs, the WRITE+WRITE-based eviction set construction outperforms the previous approach by a factor of three to six. The success rate is also very high throughout all our experiments. Runs that have been classified as failing mostly include only one address that is wrongly classified as a collision. In such a case, the false address could be exchanged for a different colliding address without much additional computing time.
Attacks on Randomized Caches
The search for effective countermeasures to thwart cache side channels has culminated in a number of cache architectures that randomize the cache index using (partial) address encryption [42,49,51,57] to prevent efficient eviction set generation. Cache randomization is generally considered to make attacks more difficult [45], even though attacks like PRIME+PRUNE+PROBE [4,38] are still feasible on pure index-randomization schemes. More recent designs try to counter such attacks with further security mechanisms [42,51]. Hence, we expect some form of index randomization to be adopted by major CPU vendors in the not-too-distant future.
Since the principle of contention is still present in randomized caches, i.e. two addresses can still collide in the cache, it is likely that the implementation of the entry selection remains similar and hence, WRITE+WRITE-like leakage may still exist.
Methodology.
To demonstrate the threats of WRITE+WRITE leakage in the randomized cache setting, we implement the WRITE+WRITE behavior in the CPU simulator gem5 [27]. On traditional caches, WRITE+WRITE causes an increased write latency when two addresses map to the same cache index. In randomized caches, addresses may have an index collision in one cache way, but not in the others. In our implementation, the increased latency occurs when two successive write operations are issued and the address of the second write can map to the same index in the cache way in which the first write is stored. This follows our assumption that WRITE+WRITE is caused by a conflict in the simultaneous set allocation of two write operations. The intention is that the first write is assigned to a set, and then the second write needs to be placed. If the second write can map to the entry in which the first one was placed, the replacement policy needs to wait until the first write has updated the replacement data. We chose the conservative approach; the alternative would be that a timing difference is measurable whenever the two addresses can collide in any cache way. The effect on the security of randomized caches would be the same; however, the latter approach would accelerate the attack even further.
For the randomized cache, we implement ScatterCache [57] with a random replacement policy in gem5. Thus, each address is randomized in every cache way individually, yielding w independent cache indices in a w-way cache. The attacker model is similar to the one in the non-randomized setting, although the aim is no longer to construct a minimal eviction set but instead a probabilistic one; see [38]. This is because constructing a minimal eviction set would require the attacker to find w fully congruent addresses, which has been shown to be infeasible [57].
The algorithm to construct a probabilistic eviction set using WRITE+WRITE is similar to the algorithm for the non-randomized cache and is shown in Algorithm 2. The main difference is that the candidate addresses are no longer aligned to the target address since the attacker cannot influence the lower bits used for set selection from the virtual address space. This increases the search space for colliding addresses significantly. Each cache line holds 64 bytes of data; hence, the attacker needs to probe for colliding addresses with a 64-byte stride. Smaller offsets would result in addresses that map to the same cache line, while larger offsets would skip potentially conflicting addresses. A further difference is that the attacker can no longer deterministically test if the eviction set is complete. The test needs to be conducted multiple times, and the attacker needs to determine whether the eviction set evicts the target with the expected probability p_e. The optional reduction to a minimal eviction set also differs from the traditional algorithm. Instead of removing one address at a time and probing if the eviction set is still functional, the attacker can prime the target multiple times and remove those addresses that are never evicted by the target.
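For illustration, a minimal sketch of such a probabilistic completeness check could look as follows; the helper names, the number of trials and the comparison against p_e are placeholders rather than the exact procedure used in our implementation.

    #include <stddef.h>
    #include <stdint.h>

    #define TRIALS 1000   /* assumed number of repetitions */

    /* One prime-and-probe round, as in the non-randomized case. */
    extern int evicts_target(volatile uint8_t *target,
                             volatile uint8_t **evset, size_t n);

    /* Estimate how often the probabilistic eviction set evicts the target;
     * the caller compares the result against the desired probability p_e. */
    static double evset_eviction_ratio(volatile uint8_t *target,
                                       volatile uint8_t **evset, size_t n)
    {
        unsigned hits = 0;
        for (int i = 0; i < TRIALS; i++)
            hits += evicts_target(target, evset, n) ? 1 : 0;
        return (double)hits / TRIALS;
    }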
Algorithm 2 WRITE+WRITE-based eviction set construction for randomized caches.

Input: *target, *mem, mem_size, rep
ev ← ∅
for i ← 0 to mem_size step 2·64 do
    candidate_0 ← mem + i
    candidate_1 ← mem + i + 64
    sum_0 ← 0; sum_1 ← 0
    for r ← 1 to rep do
        decision ← r mod 2
        if decision == 0 then
            sum_0 += w+w(target, candidate_0)
        else
            sum_1 += w+w(target, candidate_1)
        end if
    end for
    avg_0 ← sum_0 / (rep/2)
    avg_1 ← sum_1 / (rep/2)
    if avg_0 − avg_1 > TH then
        ev ← ev ∪ {candidate_0}
    else if avg_1 − avg_0 > TH then
        ev ← ev ∪ {candidate_1}
    end if
    if test_evset_ratio(ev) ≈ p_e then
        break
    end if
end for
ev ← reduce(ev)    (Optional)
return ev

Performance Evaluation. We configured gem5 to use the O3 CPU model equipped with a small 64 kB, 2-way associative L1 cache and a larger 1 MB, 8-way associative L2 cache. Both cache levels use our ScatterCache implementation. The randomization function is a round-reduced version of PRINCE [3] with equal keys in the L1 and L2 cache. This way, the generalized eviction set evicts entries from both levels of cache. In practice, this reduces the complexity of cache randomization, since the address encryption only needs to be performed once for cache lookups in any cache level.
We implement our attack and execute it in gem5's syscall emulation (SE) mode. The SE mode executes the binaries in an isolated environment without any operating system or parallel processes. Calls to OS functions are handled by gem5 directly. Hence, in our setup there is no noise or disturbance from scheduler decisions or parallel processes. The numbers reported in the following therefore represent the best case for an attacker.
Constructing a probabilistic eviction set with p_e = 90% for an 8-way cache requires about 143 addresses that collide with the target in at least one cache way, as shown in [38]. We executed the attack 10 times and the average runtime was 267 ms. Furthermore, we verified the correctness of the constructed eviction set by probing whether it evicts the target with the expected probability.
To compare our attack to the established PRIME+PRUNE+PROBE attack [38], we also implement this attack in the same setup. PRIME+PRUNE+PROBE repeatedly fills large parts of the cache and accesses the prime-set until there are no more evictions within this set. Then, the target address is accessed, which with some probability evicts one of the attacker-controlled addresses. If an eviction is observed, the address is added to the generalized eviction set. We found that priming 50% of the cache leads to a reasonable probability of observing an eviction while keeping the amount of conflicts in the prune phase low. Using the same settings, we executed the attack 10 times and the average runtime was 2.324 s. Hence, the WRITE+WRITE-based attack outperforms PRIME+PRUNE+PROBE by a factor of 10 in this cache configuration. In practice, priming and pruning a large part of the cache is difficult in the presence of noise induced by parallel processes. Any such process may cause an eviction of the pruned set, which leads to a false-positive observation. WRITE+WRITE is less affected by such noise since only two addresses are used to perform the measurement. Therefore, we expect that our attack would perform even better in real-world scenarios compared to PRIME+PRUNE+PROBE.
Cross-Core Synchronization
If an attacker wants to communicate over a covert channel, they must make sure that the sender and receiver processes are properly synchronized. This is not trivial since there is no direct way for the sender to communicate to the receiver that the next symbol starts and, in many scenarios, there is no common clock (e.g. when using virtualization). In prior work this problem is often avoided by using edge detection during post-processing of the received signal [21] or by using self-clocking encodings such as the phase / Manchester encoding [31,50,58]. However, the former suffers from noise being classified as edges and limitations on how short a single symbol can be, while the latter reduces the channel entropy since two bits are needed to transmit one bit of information. Moreover, if, for example, the FLUSH+RELOAD channel is to be used for transmission, the sender and receiver have no way of coordinating the flush/reload and access steps. In practice, the sender and receiver perform the flush and the access in parallel, which on average leads to the low reload latency on the receiver end. However, this approach is not very stealthy. If the sender and receiver were to share a precise clock source, they could coordinate the flush and access in a way that each probe by the receiver yields precisely one bit of the message. This is exactly what the Write-After-Write clock achieves.
In the following, we demonstrate how changes in the average write latency can be leveraged for cross-core synchronization of multiple processes. We show how covert channels can synchronize a sender and receiver process by observing the average write latency to an arbitrary address. Importantly, this address does not need to be shared between the sender and receiver process. Due to the synchronization, the sender only needs to send each symbol once, and the receiver measures the access time only once to read the symbol. This makes our approach much more stealthy compared to previous techniques.
Attacker Model. We assume that the attacker controls two processes on the target device. The goal is to transmit a message from one process to the other over a covert channel. The sender and the receiver process execute in parallel but not necessarily on the same physical CPU core. For covert channels that require shared memory (i.e. FLUSH+RELOAD [62] and FLUSH+FLUSH [14]), we assume that both processes can access a shared memory resource. For PRIME+PROBE, this is not required. Furthermore, we assume that both parties have access to a precise timer. Should rdtsc not be available, such timers can easily be constructed as shown by Schwarz et al. [44].
Methodology. As shown earlier (cf. Figure 5), the average write latency to a given address periodically switches between a high and a low state. The resulting signal already resembles a very sharp clock signal. We verified that this change in the write latency is synchronized across multiple CPU cores. The raw measurement is shown in Figure 7. While the average latency is reasonably stable, the latency of single write instructions can vary massively, making a trivial classification into the low or the high state infeasible. Therefore, we compute a running average of the write latency and then perform edge detection on that data. To generate the clock signal, we hence instantiate a loop that constantly measures the write latency to a given address. It does not matter whether the address is cached, as long as it is either always cached or always uncached. A ring buffer is used to compute a moving average. Moreover, we implement a second ring buffer that stores the delta between the measured time and the current moving average. This way, the average of the second ring buffer is close to zero if the mean write latency is stable, but if the average write latency changes abruptly, the mean over the second ring buffer will peak briefly. Using this technique, a simple threshold value is sufficient to detect changes in the write latency and therefore generate the clock signal. If the average change in write latency is above a threshold (e.g., 15) and the current clock is low, the signal changes to high, and vice versa. As depicted in Figure 7, the synchronization is highly accurate and even in the selected small time frame, no visible error can be observed.

We introduce two metrics to evaluate the accuracy of the received clock signal. The cycle-to-cycle jitter measures the mean difference of the clock periods of two successive cycles. It is calculated as J_cc = mean(|T_j − T_{j+1}|) ∀ j. To quantify the measurement error of two processes observing the Write-After-Write clock, we define the synchronization error as the mean difference between the detection of a clock edge by the two processes. It is calculated as S_cc = mean(|T_j^(1) − T_j^(2)|) ∀ j, where T_j^(1) and T_j^(2) denote the times at which the two processes detect the j-th clock edge.

Until now, the clock period is fixed by the characteristics of the write latency. However, if the synchronization error is small, the sender and receiver can split each clock period into smaller chunks and hence increase the bandwidth. Therefore, both the sender and receiver need to keep track of the average clock period. The edges of the Write-After-Write clock serve as synchronization marks from which both the sender and receiver separate the expected period into n timeframes, each of which is used to transmit one bit of the message. In the following, we use the FLUSH+RELOAD covert channel to transmit a message from the sender to the receiver process. If a '1'-bit is to be transmitted, the sender will access the shared memory address on the rising edge of the Write-After-Write clock. If a '0'-bit is transmitted, the sender does not access that address. The receiver then measures the access (read) latency to the shared address on each falling clock edge and flushes the shared address afterwards to prepare for the next symbol. On average, transmitting one bit of message therefore only requires 0.5 memory accesses on the sender side, and a single memory access and a cache flush instruction on the receiver side.
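For illustration, a minimal sketch of the two-ring-buffer clock recovery described above could look as follows. The window size, the helper name and the mapping of rising/falling latency changes to the high/low clock states are placeholder choices; the edge threshold of 15 cycles follows the example in the text.

    #include <stddef.h>
    #include <stdint.h>

    #define WIN      64    /* ring buffer length (assumed)               */
    #define EDGE_THR 15    /* mean change in latency that marks an edge  */

    /* Assumed primitive: time one write to a fixed address, as in Listing 2. */
    extern uint64_t measure_write_latency(void);

    static int clock_state;            /* 0 = low, 1 = high */

    static int sample_clock(void)
    {
        static uint64_t lat[WIN];      /* samples for the moving average      */
        static int64_t  dlt[WIN];      /* sample minus current moving average */
        static size_t   pos;

        uint64_t t = measure_write_latency();

        uint64_t avg = 0;
        for (size_t i = 0; i < WIN; i++) avg += lat[i];
        avg /= WIN;

        lat[pos] = t;
        dlt[pos] = (int64_t)t - (int64_t)avg;
        pos = (pos + 1) % WIN;

        int64_t davg = 0;
        for (size_t i = 0; i < WIN; i++) davg += dlt[i];
        davg /= WIN;                   /* near zero while the latency is stable */

        if (davg > EDGE_THR && clock_state == 0) clock_state = 1;   /* rising  */
        if (davg < -EDGE_THR && clock_state == 1) clock_state = 0;  /* falling */
        return clock_state;
    }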
Table 3: Characteristics of the Write-After-Write clock on the tested CPUs (only the last two rows are recoverable: 4.587x10^9, 10.03%, 0.02%; i7-7600U, 4.157x10^9, 5.1%, 0.05%).

Performance and Reliability. We first measure some characteristics of the Write-After-Write clock on our target CPUs. To this end, we execute two processes on each CPU that both measure the write latency. The results are shown in Table 3. The measured clock period is similar in all our measurements. For the two Xeon processors and the i5-8259U, the jitter is low, indicating a very stable clock period. However, on the i5-8265U and the i7-7600U, the jitter is much higher. On these CPUs, we experienced a large number of outliers during the measurement, which reduces the accuracy of the observed clock signal. However, the synchronization error is very low on all tested CPUs, i.e. both processes are equally affected by the noise and hence, synchronization is still provided. We now use the Write-After-Write clock to synchronize a covert channel communication using FLUSH+RELOAD on the Xeon E-2224G CPU. Both the sender and receiver process outsource the clock generation to a separate thread. This way, we get the highest possible sampling rate of the write latency, which improves the accuracy of the retrieved clock signal. We furthermore schedule each process (sender, receiver and two clock threads) on different CPU cores to avoid heavy noise disturbance. During a transmission, two kinds of errors may occur: We classify a flipped bit as a transmission error and a missing or added bit as a clock error. Transmission errors occur if the covert channel is noisy or the threshold is not optimally chosen. Clock errors are an artifact of failed clock synchronization. This may happen if one of the processes gets descheduled by the scheduler and therefore misses a clock edge, or if the clock signal is disturbed by system noise. In our implementation, clock errors also occur if the receiver process is stopped too late or too early, which might lead to additional or missing symbols. Since such errors are easily detected in the final message, we do not count them into the error rates.
To compare the sent and the received data and to classify the errors, we use the Needleman-Wunsch algorithm [33]. The algorithm originates in bioinformatics and can be used to identify matches, mismatches and gaps in two input vectors. Mismatches correspond to transmission errors and gaps correspond to synchronization errors during the transmission. We configure the match and mismatch scores to 1 and -1, respectively. We set the score for gaps to -8 to prevent false classifications as gaps. We do not implement any error correction during the transmission, which could reduce the amount of gaps and mismatches. Figure 8 shows the average transmission and clock error rate in percent. We therefore compute the average over eight transmissions of 1 kB of data over the covert channel using the write-synchronization method. With the exception of one outlier at n ≈ 75,000, the error rates are very low. The transmission error rate, which is purely influenced by the accuracy of FLUSH+RELOAD, is at about 1%, while the clock error rate is significantly smaller, between 0.2% and 0.8%. The outlier may be explained by a scheduled task that interrupted one of the processes. Since the Xeon E-2224G CPU has only four cores, any additional process directly competes for CPU time. We achieved a maximum transmission rate of up to 2 kB/s. Since the error rates are still low at this rate, we suspect that the bottleneck of the transmission speed lies in the communication between the clock generating thread and the sender / receiver. We also tested the code on an AMD Ryzen 5 5600H but did not observe the same clock-like characteristic in the write latency.
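For illustration, a compact sketch of the Needleman-Wunsch score computation with the scores given above (match 1, mismatch -1, gap -8) could look as follows; only the score matrix is filled, and the traceback that actually classifies mismatches and gaps is omitted.

    #include <stdlib.h>
    #include <string.h>

    #define MATCH     1
    #define MISMATCH -1
    #define GAP      -8

    static int max3(int a, int b, int c)
    {
        int m = a > b ? a : b;
        return m > c ? m : c;
    }

    /* Global alignment score of the sent and received bit streams,
     * computed with a two-row dynamic-programming table. */
    static int nw_score(const char *sent, size_t n, const char *recv, size_t m)
    {
        int *prev = malloc((m + 1) * sizeof *prev);
        int *curr = malloc((m + 1) * sizeof *curr);

        for (size_t j = 0; j <= m; j++) prev[j] = (int)j * GAP;

        for (size_t i = 1; i <= n; i++) {
            curr[0] = (int)i * GAP;
            for (size_t j = 1; j <= m; j++) {
                int s = (sent[i - 1] == recv[j - 1]) ? MATCH : MISMATCH;
                curr[j] = max3(prev[j - 1] + s,     /* match / mismatch */
                               prev[j] + GAP,       /* gap in received  */
                               curr[j - 1] + GAP);  /* gap in sent      */
            }
            memcpy(prev, curr, (m + 1) * sizeof *prev);
        }

        int score = prev[m];
        free(prev);
        free(curr);
        return score;
    }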
Mitigation
The mitigation of the Write-After-Write effects requires modifications at the microarchitectural level by re-designing the aspects of the CPU that lead to the behavior exploited in this work. This, however, requires significant changes to the design which cannot be applied to currently deployed CPUs. A less intrusive way to prevent Write-After-Write effects would be a change in the behavior of the cpuid instruction. There is no apparent reason why the instruction should be both unprivileged and serializing. The fencing instructions ([s,l,m]fence) are well defined and sufficient to prevent speculative execution attacks, which is a valid use case for many programs. To the best of our knowledge, beyond that, there is no further use case for an unprivileged serializing instruction.
In line with previous research on cache side channels, making the clflush instruction privileged would render WRITE+WRITE infeasible in unprivileged environments. However, this does not affect the clock synchronization based on the average write latency.
Related Work
In this section, we briefly summarize related work.
Side Channels. There is a vast variety of different side channels in modern CPUs. In the following, we focus on memory-related side channels in desktop- and server-grade CPUs; for recent surveys, see [26,47,48]. Caches are the most commonly used source of leakage. Early timing-based attacks could be used to recover cryptographic keys, among others those of AES [2], DES [53], and RSA [6], by measuring the overall runtime of the program. Eviction-based cache side channels increase the attack resolution since they allow the attacker to trace single cache accesses by the victim. EVICT+TIME [34] compares the execution time of a program before and after some cache entries were evicted to reconstruct an AES key. FLUSH+RELOAD [62] flushes a shared cache line and measures whether it will be reloaded by the victim. FLUSH+FLUSH [14] operates similarly but instead of measuring the reload latency, it measures the latency of a second clflush instruction. These attacks require shared memory between the attacker and the victim. PRIME+PROBE [34,52] instead uses eviction sets to evict entries from the cache. RELOAD+REFRESH [5] is a variant of PRIME+PROBE that reduces the amount of cache misses and exploits the replacement policy of caches. Both attacks rely on eviction sets [25] that reliably evict a target entry from the cache. An algorithm for finding such eviction sets has been presented in [55] and improved in [46]. Several randomization-based cache designs have been presented to prevent the construction of eviction sets [42,49,51,56,57]. The PRIME+PRUNE+PROBE attack [4,38] targets randomized caches and constructs generalized eviction sets, albeit much less efficiently than on regular caches. Other side channels in the memory hierarchy have been discovered, most notably on TLBs [9,12], DRAM [37], and the on-chip ring interconnect [35]. The group of MDS attacks [7,54] exploits speculative behavior in Intel's store buffers.
Covert Channels. Cross-VM covert channel communications have been studied in real-world environments on AWS systems in [41,60]. Wu et al. use a memory-bus-based covert channel in [58], and Xiao et al. exploit memory deduplication for covert communication [59]. Cache-based covert channels have been presented in [14,29,36,60]. In [31] it has been shown that cache covert channels can even be used to establish SSH connections between the communication partners.
Conclusion
We investigated the microarchitectural peculiarities of write instructions on recent Intel processors. We discovered WRITE+WRITE, a new side channel that leaks set contention in the cache architecture. We used WRITE+WRITE for the bottom-up construction of cache eviction sets and, in doing so, broke current speed records for eviction set construction. Furthermore, we demonstrated that attacks on randomized caches can be accelerated significantly if WRITE+WRITE leakage is present. To this end, we implemented ScatterCache in gem5 and benchmarked our attack against the recent PRIME+PRUNE+PROBE attack. We found that the WRITE+WRITE-based attack outperforms current attacks by a factor of 10 and expect an even larger advantage in real-world implementations. This is because the WRITE+WRITE algorithm for eviction set construction is much less susceptible to noise from parallel processes.
Moreover, we developed a new approach to synchronize processes across CPU cores. The clock-like nature of the noise in the write latency allows for accurate synchronization and therefore more stealthy covert channel transmissions.
Age-related differences in brain activity underlying identification of emotional expressions in faces
Michelle L. Keightley, Kimberly S. Chiew, Gordon Winocur, and Cheryl L. Grady Department of Occupational Science and Occupational Therapy, University of Toronto, Toronto Rehabilitation Institute, Rotman Research Institute, Baycrest Centre for Geriatric Care, Department of Psychology, Department of Psychiatry, University of Toronto, Toronto, and Department of Psychology, Trent University, Peterborough, Ontario, Canada
Social cognition has been defined as the ability to interpret and predict others' behavior in terms of their beliefs and intentions, and to interact in complex social environments and relationships (Baron-Cohen et al., 2000). The ability to understand and respond to the emotional content and cues present in the environment and to remember emotional information are integral parts of social cognition (Grady and Keightley, 2002; Adolphs, 2003). The amygdala is thought to be a critical component of a network of regions involved in social cognition, particularly for the processing of emotions in faces (Gobbini and Haxby, 2007). Lesions to the amygdala disrupt this ability (Adolphs et al., 1994, 1999; Anderson and Phelps, 2000). Consistent with lesion work, functional neuroimaging studies in young adults have found amygdala activation when negative face expressions are viewed, particularly fear (Breiter et al., 1996; Morris et al., 1996; Whalen et al., 1998b; Blair et al., 1999; Pessoa et al., 2002; Anderson et al., 2003). Increased amygdala activity also has been observed in response to positive emotional stimuli, although less consistently (Hamann et al., 2002; Winston et al., 2003; Yang et al., 2003; Zald, 2003). More recently, a patient with bilateral amygdala damage (S.M.) who showed impaired ability to detect fear demonstrated normal performance once she was directed to look at the eyes when judging the facial expression. This latter finding suggests a more general role for the amygdala in directing attention to social and emotional cues (Vuilleumier, 2005).
The cognitive appraisal of visually complex emotional stimuli also is a critical component of social cognition, and is thought to be mediated in part by occipital cortex (Kosslyn et al., 1996; Reiman et al., 1997; Morris et al., 1998; Sprengelmeyer et al., 1998; Taylor et al., 1998; Lane et al., 1999; Paradiso et al., 1999; Phan et al., 2002). Prefrontal cortex also appears to play a general role in attention to emotion and emotional appraisal (Drevets and Raichle, 1998; Ochsner et al., 2002; Cunningham et al., 2004) and is often active during tasks of emotional processing (Lane et al., 1997a, b, c; Reiman et al., 1997). Emotional tasks also involve the anterior cingulate, particularly when some cognitive component, in addition to emotion perception, is added to the task (e.g. gender identification or recognition of emotional stimuli; Taylor et al., 1998; Whalen et al., 1998a; Bush et al., 2000; Keightley et al., 2003). It has been suggested (Phan et al., 2002) that the anterior cingulate and medial prefrontal cortex, together with their extensive connections to subcortical limbic structures, may represent an interaction zone between affect and cognition.
The effect of aging on social cognition has received considerable interest in recent years, particularly the effect of age on perceiving emotions in faces. In one early study, McDowell et al. (1994) found that older adults identified happy expressions as accurately as younger adults, but they were less accurate at identifying negative and neutral expressions. Similar results have since been reported by a number of other investigators, with an age reduction in labeling negative expressions and an age preservation in labeling happy expressions the most consistent findings (Oscar-Berman et al., 1990; McDowell et al., 1994; Brosgole and Weisman, 1995; MacPherson et al., 2002; Phillips et al., 2002; Calder et al., 2003; Keightley et al., 2006). In addition, older adults show a decreased ability to detect threat from faces, compared to young adults (Ruffman and Edge, 2006). Moreover, the age reduction in labeling negative expressions is independent of general age-related cognitive changes in processing speed, basic face processing abilities, and reasoning about non-face stimuli (Sullivan and Ruffman, 2004; Keightley et al., 2006).
Consistent with the behavioral differences, initial neuroimaging studies have demonstrated age-related alterations of activity in the amygdala and other emotion-related areas. Reduced amygdala activity in older adults when viewing negative faces, as well as reduced activity in occipital and parietal regions when viewing positive faces, compared to young adults, have been reported (Iidaka et al., 2002; Gunning-Dixon et al., 2003; Fischer et al., 2005). An age reduction in amygdala activity has been reported for negative pictures, as well (Mather et al., 2004). On the other hand, older adults have more activity in medial and lateral prefrontal cortex when viewing emotional faces (Gunning-Dixon et al., 2003), particularly negative ones (Tessitore et al., 2005). This increased prefrontal activity is interesting in light of similar findings of increased prefrontal activity in elderly individuals during non-emotional tasks (Grady and Craik, 2000; Cabeza, 2002). Given the presumed roles for prefrontal cortex in emotion processing mentioned above, these data suggest that older adults rely more on cognitive appraisal of emotional faces than do younger adults.
To the best of our knowledge, no imaging experiment has assessed the ability of older and younger adults to label a broad range of emotional expressions. Thus, the purpose of the current study was to explore neural activity associated with the perception and labeling of multiple facial expressions in young and old adults, so that we could examine the neural processes associated with older adults' preservation for labeling happy expressions, as well as those underlying reductions in labeling negative expressions. In addition, we used an analytic approach that emphasizes whole-brain patterns of activity, rather than focusing on individual brain regions. As happy faces are identified with high accuracy regardless of age and cultural background (Biehl et al., 1997), and are recognized more rapidly than negative expressions (Kirita and Endo, 1995), we expected to see patterns of activity unique to happy expressions in both young and older adults. In addition, based on previous neuroimaging data, we expected older adults to show different patterns of brain activity when labeling emotional expressions, involving reduced amygdala activity and greater prefrontal and anterior cingulate activity, particularly for negative faces.
METHODS
Participants in this experiment were 10 young adults (five men, five women) and 11 older adults (six women, five men; Table 1). All participants were Caucasian, except for two of the younger adults who were Asian. Participants were right-handed, with the exception of one young adult who was left-handed, and all gave informed consent in accordance with the ethics committees of Baycrest and Sunnybrook Health Sciences Centre. Participants were screened to rule out a history of psychiatric, neurological or other medical illness that might compromise cognitive function, or a history of substance abuse. We also assessed personality using the NEO Five Factor Inventory (Costa and McCrae, 1997) and emotional awareness using the 20-item Toronto Alexithymia Scale (TAS-20, Bagby et al., 1994). Alexithymia is a personality construct that includes difficulty identifying and describing feelings and difficulty distinguishing between feelings and the bodily sensations of emotional arousal (Parker et al., 1999). Mood was assessed using the Positive and Negative Affect Schedule (PANAS, Watson et al., 1988) and mental status with the Mini Mental Status Examination (Folstein et al., 1975). Younger adults had slightly more education than the older adults (Table 1); scores on the other measures were all within normal limits.
Stimuli
Faces with positive, negative and neutral expressions were taken from the Japanese and Caucasian Facial Expressions of Emotion (JACFEE) and Neutral Faces (JACNeuF, Biehl et al., 1997), a stimulus set that has been extensively normed in younger adults. The JACFEE contains 56 photographs, including eight photos each of anger, contempt, disgust, fear, happiness, sadness and surprise. For each emotion, the eight photos include four individuals of Japanese descent and four Caucasians, as well as equal numbers of men and women. Each of the individuals in the JACFEE contributes a neutral expression in the JACNeuF, for a total of 56 neutral faces. Thus, each individual posing a neutral expression was also viewed portraying one of the seven emotions. These faces were used in an 8-alternative forced-choice labeling task similar to those used previously to assess face emotion recognition in healthy individuals and patients with amygdala damage (Young et al., 1995; Adolphs et al., 1996; Calder et al., 2003). Participants viewed faces one at a time and were instructed to assign an emotional label to each face. Faces were presented for 6 s in a random order, and interspersed with null events (fixation crosses, presented for 4 s each). Blocks of the label task were presented in three scanning runs (along with two other tasks not reported here). Across the three runs there were 128 total trials for the label task, with 16 trials for each of the seven emotions and 16 neutral trials (some faces were seen more than once, but no more than twice, in order to generate enough trials for reliable analysis in each emotion category). Overt labeling was assessed prior to scanning, and in the scanner participants were instructed to silently label the faces using the eight categories.
Covert labeling was used during scanning to avoid verbal responses and the high memory demand of having to respond with key presses corresponding to the eight choices. We found no differences in labeling performance, in either young or old adults, based on the ethnicity of the presented faces, so for all analyses data were collapsed across Japanese and Caucasian faces.
Image analysis
Image preprocessing was performed using the Analysis of Functional Neuroimages software package (Cox, 1996). Time series data were spatially co-registered to correct for head motion using a 3D Fourier transform interpolation.
Each volume in the time series was aligned to an early fiducial volume from the first imaging run in the scanning session.The alignment parameters were computed by an iterative weighted least squares fit to the reference volume.
The peak range of head motion was less than 1.3 mm for all subjects. Motion-corrected images were then spatially normalized to an fMRI spiral scan template generated from 30 subjects scanned locally. This template was registered to the MNI template used by SPM99. The transformation of each subject to the spiral template was achieved using a 12-parameter affine transform with sinc interpolation as implemented in SPM99, and smoothed with a Gaussian filter of 6 mm full-width-at-half-maximum (FWHM) to increase the signal-to-noise ratio. The initial 10 image volumes in each run, in which transient signal changes occur as brain magnetization reaches a steady state, were excluded from all analyses. The resulting voxel size after processing was 4 × 4 × 4 mm³.
For statistical analysis we used a multivariate approach, partial least squares, or PLS (McIntosh et al., 1996, 1999), in order to identify whole-brain patterns of activity that varied across the emotion conditions. PLS operates on the covariance between brain voxels and the experimental design to identify latent variables, or LVs (similar to principal components), that optimally relate the two sets of measurements. In using PLS, we did not specify contrasts across conditions or groups in advance; rather, the algorithm extracts LVs explaining the covariance between conditions and brain activity, in order of the amount of covariance explained (with the first LV accounting for the most covariance). Each LV identifies a pattern of differences in brain activity across the conditions and specifies which brain voxels show this effect. Each brain voxel has a weight, known as a salience, which is proportional to the covariance of activity with the task contrast on each LV. Multiplying the BOLD contrast value in each brain voxel for each subject by the salience for that voxel, and summing across all voxels, gives a 'brain' score for each subject on a given LV.
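In formula form (notation ours, not taken from the original report), the brain score for subject s on latent variable l can be written as:

    brainscore(s, l) = Σ_v salience(v, l) × BOLD(s, v)

where the sum runs over all brain voxels v.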
The PLS analysis examined activity across the face conditions in both young and old adults, allowing us to determine patterns of brain activity that differed across groups as well as across the emotion conditions.Data from the contempt condition were not included in either the behavioral or MRI analyses, as performance on this condition was poor for all participants (i.e.no better than 50% correct, on average).PLS was carried out on the remaining face conditions after averaging all 16 events for each emotion, using the definitions of each emotion from the normative data (Matsumoto and Ekman, 1988).The first eight TRs of each event were included in the analysis to capture the hemodynamic response (i.e.0-16 s), with activity at each time point normalized to activity in the first TR (labeled TR0 in the figures).PLS as applied to eventrelated data results in a set of brain regions related to the task contrasts for each TR on each LV (McIntosh et al., 2004).To determine contrasts across conditions, mean brain scores were plotted across the eight TRs used in the analysis (Figure 1).The significance for each LV as a whole was determined by using a permutation test (McIntosh et al., 1996).As 500 permutations were used, the smallest P-value obtainable for each LV was P < 0.002.In addition to the permutation test, a second and independent step was to determine the reliability of the saliences for the brain voxels characterizing each pattern identified by the LVs.To do this, all saliences for each TR were submitted to a bootstrap estimation of the standard errors (Efron and Tibshirani, 1986).Peak voxels with a salience/SE ratio > 3.0 were considered to be reliable, as this approximates P < 0.005 (Sampson et al., 1989).Local maxima for reliable clusters containing at least 10 voxels on each LV were defined as the voxel with a salience/SE ratio higher than any other voxel in a 2-cm cube centered on that voxel.Locations of these maxima are reported in terms of coordinates in MNI (Montreal Neurological Institute) space.
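The following is a simplified numpy sketch of a mean-centred task PLS of the kind described, included only to illustrate the logic of saliences, brain scores and bootstrap ratios; the array shapes, the subject-level resampling scheme, and the omission of sign/rotation alignment across bootstrap samples are all simplifying assumptions, and this is not the authors' pipeline.

import numpy as np

def task_pls(data):
    """Mean-centred task PLS (simplified).

    data : array of shape (n_subjects, n_conditions, n_voxels)
    Returns condition weights, singular values, voxel saliences and brain scores.
    """
    # Grand mean per condition across subjects, centred across conditions.
    cond_means = data.mean(axis=0)                  # (n_cond, n_vox)
    centred = cond_means - cond_means.mean(axis=0)  # deviation from grand mean
    # SVD: each latent variable pairs a condition contrast (U) with voxel saliences (Vt).
    U, s, Vt = np.linalg.svd(centred, full_matrices=False)
    saliences = Vt                                  # (n_LV, n_vox)
    # Brain score = projection of each subject's condition data onto the saliences.
    brain_scores = np.einsum('scv,lv->scl', data, saliences)
    return U, s, saliences, brain_scores

def bootstrap_ratios(data, n_boot=500, seed=0):
    """Bootstrap ratio = original salience / bootstrap SE (values > 3 treated as reliable).

    Note: a full implementation would align the sign/rotation of each bootstrap
    solution to the original decomposition before computing the standard errors.
    """
    rng = np.random.default_rng(seed)
    _, _, sal0, _ = task_pls(data)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, data.shape[0], data.shape[0])  # resample subjects
        _, _, sal_b, _ = task_pls(data[idx])
        boots.append(sal_b)
    se = np.std(boots, axis=0)
    return sal0 / (se + 1e-12)

A permutation test of each LV would, analogously, reshuffle the condition labels before re-running the decomposition and compare the observed singular values with the permuted distribution.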
RESULTS
Performance on the labeling task collected prior to the scans (Table 1) was analyzed with a repeated measures analysis of variance (ANOVA), with age group as the between-subject factor and emotion condition as the within-subject factor. Scores for happy faces were not included in this analysis, as both groups showed perfect identification of these faces. For the remaining conditions, there was a significant main effect of emotional expression, F(5,95) = 6.0, P < 0.001, and the main effect of age was significant, F(1,19) = 9.4, P < 0.01. However, the interaction of age and emotion also was significant, F(5,95) = 2.5, P < 0.05. To examine this interaction we tested simple main effects for each emotion (except happy and contempt). Compared to younger adults, older adults had reduced identification of anger (F(1,19) = 6.7, P < 0.02), disgust (F(1,19) = 10.7, P < 0.01), and sadness (F(1,19) = 5.4, P < 0.05). Identification of surprise, fear and neutral expressions did not differ between the groups (Fs < 1).
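For readers who want to reproduce this kind of mixed design in code, the sketch below shows one way to run the group-by-emotion ANOVA with the pingouin package; the long-format column names and the input file are hypothetical, not taken from the study.

import pandas as pd
import pingouin as pg

# Illustrative only: long-format frame with one accuracy score per subject x emotion.
# Column names ('subject', 'group', 'emotion', 'accuracy') are assumptions.
df = pd.read_csv("labeling_accuracy_long.csv")  # hypothetical file

# Mixed ANOVA: age group as between-subject factor, emotion as within-subject factor.
aov = pg.mixed_anova(data=df, dv="accuracy",
                     within="emotion", subject="subject", between="group")
print(aov)

# Follow-up simple main effects of age within each emotion (uncorrected here).
for emo, sub in df.groupby("emotion"):
    print(emo)
    print(pg.anova(data=sub, dv="accuracy", between="group"))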
In the analysis of fMRI data, LV1 (P < 0.002) revealed brain activity that differentiated happy expressions from all other expressions in young adults (Figure 1B), but did not distinguish the face expressions in older adults (Figure 1C). When young adults identified happy expressions, activity was increased in a widely distributed set of brain regions, including ventromedial prefrontal cortex, anterior and posterior cingulate gyrus, left postcentral gyrus, and bilateral middle frontal gyri (Figure 1A and Table 2). Other brain regions demonstrating increased activity for happy faces bilaterally included the cuneus, precuneus, inferior parietal lobe and superior temporal gyrus. Activity was decreased for happy expressions and/or increased during the other conditions only in the left dorsal anterior cingulate gyrus (Table 2). No changes in amygdala activity were noted using a cluster size of 10 voxels; however, smaller clusters of increased activity for happy faces were noted in the right amygdala (X: 24, Y: −8, Z: −24, ratio = 4.5 at TR4, 4 voxels) and in the left hemisphere in a region extending into both the amygdala and hippocampus (X: −24, Y: −16, Z: −24, TRs 2-7, ratio = 5.7 at TR5, 5 voxels, Figure 2). LV2 (P < 0.02) differentiated happy expressions from other expressions only in the older adults (Figure 3). In old adults, happy expressions, and those of disgust to a lesser extent, were associated with increased activity in ventromedial prefrontal cortex, lingual gyrus and bilateral premotor cortex (Table 3). For the other negative expressions and neutral expressions, increased activity was seen in a large number of areas, including dorsal anterior cingulate, middle and inferior frontal gyri, somatosensory cortex, middle temporal gyri and the insula. No activity changes were noted in the amygdala in older adults, even after lowering the cluster size criterion.
Although the patterns of activity distinguishing happy faces differed in young and older adults, there appeared to be some regions where both groups had increased activity. In the ventromedial PFC (prefrontal cortex) and lingual gyrus, there was overlap in the areas showing increased activity for happy faces in the two groups (Figure 4). In addition, decreased activity for happy faces, compared to other expressions, was found in very similar regions of dorsal anterior cingulate cortex in the two groups (for a further discussion of overlapping regions and differential timing of activations of young and old adults, please see Supplementary Material including Figures 5 and 6).
Finally, as neither of the patterns described earlier identified any differences for those emotions where older adults performed more poorly than the younger adults, we carried out three additional analyses directly comparing these emotions (anger, disgust and sadness) to the other emotions (except for happy) to look for differences in activity between the age groups. None of these analyses resulted in significant LVs; however, there were a few regions from each that showed reliable bootstrap ratios. These can be found in Supplementary Table 4.
DISCUSSION
In this experiment, we measured brain activity associated with identifying a broad range of emotional face expressions. Consistent with the highly accurate identification of happy faces generally seen in adults, and found here, the main pattern of brain activity in both age groups distinguished happy faces from those expressing all other emotions. Our results also are in line with previous reports of age differences in brain activity associated with processing emotional faces. Although some areas, such as ventromedial PFC and lingual gyrus, were active for happy faces in both groups, young adults additionally activated the amygdala, lateral PFC, posterior cingulate, temporal and parietal regions. In contrast, lateral PFC and temporal regions were active in the older adults when labeling emotions other than happiness. The lack of significant findings that characterize differences between young and old adults during negative emotional processing is surprising in light of previous studies (i.e. Gunning-Dixon et al., 2003; Iidaka et al., 2002) and behavioral results indicating decreased performance for older adults. However, this may be related to methodological differences. In particular, because we examined brain activity across all the basic emotions, we were able to identify the dominant patterns of brain activity characterizing the process of labeling a broad range of emotions, shedding new light on the functional neuroanatomy of emotional face processing and how this may be modulated by age.
Neural correlates of identifying facial expressions in young adults
Most neuroimaging studies have found amygdala activation in young adults when viewing negative faces (Breiter et al., 1996; Morris et al., 1996; Whalen et al., 1998b; Blair et al., 1999; Critchley et al., 2000; Pessoa et al., 2002; Anderson et al., 2003). We found increased activity in small regions of the amygdala bilaterally in the younger adults, but for happy faces, not negative ones. This is surprising, given the evidence that the amygdala is activated by negative facial expressions. On the other hand, it is not entirely unexpected, as increased amygdala activity has been found also for positive faces (Hamann et al., 2002; Pessoa et al., 2002; Winston et al., 2003; Yang et al., 2003; Zald, 2003). Indeed, a recent study suggested that both right and left amygdala play a role in processing a wide range of emotional expressions, not just negative ones (Shaw et al., 2005). A recent model of amygdala function proposes that the right amygdala mediates autonomic responses to emotional stimuli whereas the left mediates conscious cognitive appraisal of emotional stimuli (Glascher and Adolphs, 2003). Based on this model, our results suggest that in young adults, happy faces engage both autonomic responses (via the right amygdala) and cognitive evaluation (via left amygdala engagement).
The pattern of brain activity that characterized happy faces in young adults also included a number of other regions previously shown to be active during emotional processing, such as ventromedial PFC and somatosensory cortex. For example, ventromedial PFC is interconnected anatomically with the amygdala (Amaral et al., 1992) and is involved in emotional decision making (Cicerone and Tanenbaum, 1997; Bechara et al., 1999; Price, 1999; Winston et al., 2003). It was shown recently (Lewis et al., 2005) that ventromedial PFC was activated during encoding of positive words and this activity was associated with later memory for these words, in line with our finding of more activity to happy faces. Our result also is consistent with a model of ventral PFC function that ascribes a role for ventromedial PFC in assessing and representing reward (O'Doherty et al., 2001; Kringelbach and Rolls, 2004). Our results suggest that ventromedial PFC is engaged for processing the primary reward properties of happy faces, perhaps in conjunction with activity in the right amygdala. Somatosensory activity during labeling of happy faces is consistent with the somatic marker hypothesis (Damasio, 1996), which states that people identify emotions in others by simulating these emotions in themselves via involvement of somatosensory cortex (Adolphs et al., 2000). The role of the posterior cingulate in emotion is not entirely clear, but the frequent activation of this region in studies of either emotion or autobiographical memory (Maddock, 1999) suggests that it may integrate these two functions. Increased activity in visual cortex during happy face identification, such as we found in the lingual gyrus, is consistent with other work showing modulation of visual regions during emotional tasks (Morris et al., 1998; Anderson et al., 2003) or when participants are viewing personally-relevant faces (Gobbini et al., 2004). Finally, the dorsal anterior cingulate is thought to mediate monitoring and error checking during a variety of cognitive tasks (Bush et al., 2000; Carter et al., 2001; Paus, 2001), so that reduced activity in this region for happy faces likely reflects a reduced need for these processes when the expression can be easily labeled. Thus, we were able to identify a widespread group of regions whose combined activity facilitates happy face labeling and that reflects the multiple processes that are likely recruited for this purpose.
Age differences in identifying emotional expressions
Older adults were able to identify happy expressions with the same accuracy as younger adults, and showed a pattern of brain activity that distinguished happy expressions from the others, as was found for the younger adults. This pattern in the older group was similar in some ways to that seen in the young, including increased activity in ventromedial PFC and lingual gyrus, and decreased activity in dorsal anterior cingulate. These similarities indicate that the processes subserved by these regions for the identification of happy faces change little with age. One explanation for this type of preserved processing of positive emotions by older adults is that they have more experience with emotional regulation and have learned to emphasize positive emotions over negative ones (Carstensen et al., 2003). However, it is not yet possible to know whether this is due to a positive motivational bias that affects brain activity, or to a spared ability to engage a distinct set of brain areas when a happy face is encountered, which could influence motivational factors.
In addition to some similarities, there were notable differences in the brain activity associated with processing face expressions in the young and old adults, including the fact that older adults showed no reliable modulations of activity in the amygdala. This is consistent with other studies showing an age reduction in this region (Iidaka et al., 2002; Gunning-Dixon et al., 2003), although other work would have suggested more amygdala activity for positive stimuli in older adults compared to younger adults (Mather et al., 2004). A number of studies have now shown that task demands can influence activity in the amygdala when participants process emotional stimuli (Bush et al., 1998; Vuilleumier et al., 2001; Ochsner et al., 2002; Keightley et al., 2003), and it is likely that these demands will also influence age differences in how this region responds to emotional stimuli. Therefore, differences across studies in task demands may account for some of the variability in results; nevertheless, all studies to date, including the current one, are consistent in that age differences are found in amygdala activity. This suggests that older adults' behavioral responses to emotional stimuli are mediated by age differences in the brain's basic response to these stimuli.
Our data also indicated that there are age differences in the neural correlates of emotion beyond those seen in the amygdala. Older adults demonstrated a more widely distributed pattern of activity for negative and neutral expressions, compared to younger adults, which included increased activity in bilateral middle frontal and temporal regions, as well as somatosensory cortex. Increased activity in some of these regions, such as somatosensory cortex, was seen in response to happy expressions in young adults. As noted earlier, activity in this region may indicate that individuals rely on simulating emotions observed in other people to identify those emotions (Adolphs, 2002), and the age difference seen here suggests that older adults, unlike younger adults, may rely on this strategy for emotions other than happy. Increased prefrontal activity in older compared to younger adults during emotional processing was found in earlier studies (Iidaka et al., 2002; Mather et al., 2004), and we found differences here as well. In our study, these frontal differences were seen not so much in terms of degree, but in terms of which emotions elicited this frontal activity: younger adults activated the middle frontal gyri when identifying happy expressions, whereas older adults activated middle and inferior frontal gyri when identifying neutral and most negative expressions. Although these differences cannot be directly related to their ability to identify the expressions (e.g. there were no age differences in the identification of neutral faces), they do suggest that younger and older adults utilize different brain networks for identifying emotions in faces, similar to findings of increased prefrontal activity in elderly individuals during non-emotional tasks (Grady and Craik, 2000; Cabeza, 2002).
It was surprising to find that the pattern of neural activity found for happy expressions in older adults also was associated with faces expressing disgust, and the reason for this is not clear. This finding could be related to evidence that older adults are sometimes better at identifying expressions of disgust than are younger adults (Calder et al., 2003), although in the current study expressions of disgust were identified with less accuracy by older compared to younger adults. Nevertheless, a similar brain pattern for disgust and happy faces, which are always easily identified by older adults, lends some support to the idea that the ability to recruit this pattern may benefit both types of expression recognition.
One limitation of the current study that should be noted is that the faces used in the labeling task have been standardized in younger adults across a variety of cultures (Biehl et al., 1997), but not in older adults. Indeed, when we began the study there were no face stimulus sets for which norms were available from old adults. Although no data are available regarding valence or arousal ratings in older adults for all of the faces used here, we have obtained ratings for a large stimulus set of emotional and neutral faces, which includes some of the faces used in the current experiment (Grady et al., 2007). These ratings did not differ significantly with age, suggesting that the age differences in brain activity observed in the current study were not due to age differences in the perceived intensity of the emotion or arousal to the faces. Nevertheless, it is clear that collecting normative data from older adults on labeling specific emotions would be useful for future work. Furthermore, it is unclear what influence labeling age-congruent faces would have on the current findings. As the stimulus set contained primarily young and middle-aged adults, future research should examine younger and older adults' accuracy for rating emotional expressions portrayed by older adults.
CONCLUSIONS
We found that both young and older adults identified happy facial expressions with high accuracy, consistent with other work in this field (Kirita and Endo, 1995; Leppanen and Hietanen, 2004; Suzuki et al., 2006). Recently, Suzuki et al. (2006) found that happy face recognition is independent of all other emotions, using a technique that controls for effects of task difficulty, and a recent MEG study found that viewing happy faces resulted in a larger amplitude of a face-specific evoked component compared to either faces expressing disgust or neutral faces (Lewis et al., 2003). This evidence, taken together with our fMRI results, suggests that the perception of happy face expressions may be distinct on a neural level and that this leads to a behavioral advantage in recognizing these faces. Although some regions such as ventromedial PFC were active for happy faces in both groups, the overall patterns for happy faces differed with age, and this pattern was less specific in older adults, being also engaged in response to expressions of disgust. In addition, age differences were found in the amygdala and prefrontal cortex. Taken together, the results suggest that older adults may rely on different cognitive strategies to identify both positive and negative emotional expressions in faces.
Fig. 1
Fig. 1 (A) Areas from LV1 (P < 0.002) with differential activity across emotional expressions are shown on the average structural MRI from the young group. Labels under each image refer to the level relative to the anterior commissure-posterior commissure (AC-PC) line. All areas shown had increased activity during labeling happy expressions in young adults, but older adults showed little contribution to this pattern. All data in these images were taken from the bootstrap ratios from the 4th TR. (B) Plot of mean brain scores for all conditions in young adults. The mean brain score for happy expressions diverges from the other emotions as early as the first TR (2-4 s poststimulus onset). (C) Plot of mean brain scores across the expression conditions in old adults.
X (right/left): Negative values are in the left hemisphere; Y (anterior/posterior): Negative values are posterior to the zero point (located at the anterior commissure); Z (superior/inferior): Negative values are inferior to the plane defined by the anterior and posterior commissures. Coordinates are in MNI space.
Fig. 2
Fig. 2 (A) A region of the left amygdala/hippocampus where young adults had increased activity during labeling of happy expressions, shown on a mean sagittal image from the young adults (Y = −24, TR5). (B) The graph shows percent signal change (from baseline) in this region for all conditions in young adults. Fig. 3 (A) Areas from LV2 (P = 0.02) with differential activity across emotional expressions are shown on the average structural MRI from the old group. Labels under each image refer to the TR from which the data were taken and the level relative to the AC-PC line. Red brain areas had increased activity during labeling happy expressions in old adults, and disgust expressions to a lesser extent, and blue areas had more activity during labeling of the other expressions. (B) Plot of mean brain scores for all conditions in young adults, who showed little contribution to this pattern. (C) Plot of mean brain scores across the expression conditions in old adults. The mean brain scores for happy and disgust expressions diverge from the other emotions as early as the first TR (2-4 s poststimulus onset).
Fig. 4
Fig. 4 (A) Two regions of medial cortex where both young and older adults showed increased activity during labeling of happy expressions are shown on a mean image from the older adults. Regions were defined by determining those voxels where both young adults (TR 4) and older adults (TR 2) showed reliable activity. (B) Plots of percent signal change (from baseline) in the ventromedial PFC region with maximal overlap (X = 0, Y = 56, Z = −4) for all conditions in young and old adults. (C) Plots of percent signal change (from baseline) in the lingual gyrus region with maximal overlap (X = 4, Y = −60, Z = −4) across the expression conditions in young and old adults.
Table 1
Demographic and behavioral measures
Table 2
Brain regions where activity differentiates happy from all other expressions in young adults (LV1). *TRs where area is reliable; if more than one TR is listed, the TR in bold and underlined is the TR where the ratio was maximal. R = Right; L = Left; BA = Brodmann's area; Ratio = bootstrap ratio indicating reliability of each voxel (the largest ratio across the TRs is reported in the table).
Table 3
Brain regions where activity differentiates happy and disgust from all other expressions in old adults (LV2)
The effect of air relative humidity on the intensity of evaporating of water–ethanol droplets
The results of experimental studies of the evaporation of water-ethanol droplets with various ethanol concentrations suspended on a thread are presented. Using high-speed microphotography, we studied the dynamics of changes in the size of evaporating droplets at various values of the relative humidity of the surrounding air. The experiments showed that the relative humidity of the surrounding air has a significant effect on the droplet evaporation process. For all the considered ethanol concentrations, the higher the concentration, the shorter the evaporation time of the droplets. The experimental data on the evaporation time of the water-ethanol droplets at each ethanol concentration were generalized with respect to the evaporation time at a relative humidity of 95%.
Introduction
The evaporation of binary solution droplets underlies various technological processes, and a significant number of numerical and experimental studies have therefore been devoted to it. Compared to pure liquids, the evaporation of binary solutions is significantly complicated by differences in the thermophysical characteristics of the components and their complex mutual influence. Most experimental and numerical studies of the evaporation of binary solution droplets examine the effect of the ethanol concentration on the intensity of the evaporation process [1-6]. In [7-9] a significant effect of the concentration on the change in the volume and contact angle of sessile droplets was shown. A number of studies have analyzed the changes in geometric parameters and temperature of suspended droplets of various binary solutions depending on the temperature of the air flow [10-12]. It should be noted that a significant proportion of these studies were carried out at high temperatures and at a fixed relative humidity of the surrounding air. Several works [13-15] noted that the relative humidity significantly affects the evaporation of droplets of water-ethanol solutions; however, these works considered only sessile droplets. At the same time, the evaporation of free droplets is of certain scientific and practical interest. Thus, one of the important tasks is to study the evaporation of free water-ethanol droplets at various relative humidities. The results of an experimental study of the effect of relative humidity on the evaporation of water-ethanol droplets suspended on a thread are presented in this work.
Experimental setup
This work is a continuation of the experimental studies on the evaporation of water-ethanol droplets suspended on a thread of a low-heat-conducting material, which allows a good approximation to the conditions of evaporation of free droplets [15]. Unlike previous studies, the evaporation of a droplet occurred inside a sealed chamber. In the experiments, a drop of a water-ethanol solution was suspended on a thin polypropylene thread (thermal conductivity coefficient of 0.19 W/(m·°C)) with a diameter of 150 μm. Evaporation of droplets of the water-ethanol solution with a volume of 5 μl was studied at various volume concentrations of ethanol cv (0, 0.25, 0.5, 0.75 and 0.96). In accordance with the estimates made, the concentration measurement uncertainty was ±4%. Using a sorbent, a fixed constant value of relative humidity in the chamber was achieved. The control and measurement of the temperature and humidity of the air in the chamber were carried out using an Eclerk-USB-RHT-K1 thermohygrometer. The absolute error in measuring the temperature in the chamber was ±1°C, and that of the relative humidity ±2%. The change in the droplet size during evaporation was recorded with a digital microscope, the uncertainty in determining the droplet size being ±7%. During each experiment, constant temperature and relative humidity were maintained inside the sealed chamber. Droplets were evaporated at a constant temperature t = 25°C and different relative humidities of the ambient air (φ = 5%, 25%, 55% and 95%).
Results of the experiments
Figure 1 shows the experimental results for the square of the relative diameter of the droplets (d/d0)² as a function of time for droplets with various ethanol concentrations at different relative humidities. Here d0 is the initial diameter of the droplet and d is the current diameter. The results in Figures 1(a, b) show that the square of the relative diameter of the droplets of water and ethanol decreased linearly with time over the whole considered range of relative humidity. This linear relationship, known as the Sreznevsky law (d²-law), is widely used to describe the evaporation of drops of pure liquids [6, 10-12]. The lower the relative humidity of the air, the greater the slope of the (d/d0)² line, which indicates the intensification of evaporation of the droplets. Figures 1(c, d) show (d/d0)² versus time for droplets with ethanol concentrations of 0.75 and 0.25; here the dependence has a non-linear character. The higher the concentration of ethanol in the solution, the closer the evaporation behaviour to that of ethanol. With a decrease in ethanol concentration, the character of evaporation became more similar to the evaporation of water droplets. At the initial stage of evaporation, the slope of the curve approached that for the ethanol droplets at the corresponding relative humidity. At the final stage of evaporation, the slope of the (d/d0)² curve approached the slope for the water droplet at the corresponding relative humidity.
At the initial stage this occurred due to the evaporation of the more volatile component, ethanol. Then, as the ethanol concentration decreased, the evaporation of the droplet was mainly determined by water. A decrease in the rate of evaporation of water-ethanol droplets at high relative humidity was noted earlier in [13-15]; however, those articles considered the evaporation of sessile droplets.
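As a minimal illustration of how the d²-law can be applied to such data, the sketch below estimates the evaporation-rate constant from a straight-line fit of (d/d0)² against time; the numerical values are made up for the example and are not the measured data.

import numpy as np

# Under the d^2-law, (d/d0)^2 = 1 - K*t, so the evaporation-rate constant K
# is the (negative) slope of a straight-line fit to (d/d0)^2 versus time.
t = np.array([0, 100, 200, 300, 400, 500])           # s, illustrative values
d_over_d0_sq = np.array([1.00, 0.84, 0.69, 0.52, 0.37, 0.21])

slope, intercept = np.polyfit(t, d_over_d0_sq, 1)
K = -slope                                            # evaporation-rate constant, 1/s
t_evap = intercept / K                                # extrapolated time to (d/d0)^2 = 0
print(f"K = {K:.2e} 1/s, estimated evaporation time = {t_evap:.0f} s")

For intermediate ethanol concentrations, the same fit could be applied piecewise to separate the ethanol-dominated and water-dominated stages described above.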
Dependences of the evaporation time of water-ethanol droplets on concentration at various relative air humidities were obtained in the experiments (Figure 2). Figure 2 shows that, for the entire range of the studied relative humidity, the higher the concentration of ethanol, the shorter the evaporation time. Such an effect of concentration on the evaporation time of aqueous binary solutions was noted earlier in [6, 10]; however, the results of those studies were obtained only at a fixed relative humidity. The results of this work show that a similar decrease in the evaporation time with increasing ethanol concentration is characteristic of various relative humidities. It should also be noted that with an increase in relative humidity the evaporation time increased for all the considered concentrations (Figure 3).
The data presented in Figure 3 show that the highest rate of increase in the evaporation time with increasing relative humidity was observed for water droplets, and the lowest for ethanol. An intermediate situation was observed for droplets with ethanol concentrations cv = 0.25, 0.5 and 0.75. Figure 4 shows the dependence of the dimensionless evaporation time of the water-ethanol droplets, obtained by normalizing the evaporation time by τφ, where τφ is the evaporation time at a relative humidity of 95%. It can be seen that in this form the experimental data are well generalized by an exponential dependence. Thus, this dependence can be used to determine the evaporation time of water-ethanol droplets with different concentrations at various relative humidities.
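The fitted expression itself is not reproduced in the text above, so the sketch below only illustrates how an exponential dependence of the dimensionless evaporation time on relative humidity could be fitted with scipy; the functional form and the data points are assumptions for illustration, not the authors' result.

import numpy as np
from scipy.optimize import curve_fit

# Assumed functional form: tau_bar = a * exp(b * phi); not the paper's exact expression.
def expo(phi, a, b):
    return a * np.exp(b * phi)

phi = np.array([0.05, 0.25, 0.55, 0.95])        # relative humidity (fraction), illustrative
tau_bar = np.array([0.18, 0.24, 0.42, 1.00])    # evaporation time normalized by the 95% value

params, _ = curve_fit(expo, phi, tau_bar, p0=(0.1, 2.0))
a, b = params
print(f"tau_bar = {a:.2f} * exp({b:.2f} * phi)")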
Conclusions
As a result of the experimental studies, the suspended droplet diameter as a function of time was determined at different relative humidities. The data showed that for droplets of pure liquids (water and ethanol) the square of the relative diameter (d/d0)² decreased linearly over time at the different relative humidities. For droplets with an intermediate concentration of ethanol, two intervals can be distinguished: the first is associated with the predominant evaporation of ethanol and the second with the evaporation of water. It is shown that with an increase in the relative humidity the evaporation time of the droplets increases for all the considered concentrations of the water-ethanol solution. A generalization of the experimental data is presented that can be used to calculate the evaporation time of water-ethanol droplets at different relative humidities.
Tackling regional skill shortages: from single employer strategies to local partnerships
ABSTRACT This research examines regional skill problems and the strategies adopted to reduce skill shortages by a set of employers (n = 16). The data collected in 2019 in a northern region in Portugal indicate considerable and persistent shortages of engineering and IT graduates and non-graduates for operational jobs. The employers implement anticipative strategies interacting with the education system, and the city council has developed a multi-stakeholder partnership. However, the most widespread strategy is remedial and consists of employer-provided training. Employers believe that the partnership has been a fruitful way of expanding economic activities, but further efforts are required to alleviate skill shortages.
Introduction
The timely availability of the skills mix required to implement business strategies has a major impact on firms' performance. This assumption is not only applicable to countries, but also to regions and local communities. From the 1990s, the qualifications and skills necessary for regional development acquired increasing importance in the political agenda (OECD 2016). A strand of the literature attempts to explain the collaboration and commitment made by firms with local government to develop a VET system (Persson and Hermelin 2020). Other studies acknowledge the role of higher education (HE) as a driver of economic development of weaker regions (Pugh 2017). However, complementarities between HE and vocational education and training (VET) in the preparation of the workforce at the regional level are less explored in the literature. Our research contributes to this debate and attempts to answer the following questions: Is HE expansion and consequently the supply of graduates to the labour market the solution to skill problems? Or alternatively, is it a necessary but insufficient condition? What role can local and regional partnerships play in providing specific answers to firms' skill problems? And what are the firms' perceptions of these regional partnerships?
The literature indicates that the underlying rationale of HE expansion is indeed to supply high-level skills to the economy and resolve immediate skill problems while anticipating future skill needs (COM 2017). However, a significant body of literature shows that employers still encounter a range of skill problems (e.g. Cappelli 2015;Suleman and Laranjeiro 2018). Recent literature indicates that policy makers are concerned about the status of VET and are discussing coordination mechanisms aimed at making it a valuable source of skilled workforce (Arribas and Papadakis 2019). This stream of the literature recognises that both the nature and drivers of skill shortages as well as production regimes vary across regions and, therefore, solutions to skill problems must be adapted to the specific contexts in which they occur (Sharma, Oczkowski, and Hicks 2017;Persson and Hermelin 2020). This encourages researchers to study skill problems and solutions, including regional partnerships, using the regional level as the unit of analysis (e.g. Froy, Giguère, and Hofer 2012;Sevinc et al. 2020). Additionally, mechanisms and conditions are explored to ascertain how to implement decentralised VET systems (Persson and Hermelin 2020). Nevertheless, employers' reactions to skill problems and their evaluation of political initiatives targeting these problems continue to be scarcely explored at the regional level.
We examine the skill problems faced by employers and the strategies adopted to cope with them in a northern region in Portugal (V.N. Famalicão). The Portuguese education and training system is known to be highly centralised, leaving little room to accommodate regional needs (OECD 2015, 2020). Previous research on the Portuguese labour market has shown that in spite of huge investments in HE, employers continue to face skill shortages and deficits that entail high training costs for them (Suleman and Laranjeiro 2018). We aim to provide a regional analysis of these issues, hitherto missing, that provides policy makers with relevant information to support the education and training systems' ability to adapt to local needs.
A multi-stakeholder partnership was implemented in V.N. Famalicão in an attempt to resolve skill shortages and deficits at the local level. The partnership, called Famalicão MadeIN, 1 involves different local stakeholders in order to promote education and training, entrepreneurship and innovation in the region. However, the employers' perception of the initiative has not yet been explored. Indeed, to the best of our knowledge, no study has collected insights on partnerships from stakeholders at either the national or local level. Our study thus seeks to i) identify skill shortages reported by employers at the local level; ii) examine their solutions to address this issue; and iii) explore the local partnership's role in tackling regional skill problems. Ultimately, we strive to raise policy makers' awareness on the regional specificities of skill problems and the combination of solutions being implemented by local actors to address them.
The empirical analysis is based on original qualitative data gathered in 2019 from two sources within this region: interviews with a set of employers (n = 16) and focus groups with employers, schools and local policymakers. This qualitative material was examined through content analysis which allowed us to classify the skill problems and solutions adopted by the sampled employers.
The rest of the paper is organised as follows. The next section provides an overview of the literature on skill shortages from a regional perspective, together with some notes on the skill problems of the Portuguese labour market and a description of the Famalicão MadeIN programme. Section 3 is devoted to the methodology. The empirical evidence is presented and discussed in Section 4. Finally, Section 5 concludes and sets out policy implications.
Skill shortages and solutions at the regional level
It is well-documented in the literature that human capital is a key ingredient of local economic growth (Plummer and Taylor 2001) and empirical research has shown skills boost local growth, notably in metropolitan areas (Glaeser and Saiz 2004). It is argued that a qualified workforce is a necessary condition to reduce disparity among regions and increase competitiveness (OECD 2016;Froy, Giguère, and Hofer 2012); to boost some value-added activities (McCann and Ortega-Argilés 2015); and to create challenging jobs at the regional level (Østbye et al. 2018). However, recent literature raises the question of the drivers of growth and discusses the role of skill mismatch in regional labour markets (Sevinc et al. 2020). It suggests that skill shortages thwart regions' performance and prevent them from harnessing their potential.
The literature shows that employers encounter skill shortages, leading to hard-to-fill vacancies (Sharma, Oczkowski, and Hicks 2017;Cappelli 2015). While some of these shortages are widespread, i.e., a large proportion of employers report recruitment problems, others are localised and affect a relatively small number of employers (Green and Owen 2003). Furthermore, whereas some regions may experience severe shortages in particular skills, others enjoy a skill surplus (Cameron 2011). In this context, Sevinc et al. (2020) propose methodological tools to forecast the skill supply and demand in three regions in the UK with a view to reducing the skill mismatch.
A stream of the literature focuses on explanations for skill mismatch at the local level. The imbalance may be an outcome of both the concentration of graduates in large cities (OECD 2016) but also centralised coordination mechanisms (Froy, Giguère, and Hofer 2012;OECD 2016). Not surprisingly, essential changes are being introduced that will modernise VET and make it more attractive so that it becomes a valuable alternative for young people and their families (Arribas and Papadakis 2019). This process has revealed the inefficiencies of highly centralised coordination mechanisms (Froy, Giguère, and Hofer 2012;OECD 2016) and shows that the supply of skills should be tailored to local and regional labour markets (McCann and Ortega-Argilés 2015), thus rejecting standardised solutions to inherently context-specific skill problems (Sharma, Oczkowski, and Hicks 2017;McCann and Ortega-Argilés 2015). In this context, Persson and Hermelin (2020) provide a discussion on the conditions and mechanisms that support decentralised cooperation in the VET systems and strengthen employers' engagement, notably the role of the municipality, the model of partnership, and the institutional arrangements at national and regional levels.
Finally, some literature focuses on the variety of solutions that have been designed and implemented by different actors to tackle skill shortages at the regional level. It should be noted that researchers discriminate between strategies and solutions (Suleman and Laranjeiro 2018). The former refers to anticipative and remedial strategies, while the latter address the distinction between make and buy, i.e. between train the workforce (make) or recruit ready-to-work candidates from the labour market (buy).
Investment in education and training and the upskilling of the local workforce are the major solutions proposed. Anticipative strategies include interaction with education and training institutions with the aim of anticipating skill needs and embedding the world of work in HE courses (Suleman and Laranjeiro 2018). This type of strategy goes further in the context of the triple helix model (Etzkowitz and Leydesdorff 2000), namely interaction between government, universities and industry to foster not only university-industry linkages, but also employers' engagement with HE (Bolden et al. 2009) to improve graduates' employability.
Anticipative strategies inspire multi-stakeholder partnerships, which are linked especially to sustainable development processes aimed at combining economic and regional development through the social inclusion of vulnerable groups (Hemmati et al. 2002). These partnerships are coordinated groups involving the market, the state and civil society formed to address complex problems (Froy, Giguère, and Hofer 2012), and they illustrate a trend towards the decentralisation of solutions to tackle skill problems and VET policy in different countries (Froy, Giguère, and Hofer 2012; Rogers 2001; Persson and Hermelin 2020). The policy debate now addresses the decentralisation of VET systems to make them attractive to young people (Arribas and Papadakis 2019). However, it is vital to understand how to make regional partnerships more effective and how to strengthen employers' involvement, which encompasses a set of dilemmas (Persson and Hermelin 2020).
The links between HE and VET are particularly important in the case of regions with significant fringes of low-skilled workers (Froy, Giguère, and Hofer 2012) or with a weak economic performance (Pugh 2017). Employers may engage with HE either passively or actively (Hogarth et al. 2007). Passive engagement involves a simple market transaction and HE is basically a recruitment channel, while active engagement includes, for example, workbased learning as well as the design and delivery of courses.
Remedial strategies encompass both recruitment and training policies to overcome skill problems. These strategies pose different challenges for firms. Employers search for trainable candidates endowed with learning abilities so that their training costs are reduced (Thurow 1976). Employers sometimes recruit low-skilled candidates, especially in the context of a limited supply, and provide extensive workplace-specific training when there are severe and persistent regional skill shortages (Sharma, Oczkowski, and Hicks 2017). Persson and Hermelin (2020) refer to the uncertainty faced by employers, particularly due to the risk of poaching. In fact, poaching by competitors increases transaction costs and makes employers less willing to provide training (Healy, Mavromaras, and Sloane 2015). Cockrill (2002) describes an extreme situation of employers' unwillingness to train in the Welsh automotive and electronic industries; all employers attempt to buy ready-to-work candidates in the labour market. Nevertheless, they sometimes make considerable investments in training programmes for their workforce, despite the risks and training costs (Suleman and Laranjeiro 2018;Froy, Giguère, and Hofer 2012). Workplace training is still a widespread response to skill shortages, especially at the regional level (Healy, Mavromaras, and Sloane 2012).
The migrant workforce is another potential solution to skill shortages (Froy, Giguère, and Hofer 2012;Healy, Mavromaras, and Sloane 2012). However, specific public policies are required for the successful integration of migrants, and employers must also be engaged to avoid or reduce the underutilisation of their skills (Dietz et al. 2015). Migrant labour involves risks and challenges, notably in terms of skill mismatch (Visintin, Tijdens, and Van Klaveren 2015), and the use of some sectors as the port of entry to the local labour market, therefore leading to high turnover (Treuren, Manoharan, and Vishnu 2019). In sum, addressing skill shortages through migration schemes might be a suboptimal and short-term solution.
We conclude that employers implement different strategies to meet their skill needs in the context of regional skill shortages. These solutions involve risks and benefits, so employers have to trade the costs off against the benefits. However, there is no guarantee that the solutions will be successful; skill shortages may persist despite the implementation of suitable strategies. Some successful examples showed that firms involve themselves in the VET system, but the major share of funding comes from public investment (Persson and Hermelin 2020).
National and regional solutions to anticipate the supply of skills in Portugal
The governance of the education system has a highly centralised structure in Portugal (OECD 2015, 2020). Legislation has acknowledged the potentialities of decentralisation (OECD 2020), and it is recognised that the specific skill needs of sub-national economies benefit from the strong involvement of authorities and stakeholders. Nevertheless, the intervention of local and regional stakeholders is often limited to advice or consultations initiated by the central government (OECD 2015, 2020). In spite of the over-centralisation and under-evaluation of regional skill needs, Portugal has made substantial investments in both HE and vocational training since the 1990s to anticipate skill needs. Furthermore, great efforts have been made in recent years to involve local and regional actors and respond to their particular needs (OECD 2020).
The upward trend in HE enrolment started in the 1990s and intensified significantly after the adoption of the Bologna model in 2006 (see Portela et al. 2009 for more details). Although Portugal's enrolment rate 2 was considerably lower than that of the European Union in the 1990s, it rose substantially after the Bologna reform was implemented, drawing closer to the EU average in 2011 i.e. around 60%, and has since remained almost stable.
Nevertheless, there is an excessive concentration of students in the larger regions, notably Lisbon, Oporto, Coimbra, Braga and Aveiro, and more importantly Lisbon has twice as many students as the other large regions. 3 Faced with this disparity, in 2018 the Portuguese Government decided to reduce HE vacancies (5%) in large cities (Lisbon and Oporto) in favour of other regions. 4 This shows that policy makers are aware of regional skill needs and are trying to reduce the disparities.
The Portuguese VET has always faced a set of challenges (see OECD 2020 for details). VET was promoted primarily by the education system and later with the labour market. This initial divide between the Ministries of Education and of Labour continues to affect the system's approaches and priorities. Furthermore, the system is strongly influenced by political priorities for the development of skills policies and is heavily dependent on European funding. Finally, it continues to be less attractive to young people. Notwithstanding, several steps have been taken to ensure high participation rates and quality assurance. These include the definition of the National Qualifications System (NQS) in 2007 to map the relationships and linkages between education, vocational training, and employment; and a systematic approach to skill needs to gather information on the supply of and demand for skills in 2015. Furthermore, local stakeholders are engaged in the definition of skill needs and the implementation of education and training guidelines; the promotion of sub-national economies is one of its key goals.
The Famalicão MadeIN initiative is an example of a multi-stakeholder partnership sponsored by the city council and aimed at connecting different local stakeholders to promote VET, entrepreneurship and innovation in the region of V.N. de Famalicão, a county in the northern region of Portugal. This is a small but dynamic industry-based region, ranking 3rd in the country's export volume and 2nd in gross added value in manufacturing industries. The major sectors of activity include textiles, automobile, metallurgy and agri-food industries. The county also has relatively low unemployment levels; only 4.1% of people aged 15-64 years were unemployed in 2018, compared to 5.4% at the national level.
Although Famalicão MadeIN was officially launched in 2014, its origins can be traced back to a 2005 programme aimed at reducing poverty and providing social assistance to more vulnerable populations. However, VET emerged from the outset as a key solution to boost employability and combat poverty. Therefore, employers, public employment services, and VET were integrated in the partnership to provide accurate information on skill needs, to smooth school-to-work transitions and improve employment opportunities. This engagement became even more relevant in the context of the 2008 economic recession. Over time, Famalicão MadeIN therefore diversified both its scope of action and the stakeholders engaged in it. Now it is involved in the Government Programme Qualifica, 7 which seeks to ensure adult education and training as part of lifelong learning. The primary role of Famalicão MadeIN is to build bridges between employers, employment services, education, and training system, i.e. to mediate the relationship between local actors and national policy makers.
The annual diagnosis of skill needs warrants special attention as it seeks to obtain accurate answers from VET institutions to overcome skill problems. It also tries to encourage complementarities with employer-provided training, particularly in much-needed sectors (e.g. meat and metalworking industries). Furthermore, the partnership provides consultancy for employer-provided training (e.g. in textiles) to tackle very specific skill needs.
The collaboration with local HE is recent and targets joint R&D projects conducted by firms and advanced education programmes. However, the partnership's rationale remains to connect all local stakeholders of education and training: firms, VET, employment services, HE institutions as well as students and their families. The goal is to answer the skill needs of the local firms while promoting better employment opportunities for the population.
Data and methodology
This paper draws on the qualitative analysis of empirical data gathered from several complementary sources. It includes face-to-face semi-structured interviews with human resource managers and owners of 16 industry-based firms from the northern region of Portugal (council of V.N. de Famalicão); and 5 focus groups on regional skill problems organised by local policy-makers and with the participation of firms, training organisations and other local actors in the context of the multi-stakeholder partnership. These focus groups represent the partnership's first attempt to engage with local employers not only to learn about skill shortages but also to take the appropriate steps to address them. This illustrates that Famalicão MadeIN is moving forward in its mission.
We believe that the use of these different data sources will provide a more detailed and nuanced understanding of the questions raised in this paper, namely what are the skill shortages across firms at the local level and how do they vary? How are firms coping with those shortages? How do employers assess the multiple stakeholder partnership's role in reducing skill problems?
The interviews with firm owners and human resource managers are our primary data source and were conducted throughout 2019. Our sample (Table 1) comprises firms with different characteristics in terms of size and years of activity, and that operate in the most representative sectors of the region's economy (notably textiles, metallurgy and commerce) which, as we have seen, is mostly industry-based. All of the firms have been actively recruiting for both graduate and non-graduate positions in the last 3 years.
The focus groups were organised by the city council as part of the Famalicão MadeIN initiative and took place between March and June 2019. The number of participants varied between 10 and 30 and comprised firms from different sectors, representatives of local VET institutions and local policymakers. The main objectives were to identify firms' skill problems and skill needs, debate the role of VET institutions in the region, and promote more agile and efficient recruitment processes. A member of the research team participated in each meeting and the data gathered were used to complement the information from the interviews with the firms.
Table 1. Characteristics of the sampled firms (year of foundation, number of employees, industry).
F1: 1950, 200, Textile for automobiles
F2: 2003, 30, Polymers
F3: 1995, 160, Textile
F4: 2008, 50, Textile
F5: 1973, 656, Optics
F6: 1970, 140, Textile
F7: 1961, 753, Agri-food
F8: 1927, 1131, Textile
F9: 1993, 2154, Tyres and metallurgy
F10: 1993, 63, Metallurgy
F11: 1981, 67, Metallurgy
F12: 2011, 38, Textile
F13: 1999, 72, Metallurgy
F14: 1937, 1216, Textile
F15: 1988, 2672, Textile for automobiles
F16: 1942, 230, Agri-food
Regional skill shortages: graduates and non-graduates
Table 2 (Firms' skill shortages and strategies used to overcome skill problems) summarises the skill problems faced by the employers in our sample and the solutions they use to address them. Hard-to-fill vacancies are the most reported skill problem, although there is some mention of ill-prepared graduates, notably due to the lack of soft skills. Almost all employers report hard-to-fill vacancies for both graduate and non-graduate jobs, but the intensity of these difficulties varies across firms and across educational attainment. Employers stress that skill shortages for non-graduate jobs are greater than for graduate jobs: 'It's very difficult to hire people even without the skills that we need' (F1). Some note that the general lack of technicians is particularly acute at the local level (F2). The graduate-level skill shortages are concentrated in high-level technical fields, notably engineering and IT. However, the skill problem among non-graduates is intense or very severe: 'We have extreme difficulties in hiring [. . .] for example in technical and maintenance, these are very difficult areas to hire, these profiles are scarce in the market nowadays and they are easily absorbed' (F8). 'We cannot find school-leavers or experienced professionals from the labour market; this kind of professional is unavailable' (F13). There is also a shortage of less-skilled candidates (F1).
We asked employers about the drivers of skill problems. They underlined: i) the lack of graduates in Engineering and IT; ii) the reduced willingness of young people to take non-graduate technical jobs; iii) the brain drain of young people from the region to other national and international settings; and iv) the demand for very specific skills by few employers.
However, the recruitment of young people for non-graduate technical jobs is the major concern: 'There are many difficulties for non-graduates, at the industrial level [. . .] there is a shortage of qualified professionals in all areas and we suffer a lot from local and international competition' (F10). Many employers prefer to hire technicians with practical knowledge than HE graduates. F13 indicates the ratio of one graduate to twenty non-graduates. Furthermore, some of the sampled employers acknowledge that there are few candidates from vocational courses but blame large firms for absorbing them (F11). F11 stresses that large employers benefit from their engagement activities with VET to recruit trainees. Nevertheless, large employers also experience graduate and non-graduate skill shortages despite this engagement and the better employment conditions they provide. Large employers claim they are not immune to regional skill shortages, particularly when these are widespread (F8, F9, F14 and F15). Small and medium firms lack attractiveness, and this leads to hiring difficulties but, more importantly, large employers get in first and recruit the few available candidates.
One employer (F13) underlines the side effects of strong skill specificity. As it is the only firm hiring professionals with a very specific skill (mould making), no VET institution is willing to provide training for so few employers.
The key issue raised by members of the focus groups is the imbalance between different-sized employers. Whereas participants from SMEs blame large employers for their strategy of headhunting skilled candidates, large employers highlight SMEs' difficulties in complying with the rules set in contractual arrangements.

Table 3. Drivers of skill shortages in the supply and demand sides: summary of perceptions of focus groups' participants.
- Engagement with VET is useless; large firms hunt candidates (5)
- Brain drain to more advanced economies (Germany) that provide better employment conditions (4; 5)
- HE is not a real answer for skill problems (4; 5)
- Young people lack motivation to learn (3; 4; 5)
- Demands for closer ties between VET and firms (3; 5)
- Better conditions abroad enhance young people's interest in mobility (5)
- Weak selection of candidates by employment services (1; 5)

The information summarised in Table 3 furthers our knowledge of the drivers of skill shortages, especially of VET graduates. The education and training system has a threefold effect on supply: young people and their families prefer HE degrees; the VET institutions fail to attract youngsters; and the skills supplied by HE are mismatched with employers' needs. Additionally, international employers are more attractive to the young, who are open-minded and ready to engage in international mobility. Regional skills shortages therefore involve multidimensional drivers, for which there is no single resolution.
In sum, the data show that, in addition to the shortage of IT and engineering graduates, the sampled employers are concerned about hard-to-fill non-graduate vacancies. This structural mismatch affects all industries and employers of all sizes in the region of V.N. de Famalicão.

The solutions to address skill problems: multiple answers from employers and local actors

Table 2 displays the intensity with which employers report skill problems and the variety of strategies they use to cope with those problems. The findings show that the most important solution is the firm-level training policy, since most employers actively engage in training programmes either in the workplace or with outside providers. Although primarily a response to skill shortages, this training also serves to prepare newly hired graduates and non-graduates with specific skills. F1 notes that the rationale behind the training policy is to overcome the lack of candidates by recruiting under-skilled workers and investing in their training, and to prepare workers with specific skills: 'We cannot go to the market to hire people with the know-how to work with our machinery, it is very specific. And so, we provide extensive training from day one and until that person leaves the firm' (F1).
In addition to the lack of candidates, employers lament that they cannot afford the high wages of ready-to-work candidates, even when they are available: 'We cannot [hire in the market] because there are no candidates and the employed people earn wages that our firm cannot afford' (F14).
The employers' upskilling strategy involves formal and informal training. They often recognise their inability to provide appropriate graduate training and seek the expertise of consultancy companies, specialised training companies, and sometimes HE institutions (F3, F6, F9, F11, F12, F14, and F16). The data show that firms in the sample participate very actively in training activities because they are unable to find ready-to-work candidates with the required skills in the market. Employers must therefore incur non-negligible training costs so that newly hired workers can do specific tasks and use specific tools, and to tackle skill shortages.
Additionally, employers attempt to influence the supply of skills, notably by engaging with education and training institutions ( Table 2). Even though all the sampled firms establish some type of relation with education and training institutions, their contact with them is primarily to access the best candidates and develop internship programmes targeting students in the region. Other more active forms of engagement, such as teaching and collaboration in the design of tailored courses, rarely occur in HE (F3), but are slightly more frequent in the case of training schools at non-graduate level (F9; F13; F14). The empirical evidence therefore indicates that employers' active engagement with HE to resolve their skill problems is extremely limited and they use HEIs merely as recruitment channels.
Employers claim that this is due to some on-going barriers, despite some improvements in recent years: 'I believe [the relationship] is getting closer and I feel universities are increasingly open to approaching firms, something which did not happen before' (F5). The major barrier is the lack of awareness of the world of work, which calls for closer ties between HE and workplaces from the early stages of education (F4). This employer 'believes the universities are still distant (. . .) the students should be put in contact with firms sooner in their university trajectory as happens in programmes abroad where the connections with firms start in the first year of college'.
On the other hand, collaboration in R&D projects is more common (F2, F6, F7, F8, F10, F12, F14 and F16); this usually involves product development and training as the projects often entail hiring the master and doctorate graduates that participated in their development. In these specific cases, there are closer ties and dedicated HE programmes.
This picture changes somewhat when it comes to VET institutions that prepare students for non-graduate jobs. Employers are again largely involved through internship programmes, which allow them to attract and screen prospective employees in the context of full employment and strong skill shortages. While almost all employers report this strategy and confirm their willingness to actively participate in the training of new employees, they vary in their level of commitment to the strategy.
A small proportion engage more with VET institutions and refer to teaching activities or designing tailored courses (F9; F13; F14), and they therefore influence the supply of skills. However, there is a widespread perception that VET institutions facilitate engagement more than HE, and are more willing to adapt their curricula to firms' skill needs: 'We collaborate with several training centres (. . .) if you compare vocational training with higher education, the technical training is much more adapted, closer to the entrepreneurial activity. The teachers have a closer relationship with us and the courses are much more flexible (. . .) there is that liberty from the pedagogical point of view and professional schools know that if they don't do it, they disappear' (F9). 'We are very close to [vocational] schools and we give them a lot of support (. . .) we have our reference school and 50% of our professionals come from them' (F13).
The employers' perception of the public employment services is quite different from that of the VET institutions. These services rarely interact actively with them to access candidates, and when they do so, it is to participate in public internship programmes. However, employers complain about the bureaucratic burden and process time which are incompatible with their skill needs: 'there is a lot of bureaucracy around these public internships, maybe they have to work like that but for firms it is a significant administrative burden' (F11). Furthermore, some underline the mismatch between the candidates' occupational profile and the skills required by firms (F4).
In this context, migrant labour emerges as a final resource and the sampled employers do acknowledge that these workers expand the pool of candidates and consequently help reduce training costs. However, it is a marginal solution and only one employer uses this as a decisive strategy (F15). This is a large and labour-intensive textile firm which has two large contingents of migrant workers from China and Brazil. The Chinese workers have previous experience of the textile industry, and the Brazilians are often graduates but with no previous experience in this industry: 'We have the workers from China and they already had the know-how in textile which we cannot find here (. . .) we started to need a lot of people in the beginning of the year and we had to open our door because we did not have enough Portuguese to work' (F15).
Migrant labour is occasionally used by other firms but bureaucratic and cultural issues, in addition to skill mismatch, make employers reluctant to adopt this solution (F10, F14). More specifically, Brazilians are often graduates recruited for non-graduate occupations and this involves a high risk of turnover (F14, F16). Employers try to minimise turnover costs by avoiding the hiring of overqualified workers. The participants in the focus groups were more enthusiastic about migrant labour. They noted two issues: a small supply of Portuguese workers and huge competition from abroad (e.g. Germany and France) to recruit these few workers.
In the context of the skill shortages described above, Famalicão MadeIN emerged as a collective solution, organised by the municipality with several different partners in the terrain. It is interesting to note that the initiative was promoted and developed by local stakeholders although the entire northern region of Portugal is affected by skill problems. It indicates that policy makers and stakeholders have recognised the scale of skill problems and have designed solutions that eliminate, or at least mitigate, a long-lasting regional/local labour market problem. But how do employers perceive the role of this partnership?
Employers admit that geographical proximity provides an understanding of local problems and the institutional proximity creates the necessary conditions to mediate these problems (F7, F8). The data indicate that most of the sampled employers have some interactions with the partnership and, in some cases, develop close or intense ties (Table 2).
One employer notes that ' . . . to be honest, until they arrived I did not exist for them and now I do (. . .) they have opened up and it has been very good, the firms started to be given attention for the first time (. . .) there is a proximity and interest and collaboration' (F2). This involves information on skill needs, meetings to discuss labour market outcomes, plans to ease the transition of young people, and training strategies. Furthermore, it is accepted that the initiative has encouraged the promotion of some new activities: 'Made In has helped us expand in design and confection. It provides information, acts as a mediator (. . .) we feel that the Town hall is very close to firms and always attentive' (F8). 'They [the relations with Famalicão MadeIN] are very positive, we have established a protocol to support us in an investment we made here (. . .) and we were even recently contacted with an international request involving the chamber of commerce of Cuba' (F11).
However, Famalicão MadeIN's response to skill problems and training for non-graduate jobs is still insufficient, despite general agreement about its benefits. For most firms, skill shortages remain a serious and unresolved issue (F11; F13). Some consider that the partnership has specific targets, notably micro-firms, emergent businesses and firms in difficulties (F4). In this context, F4 underlines the successful intervention of Famalicão MadeIN. Others assume that it attempts to spotlight the region and engage employers in this strategy (F5). Famalicão MadeIN is undoubtedly relevant in leveraging the local/regional economy (F11). Nevertheless, there is a lack of consensus on its contribution to reducing skill problems: some, but not all, believe it is still unsatisfactory. For example, one employer (F7) acknowledges the partnership's role in easing the access to VET institutions, employment services, and job candidates: 'I strongly support the county work . . . Its interaction with employment services, training institutions . . . Yes, also makes it easier to search for professionals. It is fantastic to find this level of understanding between entities, which was not usual in our country. This interaction has been extremely relevant to us and has produced very significant fruits in fact' (F7).
The information gathered in the focus groups further highlights the benefits but also the shortcomings of the Famalicão MadeIN initiative in addressing local skill shortages (Table 3). It should be noted that the partnership seeks to promote ties between firms and other local stakeholders involved in training and employment (local schools and training centres, employment services and others) and to encourage young people to choose VET training.
The employers participating in the focus groups regret that the partnership has not fulfilled these goals. Young people, but also the unemployed, continue to see VET as less attractive, firms have persistent skills shortages, and employers remain outside the design and delivery of VET courses despite a greater willingness to participate. We conclude from the data that Famalicão MadeIN has positively impacted innovation and the internationalisation of firms but has not yet had a sufficient impact on the supply of skills; above all, the firms continue to face skill shortages i.e. skill problems remain a serious and unresolved issue.
Discussion and concluding remarks
This research helps answer a key question: Is the expansion of HE, and consequently of the supply of graduates to the labour market, the solution for skill problems? The evidence reported above underlines the need to take the regional level into consideration when exploring skill problems (Froy, Giguère, and Hofer 2012; Sevinc et al. 2020). The data collected tend to go in the opposite direction of the most cited arguments in favour of HE expansion (COM 2017); indeed, supplying a skilled workforce is the key argument driving that expansion. Our research shows that employers face hard-to-fill vacancies for IT and engineering graduates, but they blame HE for diverting young people from VET and thus amplifying the skill shortages of technically prepared workers. Participants in the focus groups lament the end of the old technical schools that provided them with highly skilled non-graduates until the 1970s. In contrast, Persson and Hermelin (2020) report that young people are attracted to VET and this facilitates the access to very technically-skilled candidates.
We found that the skill problems in V.N. Famalicão are recurrent and widespread across the region, as all the sampled employers report recruitment difficulties. This is understood to be a structural skill problem especially linked to the undersupply of non-graduates. This evidence highlights the argument that skill is inherently a local issue (Froy, Giguère, and Hofer 2012) and consequently calls for regional/local level responses from stakeholders of education and training (McCann and Ortega-Argilés 2015). Moreover, it suggests that HE expansion might not be the only solution for skill problems, at least in this industry-based region.
The research addresses the solutions adopted to tackle skill problems. Workplace training is a remedial strategy for accessing skills which seems to be influenced by the persistent regional skill shortages at graduate and nongraduate levels. Is this an option? Scholars usually propose a choice between make and buy solutions, as reported by Suleman and Laranjeiro (2018) for the Portuguese labour market. However, employers indicate mostly a single solution: provide their often under-skilled workforce with extensive training. The fragmentation of privatised vocational training seems to be the most risky outcome of such a solution. This calls for action to foster the different stakeholders' collaboration and engagement. The cases discussed by Persson and Hermelin (2020) suggest that some factors are crucial for an effective partnership, notably the access to suitable skills for all employers, municipality funding and engagement, 'good branding' of VET so that it is attractive to young people, and above all a close relationship between local government and local firms.
The employers' perception of the Famalicão MadeIN partnership points to two major conclusions. There is a widespread positive perception of the role this local partnership plays in fostering economic performance and helping firms take new directions. The findings illustrate that multi-stakeholder partnerships effectively tackle the region's development (Hemmati et al. 2002) by opening to the international market and encouraging innovation. However, much work remains to be done to reduce skill shortages. It should be noted that local partnerships have only recently been used to tackle skill mismatch and the critical views indicate that employers are desperate to resolve structural skill problems. However, they must understand that it is still too early to reap the benefits of this engagement; the partnership has a mediating role and is not in itself responsible for resolving the skill problems.
Furthermore, the intervention of local actors is often limited to advice or consultations for VET courses initiated by the central government, i.e. the VET system is still highly centralised in Portugal (OECD 2015, 2020). As a result, local actors do not participate in the design of VET policies. The main solution found by employers is the privatised and fragmented workplace training, but it involves risks. Although employers are vulnerable to poaching due to generalised skill shortages (Healy, Mavromaras, and Sloane 2015), our data showed that poaching is not yet a serious issue. However, the attractiveness of firms differs substantially. Large firms tend to be proactive in hiring new school-leavers, taking advantage of their ties with education and training institutions; SMEs are unable to compete with this. Literature has in fact shown that large and wealthy employers dominate the relationship with the education system (Hesketh 2000); small firms have less power to participate since they do not recruit regularly enough to justify these ties (Hogarth et al. 2007). The scale of activity really matters. This is exemplified by F14's very specific skill set and the lack of power to influence the supply of the particular skills required for unusual occupations.
The recruitment of migrant workers was also identified as a solution. However, only one firm uses this as a strategic approach (F16) because of both bureaucratic barriers (Froy, Giguère, and Hofer 2012) and skill mismatch (Visintin, Tijdens, and Van Klaveren 2015). The skill mismatch increases the risk of turnover (Treuren, Manoharan, and Vishnu 2019) and transaction costs therefore reduce employers' willingness to recruit migrants. Furthermore, employers contribute to the underutilisation of migrants' skills since they assign them to jobs requiring lower qualifications (Dietz et al. 2015). Ultimately, it is a marginal solution for the skill problems.
In sum, the solution for skill shortages is multifaceted and represents a serious challenge for all stakeholders at local and national levels. The effectiveness of any solution cannot be taken for granted and employers must often trade the costs of each solution off against the benefits. Therefore, policymakers should ensure governance arrangements in VET that help the formulation and implementation of training policies in line with regional and local skill needs. Efforts are likewise required to raise the status of VET among young people and families and help them when choosing between HE and other valuable alternatives, probably with HE credentials.
Our findings should be examined with caution despite the relevance of the research. This is qualitative research based on a small number of participants and focusing on industry-based firms. Additional studies are required that compare different sectors, industries, and services, as well as a large sample of employers. Nevertheless, key insights are provided on employers' struggle with the lack of skilled workforce, which undermines the competitiveness and sustainability of a wealthy region.
Experimental vehicles FASCar®-II and FASCar®-E
The main goal of the large-scale research facility FASCar® is to support scientific studies and analyses in the field of driver assistance and vehicle automation. This also includes studies of human behavior, acceptance studies, tests of new assistance systems and automation, as well as of user friendliness. FASCar® makes it possible to test and analyze innovative systems and developed functions in a simulated or even real traffic environment.
Introduction
Active interventions can make driving safer; used incorrectly, however, they can also cause danger. The Institute of Transportation Systems therefore developed driver assistance according to the driver's requirements and needs. To find out whether the driver reacts correctly to the intervention of a new assistance system, test rides with a car capable of active interventions are the last logical step of development. These test rides can be performed by using the large-scale research facility FASCar®. This article provides an overview of the experimental vehicles FASCar®-II and FASCar®-E.
Technical Description
The large-scale research facility FASCar® consists of two experimental vehicles called FASCar®-E and FASCar®-II. The main difference between FASCar®-E and FASCar®-II is their special area of operation. FASCar®-E is developed for testing in a real traffic environment and has road approval. FASCar®-II, in contrast, can only be driven on test sites because of its hardware, but in exchange it offers a higher level of active intervention and a futuristic human machine interface (HMI).
FASCar®-E
The FASCar®-E is an electric 7th-generation Volkswagen Golf. It is equipped with a 115 hp electric motor. Taking all the built-in technology into account, its range is approx. 130 km (80 miles). Its main goal is research in the field of automation in public urban scenarios. For this research purpose the vehicle is modified with additional sensors, a new HMI, and a lateral and longitudinal control system.
Sensors
For environment recognition and vehicle localization, FASCar®-E is equipped with four laser scanners and three long-range radars, which are mounted in the front and rear bumpers of the vehicle, as well as an inertial measurement unit (IMU) with GPS aiding. A C2X system is used for vehicle-to-infrastructure and vehicle-to-vehicle communication.
Human Machine Interface (HMI)
A freely configurable dashboard display replaces the original instrument cluster, which is mounted in the glove compartment for safety purposes. With this freely configurable dashboard, new HMI concepts can be validated.
Lateral and longitudinal control
FASCar®-E can be controlled via a controller area network (CAN) interface in the longitudinal and lateral directions. For longitudinal control the signals of the adaptive cruise control (ACC) are rerouted so that the vehicle can be accelerated by software within a range of +2 m/s² to -3 m/s². For lateral control the signals of the active park assist are used. In particular, the use of the original equipment manufacturer's own systems for lateral and longitudinal control enables the use of this vehicle on public roads.
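The description above implies that any acceleration request issued by experimental software has to be limited to the stated envelope before it reaches the rerouted ACC interface. The following Python sketch illustrates such a clamping step; it is purely illustrative and not the FASCar® software itself, and the 16-bit payload scaling is a hypothetical placeholder.

```python
# Minimal sketch: clamp a requested longitudinal acceleration to the
# +2 m/s^2 ... -3 m/s^2 envelope stated above before handing it to the CAN layer.
ACCEL_MAX = 2.0    # m/s^2, upper software limit stated for FASCar-E
ACCEL_MIN = -3.0   # m/s^2, lower software limit stated for FASCar-E

def clamp_acceleration(request: float) -> float:
    """Limit a requested acceleration to the permitted envelope."""
    return max(ACCEL_MIN, min(ACCEL_MAX, request))

def encode_accel_frame(accel: float) -> bytes:
    """Pack the clamped command as a signed 16-bit value in steps of 0.01 m/s^2
    (hypothetical scaling, not the real message layout)."""
    raw = int(round(clamp_acceleration(accel) * 100))
    return raw.to_bytes(2, byteorder="big", signed=True)

if __name__ == "__main__":
    for request in (1.5, 4.0, -5.0):
        frame = encode_accel_frame(request)
        print(f"request {request:+.1f} m/s^2 -> "
              f"clamped {clamp_acceleration(request):+.1f}, payload {frame.hex()}")
```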
FASCar®-II
The FASCar®-II is a Volkswagen Passat which has a 2.0 diesel engine. It is equipped with the same set of sensors as FASCar®-E. Hardware differences between the two vehicles are the lateral and longitudinal control system as well as the HMI.
Lateral and longitudinal control
To achieve maximum intervention capability, FASCar®-II is equipped with a throttle paddle and a prototype of a brake booster, which support full longitudinal control without any restrictions. For lateral control and new HMI concepts, a steer-by-wire system is integrated in the vehicle. It allows, on the one hand, an active control of the vehicle wheels without a turning of the steering wheel and, on the other hand, a turning of the steering wheel without turning the vehicle wheels. This advantage can be used for new HMI concepts and automated security interventions, and it enables FASCar®-II not only to be used on test sites, but also in a simulator like the VR-Lab (virtual reality laboratory), see Figure 4.
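To make the decoupling idea concrete, the sketch below models a steer-by-wire mapping in which a software-defined coupling ratio relates hand-wheel angle to road-wheel angle and can be bypassed entirely. It is an illustration of the concept only; the ratio, the override interface and the function names are hypothetical and not taken from the FASCar®-II implementation.

```python
# Illustrative steer-by-wire mapping: the hand wheel and the road wheels are
# coupled only in software, so either side can be moved without the other.
from typing import Optional

def road_wheel_angle(hand_wheel_deg: float,
                     coupling_ratio: float = 1 / 15.0,
                     override_deg: Optional[float] = None) -> float:
    """Return the commanded road-wheel angle in degrees.

    coupling_ratio = 0.0 lets the hand wheel turn without steering the car;
    override_deg steers the car actively regardless of the hand wheel.
    """
    if override_deg is not None:
        return override_deg
    return hand_wheel_deg * coupling_ratio

print(road_wheel_angle(90.0))                      # normal driving: ~6 deg at the road wheels
print(road_wheel_angle(90.0, coupling_ratio=0.0))  # HMI demo: wheel moves, car does not steer
print(road_wheel_angle(0.0, override_deg=3.5))     # automated intervention: car steers, wheel does not
```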
Human Machine Interface (HMI)
Besides a freely configurable dashboard display such as in FASCar®-E, a steering wheel for HMI purposes replaces the original one. It has several freely programmable and illuminable buttons, which can be read out via a wireless connection to a PC.
Project Application Examples
The large-scale research facility FASCar® was and is used in several projects. This is only a short overview of some of the projects FASCar® was involved in:
InteractIVe
The project name interactIVe stands for accident avoidance by active intervention for Intelligent Vehicles. The European research project interactIVe took the next step towards the goal of accident-free traffic. interactIVe developed advanced driver assistance systems (ADAS) for safer and more efficient driving. interactIVe introduced safety systems that autonomously brake and steer. The driver is continuously supported by interactIVe assistance systems. They warn the driver in potentially dangerous situations. The systems do not only react to driving situations, but are also able to actively intervene in order to protect occupants and vulnerable road users. Seven demonstrator vehicles, six passenger cars of different vehicle classes and one truck, were built up to develop, test, and evaluate the next generation of safety systems (Heesen et al., 2015).
HAVEit
The project HAVEit aimed at the realization of the long-term vision of highly automated driving for intelligent transport. The project developed, validated and demonstrated important intermediate steps towards highly automated driving. HAVEit significantly contributed to higher traffic safety and efficiency for passenger cars, buses and trucks, thereby strongly promoting safe and intelligent mobility of both people and goods (Flemisch et al., 2011). The significant HAVEit safety, efficiency and comfort impact was generated by three measures:
• Design of the task repartition between the driver and co-driving system (ADAS) in the joint system.
• Failure-tolerant safe vehicle architecture including advanced redundancy management.
• Development and validation of the next generation of ADAS directed towards a higher level of automation as compared to the current state of the art.
MobiFAS
The example of browsing the internet with a tablet PC allowed researchers of the Institute of Transportation Systems to investigate how and under what circumstances control should be handed over from the vehicle to the driver. In case the vehicle is approaching a construction site, distraction of the driver can become a problem. In order to safely navigate the vehicle in this situation, the driver has to interrupt his or her activities and prepare for taking over responsibility for steering the vehicle. Even today every fourth car driver is distracted by the use of mobile devices during the drive. This can have catastrophic effects. How can a driver of a highly automated road vehicle be integrated in the driving task in a comfortable, fast and effective way? Answers to these questions are given by the MobiFAS project (Lapoehn et al., 2016).
Valet parking
Automation of vehicles provides new opportunities to develop novel concepts for an optimal combination of public and individual transportation, as well as for the introduction of electric cars that need coordinated recharging. A typical scenario of such a concept might be automatic drop-off and recovery of a car in front of a train station without taking care of parking or re-charging. Such new mobility concepts require, among other technologies, autonomous driving in designated areas. The objective of this project is to develop a smart car system that allows for autonomous driving in designated areas (e.g. valet parking, park and ride) and can offer advanced driver support in urban environments (Löper et al., 2013).
Instruments that measure psychosocial factors related to vaccination: a scoping review protocol
Introduction As vaccine-preventable disease outbreaks increase, there is growing international interest in monitoring public attitudes towards vaccination and implementing and evaluating vaccine promotion interventions. Outcome selection and measurement are central to intervention evaluation. Measuring uptake rates alone cannot determine which elements in a multicomponent vaccine-promotion intervention are most effective, why specific populations are undervaccinated or when confidence in vaccines is wavering. To develop targeted and cost-effective interventions and policies, it is necessary to measure vaccination-related psychosocial factors such as knowledge, attitudes and aspects of decision-making. This scoping review aims to identify, compare and summarise the properties and validation of instruments for measuring vaccination-related psychosocial factors and identify gaps where no instruments exist. Methods and analysis We will search Medline OVID, Embase OVID, CINAHL and PsycINFO with no date restriction, using a pilot-tested search strategy of terms related to vaccination: knowledge, attitudes, trust, acceptance and decision-making and measurement, psychometric testing or validation. This search will be supplemented with manual search and expert consultation. We will include studies that describe instrument development, adaptation or testing and include evaluation of at least two measurement properties (eg, content, criterion, or construct validity; test–retest reliability; internal consistency; sensitivity; responsiveness). Instruments measuring a vaccination-related psychosocial factor in any population will be included. All studies will be screened by one reviewer, with a sample double-screened to confirm accuracy. Disagreements will be resolved with a third reviewer. Data will be synthesised narratively and through summary tables to chart and compare instrument characteristics such as factors measured, date and/or location of development or validation, measurement properties evaluated and population. Ethics and dissemination This scoping review aims to provide an overview of existing instruments and ascertain measurement gaps where no measurement instruments currently exist. The identified instruments will form the basis of an open-access online repository of instruments.
Introduction
Outbreaks of vaccine-preventable diseases are a growing international crisis, with worldwide measles cases increasing by 300% from 2018 to 2019. 1 Undervaccination and nonvaccination are driven by barriers to access and vaccine hesitancy, with the WHO naming the latter as a threat to global health. 2 3 Now, more than ever, there is urgent global focus on the development, implementation and evaluation of interventions and policies to increase vaccine uptake.
Vaccine uptake, like other health behaviours, is shaped by communication, interaction and psychosocial factors. 4 5 The language around these factors can vary depending on the context or discipline. Building on our earlier taxonomic work in this area, we consider 'psychosocial factors' to include knowledge, attitudes, values, self-efficacy, vaccine confidence, trust and aspects of individual decision-making. 4 We also use the term 'factors' here, though they may be referred to as 'constructs' in a psychometric context, or 'outcomes' in intervention evaluation.
A variety of theoretical models aim to describe the ways in which these psychosocial factors mediate the impact of communication and other interventions on health behaviour. For example, the Health Belief Model, Theory of Planned Behaviour and Social Cognitive Theory highlight the interplay between specific factors including self-efficacy, perceived risks and benefits, attitudes, beliefs, subjective norms and knowledge. [6][7][8] Models of shared decision-making also suggest that health behaviour can be shaped by the decision-making experience itself, through quantifiable factors such as anticipated regret, decisional conflict or satisfaction with the process. 4 For vaccination specifically, additional factors like confidence, trust and values have been shown to be linked to behaviour. [9][10][11]

Vaccine uptake is generally the ultimate goal of public health policy and intervention. However, understanding and being able to measure psychosocial factors are important at every level, from governments monitoring population health and evaluating interventions to clinicians tailoring communication approaches. For instance, measuring vaccine uptake rates alone cannot establish an intervention's mechanism of effect, or tell us which elements in a multicomponent intervention are most effective and which are unnecessary or even harmful. 12 13 Along with assessment of practical barriers, measuring vaccination-related psychosocial factors is also necessary to identify target populations, determine potential reasons for undervaccination and inform the design of tailored and cost-effective interventions. 14 Uptake rates are insufficient for monitoring public sentiment towards vaccination: to detect worrying trends before they become vaccine refusal, regular monitoring of factors like vaccine confidence and trust is required. 11 15 Finally, ensuring that people are informed, supported and satisfied with their healthcare decision-making experiences is an ethical and human rights imperative that can also be quantifiably assessed. 16

Despite the importance of these psychosocial factors in explaining or shaping vaccine-uptake behaviour, they are frequently overlooked in intervention, policy and programme evaluations. 17 Evaluations may not have sufficient resources or expertise to incorporate such additional measures, evaluators may not be aware of or may not value additional measures, or there may be no easily accessible or identifiable instruments with which to measure a concept. Even when these additional psychosocial factors are measured, it is often with instruments that are developed ad-hoc, incompletely validated or used only once. These instruments are then difficult to find, interpret or apply in future studies. Previous reviews of the effect of communication interventions at both individual and community levels found significant heterogeneity in outcome measures, indicating the need for greater standardisation to enable better comparison. 18 19 A selection of seven vaccine acceptance or hesitancy measures has been briefly summarised elsewhere. 10 However, there is no broad overview of the instruments available for measuring the full range of psychosocial factors related to vaccination, including knowledge, attitudes, values, self-efficacy, vaccine confidence, trust and decision-making.
Therefore, this systematic scoping review aims to (a) identify, compare and summarise the properties and validation of instruments to measure vaccination-related psychosocial factors and (b) identify gaps in the factors for which instruments exist.
Methods and analysis
This scoping review will apply the framework developed by Arksey and O'Malley and further expanded by the Joanna Briggs Institute. 20 21 This framework involves the following stages: define the review aim and eligibility criteria, identify relevant studies, screen and select studies, extract and chart the data and collate and summarise the results. The initial stages, through study selection, will also draw from the COSMIN methodology for systematic reviews of Patient-Reported Outcome Measures. 22 Reporting will follow the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews checklist. 23 The estimated review completion date is 31 March 2020.
Eligibility criteria
Studies meeting the following criteria will be included:
► Study type: any published studies which aim to (1) develop a new instrument OR (2) adapt, translate or test an existing instrument in a new population; AND (3) evaluate at least two measurement properties (eg, content validity, criterion validity, construct validity, test-retest reliability, internal consistency, predictive validity, sensitivity, or responsiveness). If a study tests only one measurement property but another study tests one or more other properties of the same instrument, these will be included. Face validity is not considered a relevant measurement property for the purposes of determining study inclusion because it is a subjective judgement made by individuals with no expertise in the subject area. 24
► Factors: the instrument measures one or more psychosocial factors relevant to vaccination, including but not limited to knowledge, attitudes, values, self-efficacy, vaccine confidence, trust and aspects of individual decision-making.
► Population: any population eligible for vaccination or responsible for making vaccination decisions for others (including healthcare workers, students, parents, adolescents, children, pregnant women and elderly people).
There will be no date, location or language restrictions in our search. Where studies published in languages other than English are identified, we will use freely-available online translation tools to enable screening for relevance. We will contact study authors and/or seek full translation for relevant non-English studies if resources allow.
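The inclusion rule above combines a study-type condition with a minimum number of evaluated measurement properties, plus an exception for instruments whose properties are spread across several studies. The short Python sketch below only illustrates that decision logic for hypothetical screening records; the field names and data structure are invented for the example and are not part of the protocol.

```python
# Illustrative sketch of the inclusion rule applied to a hypothetical screening record.
MEASUREMENT_PROPERTIES = {
    "content validity", "criterion validity", "construct validity",
    "test-retest reliability", "internal consistency",
    "predictive validity", "sensitivity", "responsiveness",
}

def meets_inclusion_criteria(study, pooled_properties=frozenset()):
    """The study develops or adapts/translates/tests an instrument, AND at least
    two measurement properties are evaluated for that instrument, counting
    properties evaluated for the same instrument in other studies."""
    develops_or_adapts = study.get("develops_new") or study.get("adapts_or_tests_existing")
    evaluated = set(study.get("properties_evaluated", ())) & MEASUREMENT_PROPERTIES
    combined = evaluated | (set(pooled_properties) & MEASUREMENT_PROPERTIES)
    return bool(develops_or_adapts) and len(evaluated) >= 1 and len(combined) >= 2

study = {"develops_new": True, "properties_evaluated": ["internal consistency"]}
print(meets_inclusion_criteria(study))                          # False: only one property
print(meets_inclusion_criteria(study, {"construct validity"}))  # True: second property tested elsewhere
```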
Search strategy
We searched the following electronic databases from inception to August 7, 2019: Medline OVID, Embase OVID, CINAHL and PsycINFO. The search strategy includes index and text words related to vaccination or immunisation, knowledge, attitudes, acceptance and decision-making and measurement, psychometric testing or validation. Key MeSH terms were drawn from a framework of vaccination communication outcome domains (ie, knowledge, attitudes, decision-making). 4 Specific search keywords were identified from titles, keywords and abstracts of a sample of approximately 50 previously identified relevant studies across the range of psychosocial factors. Using these relevant studies, we pilot-tested and refined the search strategy to ensure it will be both sensitive and focused. The full search strategy is included as an appendix (online supplementary additional file 1).
We will also review the reference lists of relevant studies and consult experts in the field, identified through the authors' international networks, to identify any additional references or links to instruments.
Study selection
Search results will be loaded into Endnote X8 (Clarivate Analytics 2016) and duplicates will be removed. Studies will be screened first by title and abstract, with potentially eligible studies screened by full text. One author (JK) will conduct the primary screening. A second author will screen a sample of the results to compare and confirm the screening approach. Other authors will be asked to provide input where screening decisions are not straightforward.
Data extraction
One review author (JK) will extract data from all included studies, using a standardised data extraction template developed for this review (online supplementary additional file 2). The extracted data will include:
► Citation details (eg, title, year of publication, authors).
► Type of study (instrument development and initial validation, validation of adapted or translated instrument, validation in a new population).
► Instrument title and/or abbreviation.
► Summary of instrument topic/purpose (eg, 'parental decision-making about human papillomavirus vaccine for their daughters').
► Subscales, including definition provided by the authors and number of items per subscale.
► Number of total items.
► Ranges of scores/scoring description.
► Population of intended use.
► Population of validation (sample size, age, gender, setting, country, language).
► Country of initial development.
► Available languages.
► Accessibility (location/cost).
► Linked references (ie, other validation studies related to this tool or other versions of the tool).
► Measurement properties evaluated (content validation, criterion validation, construct validation, test-retest reliability, internal consistency, predictive validation, sensitivity and responsiveness).
Any uncertainties, for example, about the nature of measurement property assessment, will be raised with the other authors for discussion and resolution.
In keeping with standard scoping review methodology, we will assess the degree of validation for each tool by reporting which measurement properties have been evaluated, but will not appraise the quality of the specific validation methods used. 23 25

Data synthesis strategy
The data will be synthesised narratively and through summary tables which will chart the characteristics of the instruments for ease of comparison. The synthesis will provide an overview of the instruments measuring each factor, with specific subanalyses organised by relevant features such as date and/or location of development or validation and population of intended use. Comparative tables will be used where relevant. For each tool, we will summarise the measurement properties evaluated using tables similar to the Cochrane risk of bias summary figures for intervention reviews. 26 If many instruments measuring the same factors are identified, their key differences and similarities will be explored with more detailed analysis. To identify gaps, that is, factors that are not measured in any identified instruments, the instruments will be mapped against a taxonomy of outcomes relevant to vaccination communication. 4

Patient and public involvement
The development of this scoping review protocol did not involve patients or the public.
Ethics and dissemination
This scoping review is intended to help researchers, policymakers, vaccination programme officials and other stakeholders identify appropriate, fit-for-purpose instruments to gather population data and evaluate vaccine promotion strategies. Healthcare practitioners may also find useful instruments to apply as waiting-room screening tools to determine people's potential vaccine hesitancy, knowledge or values and inform their communication strategies. This review will also highlight gaps where there are no available instruments to measure specific factors. There are no ethical considerations related to this review.
The results of this review will form the basis of an open access online repository of instruments to be developed through the Measurement Outcomes for Vaccination Evaluations project. 27
Universality class of site and bond percolation on multi-multifractal scale-free planar stochastic lattice
In this article, we investigate both site and bond percolation on a weighted planar stochastic lattice (WPSL), which is a multi-multifractal and whose dual is a scale-free network. A characteristic property of percolation is that it exhibits threshold phenomena: we find a sudden or abrupt jump in the spanning probability across $p_c$, accompanied by the divergence of some other observable quantities, which is reminiscent of a continuous phase transition. Indeed, percolation is characterized by the critical behavior of the percolation strength $P(p)\sim (p_c-p)^\beta$, the mean cluster size $S\sim (p_c-p)^{-\gamma}$ and the system size $L\sim (p_c-p)^{-\nu}$, which are known as the equivalent counterparts of the order parameter, susceptibility and correlation length respectively. Moreover, the cluster size distribution function $n_s(p_c)\sim s^{-\tau}$ and the mass-length relation $M\sim L^{d_f}$ of the spanning cluster also provide useful characterizations of the percolation process. We obtain an exact value for $p_c$ and for all the exponents such as $\beta, \nu, \gamma, \tau$ and $d_f$. We find that, except for $p_c$, all the exponents are exactly the same in both bond and site percolation, despite the significant difference in the definition of clusters and other quantities. Our results suggest that percolation on the WPSL belongs to a new universality class, as its exponents do not share the same values as those of all the existing planar lattices and, as in the other cases, its site and bond percolation belong to the same universality class.
I. INTRODUCTION
Percolation is perhaps one of the most studied problems in statistical physics. This is not only because of the simplicity of its definition but also because of the versatility of its applications. To study percolation one needs to choose a skeleton first. It can be a lattice or a graph that has two entities, namely sites (nodes) and bonds (edges). We then occupy each site or bond, depending on whether we want to study site or bond percolation, with probability $p$ independent of the state of its neighbors [1,2]. Broadbent and Hammersley in 1957 first presented the percolation model to understand the motion of gas molecules through the maze of pores in carbon granules filling a gas mask [3]. Since then the intuitive idea of percolation has been found relevant to so many seemingly disparate systems that its concept has literally percolated across a vast area of science and social science. Examples include the flow of fluid in porous media, infiltration in composite materials processing, and the spread of fluids, rumours, opinions, and biological and computer viruses, to mention just a few [4][5][6][7][8][9][10][11].
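As a purely illustrative aside (not part of the original text), the occupation step just described can be written in a few lines of Python: each site of a finite lattice is occupied independently with probability p.

```python
# Illustrative sketch: site percolation on an L x L square lattice, where each
# site is occupied independently with probability p.
import random

def occupy_sites(L, p, seed=0):
    """Return an L x L grid of booleans; True marks an occupied site."""
    rng = random.Random(seed)
    return [[rng.random() < p for _ in range(L)] for _ in range(L)]

grid = occupy_sites(L=10, p=0.59)
print(sum(cell for row in grid for cell in row), "of", 10 * 10, "sites occupied")
```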
Besides the simplicity of its definition and the versatility of its application, there exists yet another reason why the percolation model is so popular. In percolation we primarily observe how clusters, sets of contiguous occupied sites, are formed and grow as a function of $p$, which is the only control parameter. As the value of $p$ increases from negligibly small values, there appears for the first time a cluster that spans across the entire system. In the case of infinite system size, we find a unique threshold value $p_c$ such that the probability of finding a spanning cluster is $W(p) = 0$ for $p \leq p_c$ and $W(p) = 1$ for $p > p_c$. Interestingly, although such a transition is geometric in nature, we find many of its aspects reminiscent of a continuous thermal phase transition (CTPT) [12,13]. Thus, percolation serves as a relatively tractable model for the investigation of phase transitions and critical phenomena that lie at the heart of the modern development of statistical physics. This is perhaps the most important reason why percolation is still studied extensively even after almost 60 years of its inception.
Indeed, for almost every observable quantity in percolation there exists an equivalent counterpart in CTPT. These observables, like their counterparts in CTPT, exhibit power laws, at least near $p_c$, which are typically attributed to critical phenomena. For instance, the system size $L$ is like the correlation length, $L \sim (p - p_c)^{-\nu}$; the mean cluster size $S$ is like the susceptibility, $S \sim (p - p_c)^{-\gamma}$; the percolation strength $P$ is like the order parameter, $P \sim (p - p_c)^{\beta}$; etc. Like a thermal phase transition, a percolation transition too can be classified in terms of $p_c$ and a set of critical exponents $\beta$, $\gamma$, $\nu$, etc. One of the extraordinary findings in percolation is that the numerical values of its critical exponents depend neither on the detailed nature of the lattice structure nor on the type of percolation, bond or site. Their values depend only on the dimension of the embedding space of the lattice. It is, therefore, said that percolation on all planar lattices belongs to the same universality class.
A unique universality class has been found for a variety of periodic and non-periodic planar lattices having fixed and mixed-valued coordination numbers, random planar lattices and their duals, random multifractal lattices, etc. [14][15][16][17] (see also Ref. [18], which is the most recent review article). Yet, have we exhausted all the possible lattices to conclude that percolation on all planar lattices belongs to the same universality class? The answer is no. Recently, we have reported that site percolation on a weighted planar stochastic lattice (WPSL) belongs to a separate and distinct universality class [19]. The WPSL is quite non-trivial as it has mixed properties of both a lattice and a network or graph [20]. On the one hand, unlike networks, it is embedded in a space of dimension $d = 2$; on the other hand, unlike a regular lattice, its coordination number distribution obeys a power law. We found that the critical exponents for site percolation on the WPSL are totally different from the known values for all other planar lattices studied to date. We, therefore, claim that random site percolation on the WPSL belongs to a separate and distinct universality class.
In this article, we investigate bond percolation on the WPSL and present detailed results of its site counterpart in order to see the contrast. One of the goals of the present article is to check whether bond and site percolation on the WPSL belong to the same universality class, as they do for all known planar lattices studied to date. First, we find the percolation threshold $p_c$, for both bond and site percolation, using the idea of the spanning probability $W(p)$. Second, we attempt to find an estimate for the various critical exponents such as $\nu$, $\beta$ and $\gamma$ using the finite-size scaling hypothesis, where a precise value of $p_c$ is necessary. Then, we use the idea of data collapse for further fine-tuning of the estimated values for the exponents till we get the best data collapse. Besides critical exponents, we also find the exponent $\tau$ that characterizes the cluster size distribution function $n_s(p_c) \sim s^{-\tau}$ and the fractal dimension $d_f$ that characterizes the mass of the spanning cluster $M(p_c) \sim L^{d_f}$. Note that the values of the various critical exponents and the exponents $\tau$, $d_f$, etc. are not at all independent; rather, they are bound by some scaling relations. We use these scaling relations for a self-consistency check. We find that our estimates for the various exponents satisfy these relations to quite a good extent. Our results, based on extensive Monte Carlo simulation, suggest that both site and bond percolation on the WPSL belong to the same universality class and that it is different from the one to which percolation on all the other planar lattices belongs.
The rest of the article is organized as follows. In section II, we discuss the algorithm for the construction of WPSL and some of its key features. In section III, we briefly discuss the Newman-Ziff algorithm as it is the most efficient algorithm for percolation. We also discuss the finite-size scaling and underline its deep connection to the Buckingham Π-theorem in section IV. In section V, we present our results about bond and site percolation on the WPSL side by side so that we can appreciate the contrast. Finally, we summarize our results in section VI.
II. WPSL AND ITS PROPERTIES
We first give a brief description of the construction process of the WPSL. It starts with an initiator which we choose to be a square of unit area. The generator is then defined as the one that divides the initiator (in step one) randomly into four smaller blocks. In step two and thereafter the generator is applied to only one of the blocks, picked preferentially with respect to their areas. Consider the $t$th time step of the generation of the WPSL, at which the system has $3t-2$ blocks available whose areas are, say, $a_1, a_2, a_3, \ldots, a_{3t-2}$. To pick one of the $3t-2$ blocks we subdivide an interval of unit length $[0,1]$ into $(3t-2)$ sub-intervals $[0, a_1], [a_1, a_1+a_2], \ldots, [\sum_{i=1}^{3t-3} a_i, 1]$ so that the larger the area, the greater the size of the sub-interval. We then generate a random number, say $R$, from the interval $[0,1]$, find which of the $(3t-2)$ sub-intervals contains this $R$, and pick the corresponding block. This process ensures that the blocks are picked preferentially according to their size. In Fig. (1) we give a snapshot of the lattice to give a visual impression of how it actually looks at any given time. It is a space-filling planar cellular structure where the sizes or areas of the cells in the lattice are not equal; rather, their distribution is random. This is in sharp contrast to many of the cellular structures that we are familiar with. One advantage of creating the WPSL by random sequential partitioning of the square into ever smaller mutually exclusive rectangular blocks is that it allows each step of the division process to be defined as one time unit. The number of blocks $N$ at time $t$ therefore is $N = 1 + 3t$ and hence it grows, albeit the sum of the areas of all the blocks is always equal to the size of the initiator. Thus, the number of blocks $N$ increases with time at the expense of the size of the blocks.
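A minimal sketch of this construction is given below. It follows the steps described above, except that the preferential selection is implemented with a weighted random choice, which is equivalent to the sub-interval trick; the code is illustrative only and is not the authors' implementation.

```python
# Illustrative sketch of the WPSL construction: start from a unit square and,
# at each step, pick one block preferentially by area and split it into four
# smaller rectangles with a randomly placed division point.
import random

def generate_wpsl(steps, seed=1):
    """Return a list of blocks (x, y, width, height) after `steps` divisions."""
    rng = random.Random(seed)
    blocks = [(0.0, 0.0, 1.0, 1.0)]                  # the initiator: a unit square
    for _ in range(steps):
        areas = [w * h for (_, _, w, h) in blocks]
        # pick a block with probability proportional to its area
        i = rng.choices(range(len(blocks)), weights=areas, k=1)[0]
        x, y, w, h = blocks.pop(i)
        cx, cy = rng.uniform(0, w), rng.uniform(0, h)  # random division point
        blocks += [(x, y, cx, cy), (x + cx, y, w - cx, cy),
                   (x, y + cy, cx, h - cy), (x + cx, y + cy, w - cx, h - cy)]
    return blocks

blocks = generate_wpsl(steps=2000)
# N = 1 + 3t blocks, total area conserved and equal to that of the initiator
print(len(blocks), "blocks; total area =", round(sum(w * h for *_, w, h in blocks), 6))
```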
Recently, we have shown that the area size distribution of the blocks of the WPSL obeys dynamic scaling, where we found $\theta = 2$ and $z = 1$ [21]. It implies that the snapshots of the lattice at different times are similar. Yet another interesting property of this lattice is that the dynamics of the system is governed by infinitely many conservation laws, one of which is the conservation of the total area.
To be more precise, if we denote $x_i$ and $y_i$ as the length and width of the $i$th block, then we can show analytically that $M_n = \sum_{i=1}^{N} x_i^{n-1} y_i^{4/n-1}$ assumes statistically a constant value regardless of the time $t$ when the snapshot is taken [20]. We have also shown that, except for the conservation of the total area, each of the infinitely many conserved quantities is a multifractal measure. That is, we can assume that the $i$th block of the lattice is populated with probability $p_i \propto x_i^{n-1} y_i^{4/n-1}$. We have shown that within the multifractal formalism we can construct the partition function, which is the $q$th moment of $p_i$, i.e., $Z_q = \sum_i p_i^q$. Measuring $Z_q$ as a function of the square root of the mean block area, $\delta = \sqrt{\text{(area of the initiator)}/\text{(total number of blocks)}}$, one can show that $Z_q$ exhibits a power law with exponent
$\tau(q, n) = \sqrt{(4/n - n)^2 q^2 + 16} - \big((4/n + n - 2)q + 2\big)$. (5)
One of the characteristic features of this exponent is that it is non-linear $\forall n$ except $n = 2$. Note that the exponent $\tau(q,n)$ has two interesting properties. First, $\tau(q,n) = 2\ \forall n$ at $q = 0$, which is the dimension of the embedding space of the WPSL. Second, $\tau(q,n) = 0\ \forall n$ at $q = 1$, as required by the normalization condition [22]. The Legendre transform of $\tau(q,n)$ is a method whereby its derivative can be considered as an independent variable instead of $q$ itself. In general, if we denote $\alpha$ as the slope and $f$ as the intercept of the tangent to $\tau(q)$, then the equation for the straight line is $\tau = \alpha q + f$, so that $\alpha(q) = d\tau/dq$ and $f(\alpha) = \tau(q) - q\,\alpha(q)$. The function $f(\alpha)$ is the Legendre transform of the function $\tau(q)$, which is always concave in character. It implies that for every $n$ value there exists a spectrum of spatially intertwined fractal dimensions $f(\alpha(q,n))$ which are needed to characterize the WPSL, except for $n = 2$. Note that the maximum of $f(\alpha, n)$ occurs at $q = 0$, which corresponds to the dimension of the embedding space of the WPSL when blocks are assumed empty. We thus find that the WPSL is a multi-multifractal planar lattice.
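For illustration, a small sketch of how the partition function $Z_q$ of such a measure can be evaluated on a given set of blocks; it assumes a list of (width, height) pairs, for example `[(w, h) for _, _, w, h in generate_wpsl(...)]` from the construction sketch above, and the function name is hypothetical. Fitting $\log Z_q$ against $\log\delta$ over a range of lattice sizes then yields the mass exponent $\tau(q, n)$.

```python
def partition_function(blocks, q, n):
    """Compute Z_q = sum_i p_i**q for the multifractal measure
    p_i ∝ x_i**(n-1) * y_i**(4/n - 1) on a list of (width, height) pairs.
    """
    weights = [x ** (n - 1) * y ** (4.0 / n - 1) for (x, y) in blocks]
    norm = sum(weights)            # proportional to the conserved quantity M_n
    return sum((w / norm) ** q for w in weights)
```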
Besides, the WPSL is a planar cellular structure whose cells or blocks have coordination number disorder, in the sense that, unlike a regular lattice, they have a great many different numbers of neighbours. In fact, its coordination number distribution exhibits a power law [20]. This is in sharp contrast to the coordination number distribution in the Voronoi diagram, where it is also random but peaked around the mean [23]. In the Voronoi diagram it is almost impossible to find cells or blocks which have significantly more or fewer neighbours than the mean coordination number; that is, there the mean describes the characteristic scale. Such a characteristic scale is absent in the WPSL since the distribution function follows a power law. The power-law coordination number distribution also means that the majority of the blocks in the WPSL are very poor in coordination number and that there are a few cells or blocks which have a significantly high number of nearest neighbours. A lattice so rich in properties can be of great interest, as it can mimic a disordered medium on which one can study problems like percolation or random walks. In brief, the WPSL has the following properties: i) Its area size distribution function obeys dynamic scaling.
ii) It obeys infinitely many conservation laws.
iii) It is a multi-multifractal.
iv) Its coordination number distribution function obeys power-law.
III. NEWMAN-ZIFF ALGORITHM
In the standard algorithms, such as the Hoshen-Kopelman (HK) algorithm, one must create an entirely new state for every given value of the occupation probability p in every independent realization. Investigating the various observables using such traditional algorithms is highly expensive both in computational time and in the accuracy of the resulting quantities. In 2000, Newman and Ziff (NZ) proposed an algorithm which is highly efficient on both accounts [24]. The efficiency of the NZ algorithm lies in the fact that one creates a new state with n + 1 occupied sites or bonds from the immediately preceding state with n occupied sites or bonds simply by occupying one extra randomly chosen site or bond. It is based on the intuitive idea of random sequential adsorption of sites or bonds on a given lattice or graph. The algorithm is trivially simple. One starts with an empty lattice. Then at each step a site or bond is chosen at random and occupied if it is still empty; otherwise the attempt is discarded. However, in order to further reduce the computation time we first decide the order in which the sites or bonds will be occupied. That is, we choose a random permutation of the bonds or sites. This is done by creating a list of all the bonds (or sites) in any convenient order, with positions in this list numbered 1, 2, ..., M. For each i = 1, 2, ..., M we choose a number j at random with uniform probability in the range i ≤ j ≤ M and swap the entries at positions i and j, which puts the M elements into the random order in which they will be occupied. Having chosen an order for all the sites, we start occupying them in that order. The first site or bond to be occupied will definitely form a cluster of size one. The second, third, fourth etc. are also highly likely to form clusters of size one. However, the likelihood of forming clusters of size one decreases with the number of occupied sites, since some sites, when occupied, will become contiguous with already occupied sites, thus making clusters of size larger than one.
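The following is a minimal sketch of the bookkeeping behind the NZ approach for bond percolation: bonds are occupied in a random order and clusters are merged with weighted union-find (with path halving). The function name, the representation of bonds as pairs of site indices, and the choice of tracked observable (the largest cluster size) are assumptions made for illustration.

```python
import random

def newman_ziff_bond(num_sites, bonds, seed=None):
    """Occupy the given bonds one at a time in random order, merging clusters
    with weighted union-find, and yield the largest cluster size after each
    added bond.  `bonds` is a list of (i, j) pairs of site indices.
    """
    rng = random.Random(seed)
    order = list(bonds)
    rng.shuffle(order)                      # random permutation of the bonds

    parent = list(range(num_sites))         # each site starts as its own cluster
    size = [1] * num_sites
    largest = 1

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for (i, j) in order:
        ri, rj = find(i), find(j)
        if ri != rj:                        # merge the smaller cluster into the larger
            if size[ri] < size[rj]:
                ri, rj = rj, ri
            parent[rj] = ri
            size[ri] += size[rj]
            largest = max(largest, size[ri])
        yield largest
```

A single pass of this generator yields microcanonical data for every occupation number n at once, which is precisely what makes the algorithm efficient.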
The formation of clusters and the statistics of their sizes are the key to the study of percolation theory. In the case of the NZ algorithm we measure an observable, say O, for fixed numbers of occupied sites (or bonds) and obtain data for O as a function of the occupation number n. This is in sharp contrast with the HK algorithm, where the number of sites occupied at a given p is random and different in every independent realization. However, if the system size is large enough then the mean occupation number is almost equal to pN, where N represents the system size. The weight factors for obtaining different n at a given p are not the same. The exact weight of there being exactly n occupied sites on the lattice for a given p is given by the binomial distribution
$B(N, n, p) = \binom{N}{n} p^n (1-p)^{N-n}$.
The binomial coefficient $\binom{N}{n}$ represents the number of possible configurations of n occupied sites and N − n empty sites. Using this and the data for the observable O for all values of n, we can find O for any value of p by the following relation
$O(p) = \sum_{n=0}^{N} \binom{N}{n} p^n (1-p)^{N-n} O_n$.
It is interesting to note that the ensemble of states with exactly n occupied sites or bonds obtained according to the NZ algorithm can be referred to as a microcanonical percolation ensemble, where the number n is the equivalent counterpart of the energy E in thermal statistical mechanics. On the other hand, if we keep p fixed instead of n, we can regard it as the canonical ensemble.
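A minimal sketch of that convolution, assuming the microcanonical data O_n for n = 1, ..., N are already available (for instance from the union-find sketch above); log-space binomial weights are used so that large N does not overflow, and 0 < p < 1 is assumed.

```python
import math

def canonical_average(microcanonical, p):
    """Convolve microcanonical data O_n (n = 1..N) with the binomial
    distribution B(N, n, p) to obtain the canonical average O(p).
    Assumes 0 < p < 1; weights are normalized over the available n values.
    """
    N = len(microcanonical)
    log_p, log_q = math.log(p), math.log(1.0 - p)
    log_weights = [
        math.lgamma(N + 1) - math.lgamma(n + 1) - math.lgamma(N - n + 1)
        + n * log_p + (N - n) * log_q
        for n in range(1, N + 1)
    ]
    m = max(log_weights)                     # shift for numerical stability
    total, acc = 0.0, 0.0
    for lw, value in zip(log_weights, microcanonical):
        w = math.exp(lw - m)
        total += w
        acc += w * value
    return acc / total
```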
IV. FINITE-SIZE SCALING AND Π-THEOREM
We offer here a brief introduction to the spirit and scope of the scaling approach to phase transitions and critical phenomena in general. It is well known as the finite-size scaling (FSS) hypothesis. It has been extensively used as a very powerful tool for estimating finite-size effects near the threshold value of the controlling parameter. In a continuous phase transition, the various response functions, typically the second derivatives of the free energy, diverge. Such transitions are classified by a set of critical exponents. The best known example of a continuous phase transition is the paramagnetic to ferromagnetic transition, where it has been found that the order parameter, the susceptibility and the correlation length behave as
$m \sim (T_c - T)^{\beta}$, $\chi \sim |T - T_c|^{-\gamma}$, $\xi \sim |T - T_c|^{-\nu}$.
In percolation, their equivalent counterparts are
$P \sim (p - p_c)^{\beta}$, $S \sim |p - p_c|^{-\gamma}$, $\xi \sim |p - p_c|^{-\nu}$.
These relations are only true in the thermodynamic limit, in the sense that the system size is infinite. It is important to appreciate the fact that we can neither do experiments nor simulations on infinite systems; near the critical point the correlation length is cut off by the system size, $\xi \sim L$. To overcome this impediment, physicists have come up with a smart solution which is known as finite-size scaling. In general, an observable quantity, say X, of a threshold phenomenon that exhibits a continuous phase transition is said to obey finite-size scaling if it satisfies
$X(x, L) \sim L^{a}\,\phi\big((x - x_c)L^{1/\nu}\big)$,
where $a$ and $\nu$ are said to be critical exponents. It provides an elegant way of extrapolating critical exponents for the infinite system from a set of data for finite systems using the idea of data collapse. We shall here show that the origin of the FSS theory is actually deeply rooted in the Buckingham Π-theorem, as it can be obtained systematically following the prescription of that theorem [25]. Consider that a quantity X is the primary quantity of interest which depends on the control parameter x and the system size L, so that we can write $X = \Psi(x, L)$. Note that in the case of threshold phenomena, where there is a critical or threshold value $x_c$ across which the system undergoes a sudden or abrupt change, we find that the distance $x - x_c$ is a better variable than x itself. Indeed, the observable quantity X is found to depend on $x - x_c$ and hence we write $X = \Psi(x - x_c, L)$. We almost always find that the quantity $x - x_c$ diminishes with L following a power law, $(x - x_c) \sim L^{-a}$. It implies that we can choose one of the parameters, say L, to have an independent dimension. Thus the dimension of X too can be expressed in terms of L alone, $[X] = L^{b}$. Following the argument of the Π-theorem we can now define two dimensionless quantities, $\xi = (x - x_c)L^{a}$ and $\phi = X L^{-b}$. Note that $\phi$, being a dimensionless quantity, must keep its numerical value, for a given value of $\xi$, even if we change L by an arbitrary factor, and hence $\phi(\xi, L) = \phi(\xi)$. We can thus immediately write that
$X(x, L) \sim L^{b}\,\phi\big((x - x_c)L^{a}\big)$.
The reduction of the initially two-variable problem into a one-variable problem constitutes the basic statement of the Buckingham Π-theorem. This is traditionally known as a hypothesis in the literature, namely the finite-size scaling hypothesis. A quantitative way of checking whether experimental data exhibit finite-size scaling is the data-collapse method, an idea that goes back to the original observation of Rushbrooke [12]. The plots of X(x, L) vs x for different L always result in distinct curves. However, the same data can be made to collapse onto a single universal curve if one plots $XL^{-b}$ vs $(x - x_c)L^{a}$ instead of X(x, L) vs x, regardless of the size L. The quality of the data collapse depends on how accurate the values of $x_c$ and the exponents a and b are.
Data collapse means that the characteristic properties of the system represented by X are similar for different system sizes L. Note that two systems of different sizes are said to be similar if they differ in the numerical values of their dimensional quantities X and x, while the numerical values of the corresponding dimensionless quantities $XL^{-b}$ and $(x - x_c)L^{a}$ coincide, and that is why we obtain data collapse. Obtaining data collapse guarantees that the system exhibits scaling or similarity with respect to different independent system sizes. It is an extension of the idea of the similarity of two triangles. For instance, two right triangles (characterized by their area S, the sides a and b and the hypotenuse c) may differ in the numerical values of their dimensional quantities. Now, one can vary b keeping a fixed and measure S for both triangles. Plotting S as a function of b will definitely give two distinct curves, one for each. However, the plots of the corresponding dimensionless quantities $S/c^2$ vs $b/c$ will give rise to a single universal curve, since the numerical value of $S/c^2$ always coincides for a given value of the acute angle θ regardless of the size of the triangle. This happens because the triangles are similar.
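As an illustration of how the data-collapse test is carried out in practice, here is a small plotting sketch (assuming matplotlib is available); the trial values of x_c, a and b are tuned by hand or by a simple search until the curves for different L overlap.

```python
import matplotlib.pyplot as plt

def plot_data_collapse(datasets, x_c, a, b):
    """Plot X*L**(-b) against (x - x_c)*L**a for several system sizes.

    `datasets` maps a linear size L to a pair of lists (x_values, X_values).
    If x_c, a and b are chosen well, all curves fall on top of each other.
    """
    for L, (xs, Xs) in sorted(datasets.items()):
        rescaled_x = [(x - x_c) * L ** a for x in xs]
        rescaled_X = [X * L ** (-b) for X in Xs]
        plt.plot(rescaled_x, rescaled_X, label=f"L = {L}")
    plt.xlabel(r"$(x - x_c)\,L^{a}$")
    plt.ylabel(r"$X\,L^{-b}$")
    plt.legend()
    plt.show()
```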
V. SITE/BOND PERCOLATION ON WPSL
What are sites and bonds in the WPSL? Before answering this question we find it worth discussing first what they are in the context of conventional lattices. For instance, we can regard a square lattice as a grid or mesh. Each cell of the grid has four sides and each side is a common border of exactly two cells. In the case of the square grid, we can thus regard each cell as a site since it contains exactly one lattice point. Equivalently, we could also regard the vertices of each cell as sites; however, in the present context we stick to the former definition. The dual of the square grid is obtained by replacing the center of each cell by a node and the common border between neighbouring cells by a link connecting the two nodes. We can thus regard the links of the dual as the bonds of the square lattice. Following the same argument, we regard the blocks of the WPSL as its sites, not the vertices of the lines that tessellate the initiator. To define bonds, we first find the dual of the WPSL. It is obtained by replacing the center of each block by a node and the common border between two neighbouring blocks by a link connecting the corresponding nodes. We regard these links as the bonds of the WPSL. Using these ideas we first performed site and bond percolation on the square lattice and reproduced all the known results, and then we applied them to the WPSL.
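For concreteness, a sketch of how the bond list (the dual links) can be extracted from a set of blocks such as those produced by the construction sketch earlier. Two blocks are taken to share a bond when they touch along a vertical or horizontal edge over a segment of non-zero length; the quadratic-time pairwise scan and the tolerance handling are simplifications for illustration only.

```python
def wpsl_bonds(blocks, tol=1e-12):
    """Build the bond list of the WPSL from its blocks.

    Each block is (x0, y0, w, h).  Two blocks are joined by a bond when they
    touch along a vertical or horizontal edge and their extents overlap over
    a segment of non-zero length.  Returns a list of (i, j) index pairs.
    """
    def overlap(a0, a1, b0, b1):
        return min(a1, b1) - max(a0, b0) > tol

    bonds = []
    for i, (xi, yi, wi, hi) in enumerate(blocks):
        for j in range(i + 1, len(blocks)):
            xj, yj, wj, hj = blocks[j]
            touch_x = abs(xi + wi - xj) < tol or abs(xj + wj - xi) < tol
            touch_y = abs(yi + hi - yj) < tol or abs(yj + hj - yi) < tol
            if touch_x and overlap(yi, yi + hi, yj, yj + hj):
                bonds.append((i, j))       # shared vertical border
            elif touch_y and overlap(xi, xi + wi, xj, xj + wj):
                bonds.append((i, j))       # shared horizontal border
    return bonds
```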
Recently, we have studied site percolation on the WPSL and found non-trivial results. That is, it belongs to a separate universality class from the one to which percolation on all planar lattices is believed to belong. However, we are yet to check whether site and bond percolation on the WPSL belong to the same class or not. The dual of the WPSL can be well described as a complex network, and we have shown in Ref. [20] that the corresponding degree distribution follows a power law $P(k) \sim k^{-\gamma}$ with exponent $\gamma = 5.58$. Interestingly, the degree distribution P(k) in the context of a network is the same as the coordination number distribution in the context of a lattice. However, there is a sharp difference between networks based on graph theory and the network obtained as the dual of a lattice, which is embedded in a space. The difference lies in the fact that networks based on graph theory have no edge or surface, but networks based on the dual of a lattice do have an edge or surface, which is crucial in the case of percolation as it is needed to define the spanning cluster.
In the case of bond percolation, the lattice consists initially of N blocks and hence the system has exactly N clusters of size one, since the center of each block represents a site. Thereafter, each time we occupy a bond, a cluster of size two or more is formed.
In the case of site percolation, each time we occupy a block the size of the resulting cluster may vary, since we measure it by the area of the contiguous occupied blocks; initially all the blocks are empty, and even the cluster formed by the very first occupied block has a size that depends on which block was picked. A regular lattice such as the square lattice of $L^2$ sites has $2L(L-1)$ and $2L^2$ bonds with open and periodic boundary conditions respectively. In the case of the WPSL, being a disordered lattice, we cannot have such an exact relation. We still find that the numbers of bonds and sites, when averaged over an ensemble of independent realizations, follow a relation valid for all lattice sizes. For instance, for the lattice at time t there are exactly 3t + 1 sites and on average 8t bonds with periodic boundary conditions. Thus the mean coordination number is equal to 16t/3t ∼ 5.33, which is higher than that of the square lattice. We know that the percolation threshold $p_c$ depends on the coordination number of the lattice, and the higher the mean coordination number, the lower the value of $p_c$. In the case of the square lattice, for instance, each site has exactly four nearest neighbours and each bond has six, and hence the $p_c$ of site percolation is higher than that of bond percolation. In the case of the WPSL, we find that the mean number of nearest neighbours of a bond is 10.01, which is almost double the mean number of nearest neighbours of a site. So it is expected that the $p_c$ value for bond percolation on the WPSL will be quite a bit smaller than the value $p_c = 0.5265$ for site percolation [19].
Percolation is all about the formation of clusters and the statistics of their various properties as functions of the control parameter p and the system size L. The typical observable quantities in percolation are (i) the spanning probability W(p), (ii) the percolation probability or percolation strength P, (iii) the mean cluster size S, and (iv) the cluster size distribution function $n_s(p)$, together with their variation with p or L.
A. Spanning probability W (p)
The spanning probability W(p), for both bond and site percolation, describes the likelihood of finding a cluster that spans across the system either horizontally or vertically at occupation probability p. To find how W(p) behaves with the control parameter p we perform many, say M, independent realizations under identical conditions. In each realization, for a given finite system size, we record the p value at which a spanning cluster appears for the first time. To find a regularity or pattern among all the M recorded values, one usually looks at the relative frequency of occurrence within a class of width ∆p. To find W(p), we can process the data containing the M recorded values into a histogram displaying the normalized relative frequency as a function of classes of width ∆p chosen as per convenience.

[Figure 2, panels (c) and (d): In (c) we plot $\log(p - p_c)$ vs $\log L$ for both bond and site. The two lines have slopes $1/\nu = 0.611714 \pm 0.007459$ and $0.613552 \pm 0.003861$ for bond and site respectively. In (d) we plot the dimensionless quantities W vs $(p - p_c)L^{1/\nu}$; by tuning the ν value slightly we find an excellent data collapse using $1/\nu = 0.6115$ in both cases, which implies that $1/\nu$ is approximately independent of the type of percolation.]

In Figs. (2a) and (2b) we show a set of plots of W(p) for bond and site percolation respectively as a function of p, where the distinct curves are for different system sizes $L = \sqrt{N}$. One of the significant features of such plots is that they all meet at one particular p value regardless of the value of L. It means that even if we had data for an infinite system the resulting plot would still pass through the same point, revealing that it must have a special significance; the significance is that it is the threshold probability $p_c$. Note that finding the $p_c$ value for different lattices is one of the central problems in percolation theory. In the case of bond percolation we find $p_c = 0.3457$, which is considerably smaller than its site counterpart, since the average number of nearest neighbours of a bond in the WPSL is much higher than that of a site.
The second most significant feature of the W(p) vs p plot is the direction in which the curves shift on either side of $p_c$ as the system size L increases. This shift with L clearly reveals that all the data points, i.e. the p values, are marching towards $p_c$. We can quantify the extent to which they are marching by measuring the magnitude of the difference $(p_c - p)$ for different L. That is, we can draw a horizontal line at a given value of W, preferably at the position where this difference is largest, and record the difference $p_c - p$ as a function of system size L. Plotting the resulting data in the logarithmic scale, we find a straight line whose slope gives an estimate of $1/\nu = 0.613552 \pm 0.003861$, since Fig. (2c) suggests
$(p_c - p) \sim L^{-1/\nu}$. (20)
It implies that in the limit $L \to \infty$ all the p values take the value $p_c$, revealing that W(p) will ultimately become a step function, W(p) = 0 for $p \leq p_c$ and W(p) = 1 for $p > p_c$. We can use Eq. (20) to define a dimensionless quantity $(p_c - p)L^{1/\nu}$. Now, we plot W(p) vs $(p_c - p)L^{1/\nu}$ in Fig. (2d) and we see that all the distinct plots of W(p) vs p for bond percolation collapse onto one universal curve and those for site percolation onto another curve, albeit they share the same ν value. By tuning the $1/\nu$ value further we can get an excellent data collapse for $1/\nu = 0.6115$ and hence a better estimate $\nu \sim 1.635$ that corresponds to the infinite lattice.
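A small sketch of how W(p) can be estimated from the recorded per-realization spanning thresholds; fitting log(p_c − p(L)) against log L, for instance with numpy.polyfit, then gives the estimate of 1/ν. The binning choice below is arbitrary.

```python
def spanning_probability(thresholds, num_bins=100):
    """Estimate W(p) from a list of per-realization spanning thresholds.

    `thresholds` holds, for each independent realization, the occupation
    probability at which a spanning cluster first appeared.  W(p) is the
    fraction of realizations that span at occupation probability p, i.e.
    the cumulative distribution of the recorded thresholds.
    """
    thresholds = sorted(thresholds)
    M = len(thresholds)
    ps = [i / num_bins for i in range(num_bins + 1)]
    W, k = [], 0
    for p in ps:
        while k < M and thresholds[k] <= p:
            k += 1
        W.append(k / M)
    return ps, W
```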
B. Percolation probability P
Consider that we pick a site at random and ask: how likely is it that this site belongs to the spanning cluster? For a finite system size it may not belong to the spanning cluster even if p is larger than the percolation threshold $p_c$. We can therefore quantify the strength of the spanning cluster by the percolation probability P, which describes how likely a site picked at random is to belong to the spanning cluster. The quantity P is defined as the ratio of the size of the spanning cluster $s_\infty$ to the size of the lattice N, i.e.,
$P = \dfrac{\text{number of sites in the spanning cluster}}{\text{total number of sites in the lattice}}$. (21)
Sometimes, the percolation probability is also defined as the probability that an occupied site belongs to the spanning cluster; it can be obtained if we replace the denominator N of Eq. (21) by the total number of occupied sites. We, however, will consider the former definition. There exists yet another definition in which one uses the size of the largest cluster instead of the spanning cluster. Note that all of these definitions behave in the same fashion, like an order parameter. That is, in the limit $L \to \infty$, P = 0 for $p \leq p_c$, and it rises from P = 0 at $p_c$ towards P = 1 continuously and monotonically like $P \sim (p - p_c)^{\beta}$. Such behavior is reminiscent of an order parameter like the magnetization m in the paramagnetic to ferromagnetic transition, and hence P is regarded as the order parameter in percolation theory. The value of the critical exponent β is known to depend only on the dimension of the lattice and to be independent of the type of percolation. Through site percolation on the WPSL we have already reported that the β value for the WPSL, which is a planar lattice, is different from the value for all the known lattices whose embedding space has dimension d = 2. We shall now check if the β value for bond percolation is the same as for site percolation.
It is important to note that in the case of site percolation we occupy blocks or cells which are of different sizes. We therefore measure the area of the spanning cluster, not the number of blocks in the spanning cluster. This is in sharp contrast to a regular lattice, where all the blocks or cells are of the same size and hence the size of the spanning cluster can be described by the number of blocks or sites it contains. In the case of bond percolation on the WPSL we, however, use the traditional definition of cluster size. This is one significant difference between bond and site percolation on the WPSL. Note that for bond percolation on the WPSL we use the dual of the WPSL, not the lattice itself. The dual of the WPSL is obtained by replacing each block of the WPSL by a node or vertex at its center and each common border between blocks by a bond connecting the nodes at the centers of the corresponding blocks. In the case of bond percolation we occupy these links and measure the size of a cluster by the number of nodes or vertices that the cluster contains. Below we shall see the impact of this difference on their behavior, if any. In Figs. (3a) and (3b) we plot the percolation probability P as a function of p for bond and site respectively. Looking at the plots, one may think that all the curves for different L meet at a single unique point as they do in the W(p) vs p plot. However, if one zooms in it becomes apparent that this is not so, and hence the $p_c$ value obtained from this plot is not as satisfactory as the one from the W(p) vs p plot. We also find that P(p) is not strictly equal to zero at $p < p_c$; rather, there is always a non-zero chance of finding a spanning cluster even at $p < p_c$ as long as the system size L is finite. However, the plots of P vs p for different system sizes L reveal that the chance of getting a spanning cluster at $p < p_c$ diminishes with increasing L. There is also a lateral shift of the P value to the left for $p > p_c$, but the extent of this shift $p - p_c$ decreases with L without ever diminishing to zero. On the other hand, the extent of the shift $p - p_c$ to the right for $p < p_c$ diminishes to zero following Eq. (20). We shall now check if P above $p_c$ grows like $P \sim (p - p_c)^{\beta}$. If it does, then we shall find the value of the critical exponent β and compare it with that of its site counterpart.
To show that the percolation probability behaves like $P \sim (p - p_c)^{\beta}$ and to find the exponent β for infinite system size L, we use the idea of finite-size scaling. We first plot P(p) vs $(p - p_c)L^{1/\nu}$ and find that, unlike W(p) vs $(p - p_c)L^{1/\nu}$, it does not collapse. Instead, we find that for a given value of $(p - p_c)L^{1/\nu}$ the P value decreases with lattice size L. It means that the percolation probability is not a dimensionless quantity, and hence we assume that
$P(p, L) \sim L^{-a}\,\phi\big((p - p_c)L^{1/\nu}\big)$,
where we choose $a = \beta/\nu$ for later convenience. To find the value of β/ν we measure the heights at a given value of $(p - p_c)L^{1/\nu}$ for different L and plot them in the log-log scale. We find straight lines for both bond and site (see Fig. (3c)) with slopes $\beta/\nu = 0.135699 \pm 0.0005905$ for bond and $0.135701 \pm 0.0002768$ for site, revealing that they are almost parallel. It implies that if we now plot $P L^{\beta/\nu}$ vs $(p - p_c)L^{1/\nu}$, all the distinct plots of P vs p should collapse onto a single universal curve. In Fig. (3d) we plot just that and find an excellent data collapse using $\beta/\nu = 0.1357$ for both bond and site; we checked the same procedure on the square lattice as well. This again implies that the percolation probability P exhibits finite-size scaling. Note that although the critical exponents of site and bond coincide, their collapsed universal curves do not.

[Figure 3, panels (c) and (d): In (c) we plot log P vs log L using data for a fixed value of $(p - p_c)L^{1/\nu}$ and find almost parallel lines with slopes $\beta/\nu = 0.135699 \pm 0.0005905$ for bond and $0.135701 \pm 0.0002768$ for site respectively, which clearly implies that the critical exponent β is independent of the type of percolation. For further fine tuning of the β value we also plot the same data of (a) and (b) in the self-similar coordinates $P L^{\beta/\nu}$ and $(p - p_c)L^{1/\nu}$ and find an excellent data collapse of the plots (a) and (b), both using $\beta/\nu = 0.1357$, which gives $\beta \sim 0.222$.]
We have checked this with site and bond percolation on the square lattice and found that there too the universal curves do not coincide. Hsu and Huang also stated that the universal curves are different for the planar random lattice, the dual of the planar random lattice and the square lattice, albeit they belong to the same universality class [16]. Now, eliminating L in favor of $p - p_c$ in the finite-size scaling form above with the help of Eq. (20), we get
$P \sim (p - p_c)^{\beta}$,
where $\beta \sim 0.222$ independent of site or bond percolation, and it is significantly different from the corresponding value for all known planar lattices.
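The slope measurements used above reduce to a simple least-squares fit in log-log coordinates; a generic sketch is given below. The same helper can be reused for the γ/ν and d_f estimates discussed later; its name is illustrative.

```python
import math

def fit_power_law_exponent(sizes, values):
    """Least-squares slope of log(values) vs log(sizes).

    Used here to extract beta/nu from the heights P(L) measured at a fixed
    value of (p - p_c) * L**(1/nu): since P(L) ~ L**(-beta/nu), the returned
    slope is approximately -beta/nu.
    """
    xs = [math.log(L) for L in sizes]
    ys = [math.log(v) for v in values]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    return sxy / sxx
```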
C. Cluster size distribution and their mean
The cluster size distribution function $n_s(p)$ plays a central role in the description of percolation theory. It is defined as the number of clusters of size s per site of the lattice. Unfortunately, an exact form of the cluster number $n_s(p)$ is known only for the one-dimensional system, and it can be handled approximately only for the infinite-dimensional system, which is in fact the Bethe lattice. For $1 < d < \infty$ we do not yet know an exact expression for $n_s$. This is because in such cases there exists a large number of different ways in which clusters of the same size can arrange themselves; these are called lattice animals. Even for relatively small cluster sizes on the square lattice we run into difficulties in enumerating them. Nevertheless, theoretically we can still write down the general expression
$n_s(p) = \sum_t g_{s,t}\, p^s (1-p)^t$,
where $g_{s,t}$ is the number of possible lattice configurations of size s and perimeter t. Note that the quantity $s\,n_s(p)$ is the probability that an arbitrary site belongs to a cluster of size s. On the other hand, the quantity $\sum_{s} s\,n_s$ is the probability that an arbitrary site belongs to a cluster of any size, which is in fact equal to p. Therefore, the ratio of the two is the probability that an occupied site chosen at random belongs to a cluster of size exactly equal to s. The mean cluster size S(p) is therefore given by
$S(p) = \dfrac{\sum_s s^2 n_s(p)}{\sum_s s\, n_s(p)}$, (27)
where the sum runs over the finite clusters only, i.e., the spanning cluster is excluded from the enumeration of S. The definition of the mean cluster size S, however, carries no information about the geometric structure of the clusters, such as their compactness and spatial extent. It is important to mention that the mean area of the blocks in the WPSL decreases as $(1 + 3t)^{-1}$, and hence on increasing the size of the lattice we need to blow up the lattice by a factor of 3t; this compensates the decreasing block size with the increasing block number N. That is how the cluster sizes entering the mean cluster size are measured in the case of site percolation on the WPSL. In the case of bond percolation, however, we do not need to multiply by the factor 3t, as the cluster size there is measured by the number of nodes or vertices it contains, not by the area.

[Figure 4 caption: In the case of bond percolation the cluster size is measured by the number of sites each cluster contains, and in the case of site percolation it is the area of the contiguous blocks that belong to the same cluster. In (c) we plot log S vs log L using the value of S at a fixed value of $(p - p_c)L^{1/\nu}$ and find almost parallel lines with slopes $\gamma/\nu = 1.73153 \pm 0.001979$ and $1.72806 \pm 0.001993$ for bond and site respectively. In order to obtain a better estimate for the γ value we also plot the same data of (a) and (b) in the self-similar coordinates $S L^{-\gamma/\nu}$ and $(p - p_c)L^{1/\nu}$. By tuning the value $\gamma/\nu = 1.728$ we find a set of excellent data collapses for both (a) and (b), which gives $\gamma = 2.825$.]

In Figs. (4a) and (4b) we show plots of the mean cluster size S(p), for both bond and site percolation, as a function of p for different lattice sizes L. We observe that in either case there are two main effects as we increase the lattice size. First, the mean cluster size increases as we increase the occupation probability until p approaches $p_c$, and the peak height grows profoundly with L in the vicinity of $p_c$. Second, there is a slight shift of the peak towards the $p_c$ value as we increase L. The extent of the shift is again given by Eq. (20). To bring the peak heights to meet at the same point we first plot S as a function of the dimensionless quantity $(p_c - p)L^{1/\nu}$. We then measure the peak height at a fixed value of $(p_c - p)L^{1/\nu}$ but for different L. Plotting these peak heights as a function of L in the log-log scale gives straight lines for both site and bond percolation (see the inset of Fig. (4c)). It implies that
$S \sim L^{\theta}$,
where, as before, we choose $\theta = \gamma/\nu$ for future convenience and find that $\gamma/\nu = 1.73153 \pm 0.001979$ for bond and $1.72806 \pm 0.001993$ for site. The two values are so close that they can be well approximated to be the same. Plotting now the same data of Figs. (4a) and (4b), measuring the mean cluster size S in units of $L^{\theta}$ and $(p_c - p)$ in units of $L^{-1/\nu}$, we find that all the distinct plots of S vs p collapse superbly onto one universal curve (see Fig. (4d)) in both cases with the same value of the exponent $\gamma/\nu = 1.728$. It again implies that the mean cluster size too, for both bond and site, exhibits finite-size scaling sharing the same critical exponents.
Eliminating L in favor of $(p_c - p)$ using Eq. (20), i.e., $(p_c - p) \sim L^{-1/\nu}$, we find that the mean cluster size diverges as
$S \sim |p - p_c|^{-\gamma}$,
where $\gamma = 2.825$ for both site and bond percolation. This value is significantly different from the known value $\gamma = 2.389$ for all the regular planar lattices.
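A minimal sketch of how the mean cluster size is computed from the clusters of a single configuration, with the spanning cluster excluded as required by the definition of S; the dictionary-based interface is an assumption for illustration.

```python
def mean_cluster_size(cluster_sizes, spanning_label=None):
    """Compute the mean cluster size S = sum(s^2 n_s) / sum(s n_s).

    `cluster_sizes` maps a cluster label to its size (number of occupied
    sites for bond percolation, total block area for site percolation on
    the WPSL).  The spanning cluster, if any, is excluded.
    """
    num, den = 0.0, 0.0
    for label, s in cluster_sizes.items():
        if label == spanning_label:
            continue
        num += s * s
        den += s
    return num / den if den > 0 else 0.0
```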
At $p = p_c$ the cluster size distribution is expected to decay as a power law, which means $n_s(p_c) \sim s^{-\tau}$. The sum $\sum_{s=1}^{\infty} s\,n_s(p_c)$, which equals $p_c$, must remain finite, and this requires $\tau > 2$; on the other hand, the divergence of the mean cluster size at $p_c$ requires the second moment $\sum_{s=1}^{\infty} s^2 n_s(p_c)$ to diverge, which requires $\tau \leq 3$. Putting the two constraints together we find that τ must satisfy the bound $2 < \tau \leq 3$. We can thus write
$n_s(p_c) \sim s^{-\tau}$, (35)
where τ is called the Fisher exponent. We can obtain the exponent τ by plotting the cluster size distribution function $n_s(p)$ at $p_c$. In Fig. (5) we plot $n_s(p_c)$ vs s, for both site and bond, in the log-log scale and find two parallel straight lines except near the tail, where there is a hump due to the finite-size effect. However, we also observe that as the lattice size L increases, the extent over which we get a straight line increases too. It implies that if the size L were infinitely large we would have a perfect straight line obeying Eq. (35). The slopes of the lines are $\tau = 2.07252$ for bond and $\tau = 2.0728$ for site. It implies that the exponent τ is almost the same, $\tau \sim 2.072$, for both site and bond percolation on the WPSL, and its value is different from the known value $\tau = 2.0549$ for all known planar lattices. Let M(L) denote the mass or size of the percolating cluster of a lattice of linear size L. If the percolating cluster grew as a compact object, then its mass M(L) would grow with L as $M(L) \sim L^2$, since the dimension of the embedding space of the WPSL is d = 2. However, at $p_c$, if we would like to walk through the spanning cluster, then the amount of time it would take must diverge as $L \to \infty$; this is so because the percolating cluster at $p_c$ is highly ramified. In fact, if we had p = 1 we would surely find $M(L) \sim L^2$. At $p_c$ we also get a mass-length relation of the same form, but the exponent is less than 2. To understand its significance, let us stack objects made of unit-sized squares as shown in Fig. (6a). In step one, we make four copies of the unit square; we stack two of them side by side and the other two on top of those two, also side by side. In step two, we make four copies of the object resulting from step one, stack two of them side by side and the other two on top of them, again side by side. In general, in step i we make four copies of the object resulting from step (i − 1), stack two of them side by side and the other two on top of those two, again side by side, as shown in Fig. (6a). It is easy to check that the mass of the object grows according to the following mass-length relation
$M(L) \sim L^{D}$, (36)
with D = 2. Now, let us slightly change the situation. We do everything as before, with the only difference that at each step we throw away the top-right copy, leaving its space empty, as shown in Fig. (6b). The mass of the resulting system at step i is $M = 3^i$ and its linear size is $L = 2^i$. Using these two relations we can eliminate i in favor of L, and we find the same mass-length relation as in Eq. (36) except that the exponent is $D = \ln 3/\ln 2$ [22]. We could even remove any one of the four copies at random and still get the same result. The exponent of the mass-length relation, $D = d_f$, is now less than the dimension of the embedding space d = 2, and hence the object is a fractal. The spanning cluster too is highly ramified, like Fig. (6b), as it has holes of many different sizes. A litmus test of whether the spanning cluster is a fractal is therefore to check whether it obeys the same mass-length relation with an exponent $d_f < 2$, since the embedding space of the spanning cluster is a plane. We plot the size of the spanning cluster M as a function of the lattice size L in the log-log scale, as shown in Fig. (7).
Indeed, we find that $d_f = 1.86439 \pm 0.001498$ for site and $1.86378 \pm 0.02249$ for bond, which are almost the same but significantly different from the value $d_f = 1.895$ for regular planar lattices. It may appear that the difference between the $d_f$ for the WPSL and that for regular planar lattices is not large, but it is important to remember that even a small difference in fractal dimension has a huge impact on the degree of ramification. We already know that the mean cluster size diverges, i.e., $S \to \infty$ as $p \to p_c$. According to Eq. (27), S can only diverge if its numerator diverges. Generally, we know that $\sum_{s=1}^{\infty} s^{\alpha}$ converges if $\alpha < -1$ and diverges if $\alpha \geq -1$.
Applying this to both the numerator and the denominator of Eq. (27) at $p_c$ gives the bound $2 < \tau \leq 3$. Using the scaling ansatz for $n_s(p)$ in Eq. (27) and taking the continuum limit gives
$S \sim |p - p_c|^{-(3-\tau)/\sigma}$.
We know that the characteristic cluster size $s_\xi$ diverges like $(p_c - p)^{-1/\sigma}$ with $\sigma = 1/(\nu d_f)$, and hence comparing the above with $S \sim |p - p_c|^{-\gamma}$ we get the scaling relation $\gamma = (3 - \tau)/\sigma$, i.e., $\tau = 3 - \gamma\sigma$. Besides, there is another well-known scaling relation, $\tau = 1 + d/d_f$, which we can also use to find the τ value. Using the $d_f$ value for the WPSL in the scaling relations $\tau = 3 - \gamma\sigma$ and $\tau = 1 + d/d_f$, we find τ equal to 2.0725 and 2.0728 respectively, which is almost equal to the value we obtained directly from the slope in Fig. (7). There are also a couple of other well-known scaling relations, such as $\beta = \nu(d - d_f)$ and $\gamma = \nu(2d_f - d)$, which our estimates satisfy to a good extent.
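As a quick numerical illustration of the self-consistency check, the quoted exponent estimates can be plugged into the scaling relations directly; the snippet below only reproduces arithmetic on the values reported above.

```python
# Self-consistency check of the scaling relations using the quoted estimates.
d = 2.0            # dimension of the embedding space
nu = 1 / 0.6115    # from the data collapse of W(p)
beta = 0.222
gamma = 2.825
d_f = 1.864        # fractal dimension of the spanning cluster
tau = 2.072        # Fisher exponent from n_s(p_c)

sigma = 1.0 / (nu * d_f)

print("tau = 1 + d/d_f        :", 1 + d / d_f)               # ~2.073
print("tau = 3 - gamma*sigma  :", 3 - gamma * sigma)          # ~2.073
print("beta  vs nu*(d - d_f)  :", beta, nu * (d - d_f))       # ~0.222
print("gamma vs nu*(2*d_f - d):", gamma, nu * (2 * d_f - d))  # ~2.825
```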
VI. SUMMARY AND DISCUSSION
In this article, we have studied both bond and site percolation on the WPSL using extensive Monte Carlo simulations. We thought it important to recall some key features of the WPSL so that one can understand why it is so special and unique. We therefore first briefly discussed its construction process and then its various properties, which are as follows. (i) The dynamics of its growth is governed by infinitely many conservation laws. (ii) Its area size distribution function obeys dynamic scaling. (iii) Each of the infinitely many conservation laws, except the conservation of total area, gives rise to a multifractal spectrum and hence the WPSL is a multi-multifractal. (iv) Its coordination number distribution function follows a power law. (v) It has a mixture of the properties of both a lattice and a graph: on the one hand, like a lattice, it is embedded in a space of dimension D = 2; on the other, its coordination number distribution follows a power law, like a network. These unique properties have resulted in unique results too. We also briefly discussed the finite-size scaling theory and have shown that its origin is deeply rooted in the Buckingham Π-theorem. Finite-size scaling is one of the most crucial aspects of percolation as it helps extrapolate critical exponents for the infinite system using data for a set of finite-size systems. This is done by using the idea of data collapse. Note that an excellent data collapse is one of the clear testaments that the numerical values we obtained for the various exponents are quite satisfactory. Besides, we show that they satisfy a set of scaling relations, which also provides a consistency check.
In this work we first obtained the percolation thresholds $p_c = 0.3457$ and $p_c = 0.5265$ for bond and site percolation on the WPSL. Naturally, the $p_c$ for bond percolation is less than that of its site counterpart, as expected. We also obtained numerically the various observable quantities, such as the spanning probability W(p), the percolation strength P(p) and the mean cluster size S(p), using the NZ algorithm. The initial data obtained from the NZ algorithm correspond to the microcanonical ensemble. To obtain the corresponding data for the canonical ensemble we used the convolution equation given by Eq. (10) for each observable quantity. With the help of a comprehensive finite-size scaling analysis we also obtained numerically the critical exponents ν, β and γ for both bond and site percolation on the WPSL and confirmed that they are equal (see Table I for a detailed comparison). To check further whether they are equal or not we used the idea of data collapse and found an excellent data collapse for the same critical exponents, albeit with different $p_c$. Note that a good estimate of $p_c$ and of the critical exponents is a must for obtaining a satisfactory data collapse. These values also satisfy the scaling relations. All of this provides a clear testament that the critical exponents for bond and site percolation on the WPSL are the same. It happens in spite of the significant difference in the definition of the clusters. Interestingly, these values are significantly different from the ones for all known planar lattices. We can thus conclude that the universality class of the WPSL (bond and site) is distinct from the ones for all the known planar lattices. It happens in spite of the significant differences in the definition of site and bond in the WPSL.
Hsu and Huang also studied percolation in a class of random planar lattices and their duals, yet they found the same critical exponents as the ones for regular lattices. Corso et al. studied percolation on multifractal planar lattices and they too found the same critical exponents as the ones on regular lattices. So neither the random nature of the lattice nor its multifractal nature alone can be held responsible for making the WPSL unique. The planar random lattice that Hsu and Huang studied is quite different from the WPSL: the coordination number distribution of their lattice does not obey a power law, and this is perhaps one of the most significant differences. Or it may be that when a lattice is both multifractal and random, further details of its structure determine whether a new set of exponents emerges. However, it is too soon to draw any conclusion. We hope to devise more variants of the WPSL in our future endeavours and see what happens. Nevertheless, we still hope that our findings will have a
A New Approach to Relationships in Live Music: Redefining Emotional Content and Meaning
Live music is a social and artistic activity where all its participants establish a set of relationships in real time, mediated by different musical, social, psychological and environmental factors. The aim of this research is to study the relationships between audiences and musicians and their influence on the processes that lead to the generation of emotions and the creation of meaning. By taking a transdisciplinary view, I am trying to bridge some of the divides that frequently appear in music research (object-activity, aesthetic-social, etc.) and so I want to go beyond common artificial investigations about emotions and meaning in music by reflecting on the complex nature of real music experiences. The methodological approach is mainly qualitative, based on observations and interviews; however, some quantitative data from a survey have also been taken into account. The research has been carried out as a case study that had the concerts of El Teatre Instrumental, an orchestra that specialises in the repertoire of the classical period, as its object.
Introduction
Music is present in many of our daily activities, in some cases as the main focus, in others supporting what we do and often as a mere background. Our link with music depends on the nature of the activity, the role that we play in it (musician, listener, teacher, student, technician, producer, etc.), our socialisation in a conscious and unconscious form (Bourdieu, 1998) and our own personal circumstances. People listen to music with the help of different devices and technologies and musicians rehearse and make recordings without an audience. Technology has allowed us to separate the making and the consumption of music.
In contrast, live music (still) creates social and artistic events wherein all the participants establish a set of relationships in real time, mediated by different musical, social, psychological and environmental factors. The aim of this project has been to study these relationships, their nature and their influence on the processes that bear emotions and meaning.
For Christopher Small (1998), relationships are the key to understanding the meaning of a musical activity. In the context of this research I treat relationships as dynamic, necessarily reciprocal elements, building here on Georg Simmel's "Wechselwirkung" (which David Frisby translates as "reciprocal actions and effects"; Cantó Milà, 2005) and on Pyyhtinen, who applied Simmel's concept in a context that allows us to use it for the analysis of a live music situation: "Wechselwirkung is not only inter-action but the inter-play of activity and passivity in which every cause may become an effect and every effect a cause" (2009, p. 193).
A growing field of study in need of new contributions
Traditional music research tends to adopt a one-sided approach and focuses on music events through its disciplinary lenses. Music is perceived as a rather fixed object of study. Following Small (1998) and Roy and Dowd (2010), we perceive (the experience of) live music as an activity, not as an object: an activity laden with emotions and concrete individual and collective meaning, going beyond traditional disciplinary boundaries. Therefore, we need to review the way we look at emotions and meaning resulting from the music experience. A transdisciplinary approach that allows us to capture the experience of music in all its complexities is necessary.
Many existing empirical studies have been carried out in lab or otherwise artificial settings, which are then not entirely applicable to real-life experiences. For example, in the experiment of Egermann et al. (2013) on expectation and emotion in a live concert, the concert was artificially staged, the emotions (that were to be felt) were pre-defined and the social aspects of the activity were not considered. Consequently the results are hardly representative of a real live music event.
Also, psychological and neurological research such as Koelsch's exhaustive analyses of emotions in music experiences (Koelsch and Friederici, 2003; Koelsch et al., 2000) did not take into account cultural, sociological or environmental mediations. The same is true for "Mapping aesthetic musical emotions in the brain" (Trost et al., 2012).
Those focusing on emotions and the creation of meaning as a substantial part of a more in-depth experience with music sometimes miss a well-elaborated theoretical framework and a coherent and rigorous definition of emotion, which creates a great variety of scattered interpretations of what emotions mean (see Scherer and Zentner, 2001; Carr, 2004, p. 226; Cross, 2008, pp. 153-160). Kallinen and Ravaja (2006) differentiate between perceived and felt emotion, and the results of their investigation suggest that music "may be loaded with ambiguous emotional expressions with its rich structure" (p. 207). In "The Pleasure of Making Sense of Music", Vuust and Kringelbach (2010) consider the concept of music-specific emotions (p. 169), like the sensation of swing.
Exceptions are Sam Thompson (2006), who opts for a more naturalistic inquiry, and Stephanie Pitts, who points at the close relationship between social and musical enjoyment as something that is at the heart of the live music experience (2005). Especially from the ambit of the sociology of popular culture (Hall and Jefferson, 2006) there are examples that take a more social and sociological approach to the analysis of music experiences and music as an event (Bennett, 2008; Frith, 1998). They are key benchmarks for this article.
Qualitative approach and a case study
A qualitative approach is needed to learn about processes and relationships, and Grounded Theory has provided inspiration for the gathering and analysis of the data (Annells, 1997; Strauss and Corbin, 1998; Goulding, 1999). I have decided to carry out participatory research, primarily as an active observer during concerts and rehearsals, but also by contrasting my own systematically analysed experiences as a musician and listener with the views expressed by audiences and musicians. This investigation has been based on a case study carried out at the concerts of El Teatre Instrumental, with whose help I will shed light on some of the processes that operate in a broader spectrum of live music situations (Gerring, 2007, pp. 20, 49). Based in Barcelona, El Teatre Instrumental is an orchestra inspired by the tradition of classicism but offering a musical discourse that is addressed to today's audiences. The musicians of El Teatre play with historical instruments and their main objective is to emphasise the link that exists between music and drama. They do not follow traditional concert conventions but rather create alternative events in which they encourage audiences to use their imagination. Through short oral explanations, musical examples and even some theatrical scenes, they help the public to understand the musical language of the most representative figures of Viennese classicism: Haydn, Mozart and Beethoven. Their programmes often include vocal arias and they also present chamber music concerts. The orchestra performs without a conductor, as was common practice in the classical period. Instead, they receive musical guidance from clarinettist Lorenzo Coppola, an inspirational figure and a well-known specialist in this kind of repertoire.
In order to gain valuable results and so to carry out my case study, I worked with a mixed-method approach and diverse data gathering methods:
• Observations of live performances and rehearsals.
• A survey among people that attended the concerts, including quantitative information and qualitative questions.
• Interviews and small group discussions with audience members and musicians.
Observations
Between October 2012 and June 2013, I observed ten public performances of El Teatre Instrumental in several concert halls in Barcelona and other surrounding cities. I attended various rehearsals for each one of these concerts, I kept a diary and I recorded six of the concerts and a few rehearsals on video.
The observations during rehearsals allowed me to witness the development process of a concert and the specific roles and contributions of each of the members of the orchestra. It became obvious that the orchestra is a conjunction of diverse social relations that are woven and interwoven via the music that is prepared and performed. Hierarchies are far less present than expected. Whilst the role of El Teatre's artistic director, Lorenzo Coppola, is crucial, the rehearsals are extremely participative. Rather than a top-down coordination, all the orchestra members contribute actively to the preparation and performance. The association between the music and some theatrical situations is used to find ways of being more expressive and, quite often, Lorenzo used plots from well-known operas of that period to help musicians understand what they are performing and telling with their music. The vocal quality of the melodic lines was emphasised, as well as the emotional effects of rhythms and dynamics. A central exercise is the emphasis on the dramatic effect of the music, and Lorenzo highlights some harmonic features (dissonances, deceptive cadences, modulations, etc.) and relates them to the composer's intentions and the performance practices of that time.
Survey
I designed the survey to obtain:
• quantitative statistical facts about the audiences that have attended the concerts of El Teatre Instrumental and their opinions about El Teatre's approach, and
• qualitative answers (text) with their thoughts, feelings and considerations about their experiences at the concerts.
The survey was created and distributed using Google Forms. Initially, 220 email messages with the link to the questionnaire were sent to the mailing list of El Teatre. In five days, 30 of them were answered, and a few days later an event was posted on Facebook, followed by a link on El Teatre's website. I stopped collecting survey data two days after the last concert that I observed, having received 77 questionnaires.
Here are some of the main results of the quantitative questions:
• 61 % of respondents were women and 39 % men, and the age distribution is what you would expect for classical music attendance, with 60 % between 45 and 64.
• 84 % attended university; 64 % did not have musical training, 17 % had been to a music school and 14 % were music graduates.
• 72 out of 77 liked classical music.
• 43 % went to at least one concert a month and 31 % to one every quarter.
• 62 % had attended two or more of El Teatre's concerts and 36 % four or more; 57 % of them had paid for a ticket (some were free).
• 83 % said that the verbal explanations that were a feature of the concerts were very interesting; 73 % thought that they contributed a lot to enriching their experience; 79 % considered their balance with the music adequate and 17 % considered that there were too many.
• Regarding the effect of some features on their musical experience they said:
 - Performing without a conductor: 44 % very positive, 27 % positive, 29 % no effect.
The qualitative data of the survey is analysed together with the interviews in the discussion section.
Interviews and focus group discussions
I carried out semi-structured interviews with six people that had attended at least one of the concerts (3 women and 3 men aged between 42 and 65) and were therefore representative of El Teatre's audiences. A woman had a small group discussion with her four children (aged 10, 7, 4 and 3) after one of the performances, which helped me to get at least an impression of the effects of the performances on children. Furthermore, I carried out a group discussion with three members of El Teatre.
I am aware that what happens during a concert and what audience members tell us that they have experienced are not necessarily the same, but the combination of observations, survey and interviews as a form of triangulation reduces possible bias. Some people are more articulate and others more reserved when they are asked to talk about their feelings and experiences, but throughout the interviews I was able to create a friendly atmosphere to facilitate sincere and open discussions.
Relationships
The majority of the survey respondents, as well as all the interviewees, value and enjoy the concerts not only for their music but especially for the relationships that are created between musicians and audiences during the concerts, which we can find reflected in a wide variety of citations from, especially, the open questions of the survey:
"I feel like being among family".
"These concerts are different, they're really addressed to the public. In other concerts... sometimes it looks as if musicians play just for themselves".
"[I enjoy] The proximity between musicians and public, to be able to see the complicity among them and the emotions that they experience when playing".
As we can clearly see, many participants emphasised the specialness and importance of the bond being woven during the concert, thus creating something special between all those who participate -the orchestra and the audience.In fact, quite a few underlined that they did not feel passive at all but actively engaged in the event: "They encourage you to participate, you don't feel like a mere spectator".
"I'm interested in the body-language of the musicians when they play, I have never seen it like this, they dance... express a lot more... there's no comparison".
Within the interviews I was able to go even a little deeper and figured out some aspects of these relationships. According to them, the complicity and the musical, physical and emotional interaction between audience and orchestra members was not only noteworthy but has had a strong effect on the experience of the event: "You can see beyond your ears, you see them [the musicians], their faces... some smile... I don't know, you can sense the complicity..." But the relationship was not only strengthened by the "almost physical" interactions between audience and orchestra members but also by the comments and explanations of the orchestra members. One of the interviewees explains why: "But that role too [the person who tells the story] is very important to the audience. You have to be taken, as a child, to this incredible journey… You have to catch the attention, it's part of the performance in a sense… it's so interesting and so catching if whoever presents and gives the information acts as the mediator between the orchestra and the audience. That's a very important part of it being so fascinating".
The musicians of El Teatre also felt the weaving of these invisible bonds through emotions and communicative interaction: "The first time [that I performed with El Teatre] it had a great impact on me. The fact that the public was laughing... that you made them participants... it touched me a lot".
The critical attention and the emotional participation of the audience push the orchestra members to give their best and to play better for the sake of their listeners.
But orchestra members do not just create relationships with the audience; they also bring to the stage the relationships that have been created during rehearsals. The absence of a conductor empowers them and enhances their mutual relationships and the relationships between them and the public, and they speak as well about the effect of "breathing together". They talk about their level of implication and attention: "What I notice above all, the difference between playing with El Teatre compared to other orchestras, is that, in other orchestras, musicians are very passive, they wait for everything to be given to them, and it's very difficult to see them taking initiatives or really opening their ears. Here we have practiced to open up our ears hugely! To listen to the smallest detail of what is happening, because it's not enough to just play your part... the involvement is much stronger".
Through the interviews I was also able to confirm something that I had experienced myself: the importance of the space and the setting where the concert takes place.An example of this was the performance of Mozart's clarinet quintet by a chamber group of El Teatre.The same piece was played on a Saturday afternoon at the cloister of the Monastery of Pedralbes (Barcelona) and the following Wednesday at a modern chamber hall (Ateneu Barcelonès), nice but with quite dry acoustics.Both concerts were very well attended.At the cloister, after a magical second (slow) movement there was a great silence and you could just hear the murmur of the water coming from the cloister's fountain.The strong emotional relationship with the audience that had been created seemed to manifest itself as deep silence.In contrast, when the same piece was performed a few days later to an enthusiastic hall, after this movement everybody started clapping.Both performances were great, but the atmosphere of the place had a strong impact on the experience for both audiences and musicians and also on the way the relationships between them got woven.
Emotions
A significant finding of this research is the importance of emotions in live music events, as well as the difficulty of expressing them in words. What listeners experience or perceive is not just what the music or its interpretation express, but also the emotional impact of the visual, relational and environmental aspects of the activity.
Participants talk about "perceived emotions", those that they recognise as being expressed by the music, or "felt emotions", those that they experience in themselves (Kallinen and Ravaja, 2006). In the case of the concerts by El Teatre, we can add a third category, the "intended emotions", those that the composer and/or the performers try to communicate. At the same time, audiences do not necessarily differentiate between emotions and feelings, so I take an open approach and use the term "emotional content" to convey the emotions, feelings, moods and sensations that are felt, expressed or generated in a live music activity.
We have already seen the importance that audiences place on the relationships that the members of the orchestra establish among themselves and with them, but they also talk about the emotions that the musicians communicate and how much they appreciate this emotional communication: "You can see if a group is... playing with a… it's very difficult to express it in other languages except Japanese. In Japanese they would say they're playing with the same energy. They call it 'ki' and, if it is there, it means you are like breathing at the same rhythm and the same air, feeling the same things and that's when this really works".
"What I really enjoy is to see how the musicians live it, I mean, I think that this is what wins your love, you fall in love with the complicity among them, at least to me this is what makes me connect. Because you're very attentive to their gestures, how they look at each other…" Similar ideas have also appeared in the questionnaires: "I like the excitement that I see in the musicians and the willingness that they all have to communicate".
"On top of playing very well, which is the most important thing, I especially like the enthusiasm that they communicate and the pleasure for the music that they perform, which is contagious to the public".
"They are an ensemble of instruments that talk to each other, with the language of music obviously, and the dialog that takes place is precise and very emotional".Some respondents are able to communicate quite vividly their emotional experience: "My senses wake up and open up to let me flow, freeing the emotions that are difficult for me to feel and express.A big space is created in me, where happiness, pain and sadness get mixed, but in harmony".
"Music wakes me up, opens my heart, makes me flow and listen to what lives inside me, and enter a different world where there's harmony and peace".
To describe their emotions during the concerts, people use words like: lots of emotions, curiosity, happiness, sadness, pleasure, joy, extremely beautiful, tears dropping or wanting to, tears of joy, below the surface and ready to break, playful and stimulating, well-being, surprise, discovery, hope, joy, tranquillity, relaxation, enjoyment, tenderness, melancholy, love, excitement, fullness, intensity, surprise, I want to dance, the emotion of beauty, freshness, sympathy, admiration, feeling of peace, calmness… Others acknowledge the difficulty of expressing it with ordinary words: "I should write a poem to express it better." Or they take a more philosophical approach: "Humanity; a deep and marvellous sensation when I felt the dialectic connection between the sublime art and our earthy and tangible existence; admiration; the value of living".Digithum, No. 17 The Humanities in the Digital Age Meaning, knowledge and emotions are also linked: "When I come out of the concert, the feeling is always to have learned a bit more… and during the concert I feel excited and deeply moved… every time I understand you better.The feelings are pleasant and I walk home with your music in my head.Thank you".
"I value the fact that my imagination is called upon and to be asked to make my own contribution to the piece that I'm listening to.To create imaginary worlds inside the music is enjoyable and relaxing".
"The joy of learning curiosities and to be transported to another era through the explanation of how they lived and what they performed".
One of the interviewees shares her feelings about the relationship between perceived (or intended) and felt emotions: "I would say… mainly the same… I'm not surprised. Maybe, it happened once that I felt something completely different from what was going on and it was explained to us afterwards".
"I tend to be relaxed when I listen to music; it's such a familiar, comfortable sort of feeling.Only I tend to ask myself more questions, I tend to become a bit more active in the listening, that's what it brought to me this sort of approach".
One of the persons that I interviewed describes his lack of emotional implication with music and what he actually enjoys: "For me is more technical...I don't receive music very emotionally; my ear is not trained for that.[At the concerts of El Teatre] I liked it more because I understood it better, I had a personal satisfaction".
Emotions also play a fundamental role on the side of the musicians. Musicians described their own journey with El Teatre as a journey full of emotions: "There are always, in every project, two or three entrances that the entire orchestra does and where everybody breathes and it's a moment of... emotion. When you're playing and everybody breathes together you go... wow!" "I think that there are more and more emotional moments. The more we get into the repertoire, into this kind of interpretation, we know more... in a way you are readier to get emotional and there are moments... we're letting go ourselves, we're learning how to do it..." "I think that any music from any period in history reflects society in a way or another, what happens is that emotions are the same, it [music] uses some resources or others, but the emotions are the same".
The emotional impact of this breathing together on the musicians themselves is revealing, and it was felt by audiences too (the Japanese 'ki' that one of the interviewees mentions). We can therefore conclude that collective emotions have a strong effect on those who see them and are able to participate in an event in which they are generated.
But not all emotions are practiced. During the concerts, intense emotions sometimes arise unexpectedly during a musical phrase, a rhythmic passage or even a single note, and it would be difficult to describe them or relate them to a definite emotion. It seems that there are some music-specific emotions for which we have no name and which can only be explained by experiencing music directly in certain circumstances. The silence after the slow movement of the clarinet quintet at the cloister, a poignant note sung by the soprano during a Mozart air, a beautiful melody played by the violins during a Haydn symphony or a dramatic and intense rhythm played by the entire orchestra are some of the moments that I can recall very vividly as having had this kind of powerful emotional impact on me.
Meaning
The specific meanings that are presented during the explanations and comments given at the concerts of El Teatre are an important part of what people take home from the concerts. Typically it is scenes from operas (counts, countesses, maids, lovers, seduction, fights... at the palace or the garden at night...) and occasionally teenage stories or even a football match may be used, but audiences are also invited to make up their own stories, and so to relate their own past experiences with the music and the event. Sometimes, specific musical codes by Haydn or Mozart are introduced: long melodic lines representing a noble figure, more rhythmic patterns to exemplify a popular one, etc. Lorenzo often dramatizes some excerpts played by the orchestra, creating some funny moments that make people laugh. One of the respondents says "Lorenzo's imitation of the maid opening the curtains is a classic" and the 4-year-old girl really enjoyed it: "Yes, and the man that was imitating a girl [Lorenzo] [...] and was opening the curtains... and was walking like this (she walks moving her bottom)".
If we analyse how the suggested meanings of the music influence the actual listening, we find three different reactions from the survey respondents. They try to follow the stories that have been suggested: "I very much like Lorenzo's comments and the fact that he tells us how we can imagine the scene gives a lot of meaning to the music that we are listening to". They create their own stories: "I let myself go. Sometimes I create my own movie if I don't see the count or the servants that come and go or the other one hiding. If this is too distant for me, I transform it into another script".
They just listen to the music or have other experiences: "The only meaning is what I felt there, performed live". "I don't know, I don't give a meaning to the music, I only enjoy it".
The same groups could be found in the longer interviews: Follow stories… "I think that the first time that I hear it [the music] I prefer to be guided a bit... to know where it goes, what's important at that moment... This is the added benefit of this type of concerts compared to the experience of a more traditional concert".
"I think it's necessary to have the guide that he [Lorenzo] gives, it's essential. At the end you feel that you gradually understand it better, you enjoy it more... the music. You're a bit lost, but it's also amusing because you try to guess what happens at that moment, is like a game, it's good".
Own stories… "Since he [Lorenzo] asked people to imagine, to see things, I really saw... I invented a story. I didn't say anything [in some concerts audiences are invited to share their own stories] but I had seen a different story from the one he was telling, although I thought that it was quite coherent with the music…" "This is very stimulating, because you not only have to understand something, but imagine it, which is something more... You can fantasise and invent different stories. It's like having Play-Doh, use it! [...] It's not something that's already finished, you're giving me a material and I can make something else…" Just listen… "To me personally, [the explanations] don't change much my appreciation... then I like it or not..." The group bringing in their own stories is especially interesting. They pinpoint that the interpretation of meaning in music is somehow a negotiation process between one's own past and present and the horizons created by the music, the result of a creative interaction.
This became especially clear in the comments of the youngest interviewees: "Yes, as if they were talking" (7 years old). "I imagined stories... but I created them myself... I wouldn't know how to tell you... I was seeing things... imagining things... but I had to make an effort to see what they were saying... other things came to my head" (10 years old).
In fact, these personalised imaginaries are also captured by the musicians, who incorporate them somehow into the music; a reciprocal relationship between all participants has been woven: "But not only the images... also the implicit codes in this music that have been lost all this time. So the fact is that the more you work with them, you make them yours and they come out in a more natural way..." "We now have a group of El Teatre people that we're gradually understanding a certain repertoire in the same way".
"Sometimes, to know what these notes represent helps you a lot technically".
"[We may say in a rehearsal] we're not together, we're not together... but then you say this represents such and such, and all the sudden we're together".
The approach to musical meaning that I have described is based on the perspective of audiences and musicians participating in the concerts of El Teatre - with its particular take on the classical repertoire - but this analysis indicates that it can be applied to a wider spectrum of live music activities.
Conclusions
In conclusion, I would suggest that reciprocal relationships are central to live music events and experiences. Invisible threads exist and are woven between musicians and audience in various ways.
Reciprocal relationships can be shaped as emotions. They are essential to listeners and musicians in live music performances, milestones to describe musical experiences. I have argued that we can differentiate three categories of emotions: perceived, felt and intended, but the analysis of the data confirms that this is a complex area in which emotions overlap and intermingle with sensations and emotive reactions. However, it is undeniable that these emotions are experienced as collective material, as an invisible matter that makes people empathise with each other.
Reciprocal relationships can also be built on the previous experiences and expectations that are openly expressed and performed in the live music event. In fact, imaginaries created by the music and by the different individual minds engage strongly in the live music event and emerge as vital when they are communicated and become a public narrative.
In a certain way, we can say that music itself is a web of reciprocal bonds that operate at different levels and that shift from context to context (space, people, narration, orchestra). This study has made clear that the meaning of a musical work, whatever it may be, is negotiated between the music and those who perform and listen to it.
“Different means of earnings management of owner-managed firms versus agent-led firms: evidence from chaebols in Korea”
This paper examines the earnings management behavior of large, family-controlled business groups (so-called 'chaebol') in Korea from 2006 to 2010. Specifically, the author studies whether the methods of earnings management differ between chaebol firms and non-chaebol firms. The author finds no significant difference in accrual-based earnings management between these two types of firms. However, the author shows that chaebol firms' real-based earnings management is greater than that of non-chaebol firms, based on their higher abnormal production costs and lower abnormal discretionary expenses, used to manipulate accounting income upward. The results suggest that owner-managed firms tend to choose real manipulation, which negatively affects future corporate performance and consequently misleads investors about the firm value.
Introduction
This paper investigates whether owner-managed firms and agent-led firms have different means of earnings management. In Korea, large, family-controlled business groups (so-called 'chaebol') account for a significant proportion of gross national product. Chaebol firms are usually managed by owner or founder family members, and these owner-managers have control rights over the firm's assets and use these rights to influence the firm's decision-making processes. Unlike agent-led firms, managers in chaebol firms have interests aligned with owners. However, owner-managers may exert power over corporate decisions for private benefits at the expense of minority shareholders. Such a unique circumstance of chaebol firms suggests that the extent of earnings management may be different for chaebol firms compared to non-chaebol firms. Furthermore, the means of earnings management (i.e., accrual-based versus real-based manipulation) may be different for chaebol firms than for non-chaebol firms.
Such conjectures are empirically examined using a sample of listed companies in Korea from 2006 to 2010. I find no difference in accrual-based earnings management, which is measured by discretionary accruals, between chaebol firms and non-chaebol firms. This could be due to the high political costs and auditing risk related to accrual manipulation regardless of ownership structure. However, I show that chaebol firms tend to manipulate real operating activities to a greater extent than non-chaebol firms. Specifically, they increase production activity and decrease discretionary expenses in order to manipulate earnings upward. These findings suggest that owner-managers tend to choose real manipulation, which negatively affects future corporate performance and consequently misleads investors about the firm value. This also implies a greater demand for strict auditing and regulation of chaebol firms. This paper adds to the literature on the relationship between family firm ownership and earnings management. Specifically, it studies the effect of chaebol ownership on the methods of earnings management in Korea. Prior research has documented that earnings management is prevalent in firms with significant Type 2 agency conflict between controlling shareholders and minority shareholders (Fan and Wong, 2002; Liu and Lu, 2007; Bhaumik and Gregoriou, 2010). Studies based on Chinese companies (e.g., Aharony et al., 2005) find that the main motive for earnings management is tunneling and the main vehicle is transactions with related firms. However, when controlling shareholders become the true owners through highly concentrated ownership, they are likely to minimize accounting earnings in order to preserve their future growth potential (Ding et al., 2007). Ali et al. (2007) also show that family firms in China exhibit less discretionary accruals. Consistent with the latter strand, no significant difference in discretionary accruals is found between chaebol firms and non-chaebol firms in Korea. The former strand is extended by showing that the significant Type 2 agency conflict in chaebol firms results in a greater tendency toward real-based earnings management.
The remainder of this paper is organized as follows. Section 1 reviews the related literature and develops hypotheses. Section 2 describes the methodology used in the empirical analysis. Section 3 reports the test results and the final section concludes.
Related literature and hypotheses
This paper is closely related to two streams of literature. First, it adds to the earnings management literature. Schipper (1989) defines earnings management as "the purposeful intervention in the external financial reporting process, with the intent of obtaining some private gain (as opposed to merely facilitating the neutral operation of the process)." One survey conducted in Korea documented that more than 80% of listed companies on KOSPI and KOSDAQ engage in such disclosure management actions. There are several strands of research on the incentives for earnings management. First, the executive compensation hypothesis (Holmstrom, 1982) posits that managers are inclined to manage earnings in order to maximize their compensation, since executive pay depends on firm performance relative to competitors. Second, according to the debt covenant hypothesis, firms which are close to violating debt covenants are more likely to engage in upward earnings management to avoid high contracting costs, such as early redemption or a higher interest rate (Defond and Jiambalvo, 1994). Third, companies with higher political costs may have incentives for downward earnings management or income smoothing in order to defer current period earnings to future periods. This paper attempts to examine whether owner-managed and agent-led firms have different incentives for such financial reporting behavior.
Next, the discussion on owner-managed and agent-led firms stems from agency theory (Jensen and Meckling, 1976). Agents may have interests different from those of owners due to the separation of ownership and management, thereby acting to maximize their private benefits at the expense of owners. However, for owner-managed firms, such agency problems are mitigated, since the owner is in control. Instead, there is another form of agency conflict, between controlling shareholders and minority shareholders. The owner family (controlling shareholders) can exert significant power over important business decisions to maximize their own wealth even if it hurts the minority shareholders (La Porta et al., 1999). This type of agency cost becomes greater when there is no control mechanism to protect minority shareholders against controlling shareholders. In Korea, large conglomerates, so-called 'chaebol' firms, are considered owner-managed ones in which founder family members influence major business decisions and control a significant number of shares.
There are two opposing hypotheses that can explain the effect of such an ownership structure on financial reporting behavior. According to the convergence-of-interest view, the owner-manager would want to maximize the firm value, since there is no agency conflict between owners and managers. Thus, there is less incentive for earnings management in owner-managed than in agent-led firms. On the other hand, the management entrenchment view argues that the owner-manager is inclined to build an empire and maximize their private benefits by abusing corporate resources. Under such a view, the agency conflict between controlling and minority shareholders is significant in chaebol firms. Therefore, owner-managed firms are more likely to manipulate earnings at the expense of minority shareholders. These opposing predictions lead to the first hypothesis as follows:
H1: Earnings management will be greater (less) in chaebol firms than in non-chaebol firms.
In addition, the means to manipulate financial statements may differ by ownership structure. Prior research has mainly documented two means of earnings management. First, companies may manipulate accruals to report favorable accounting income. Such a discretionary component of accruals is measured using various models, including the modified Jones model (Dechow et al., 1995). Second, companies also manage earnings by changing real operating activities (Roychowdhury, 2006). Such real-based manipulation is measured by abnormal operating cash flows, abnormal discretionary expenses and abnormal production costs. The choice of earnings management methods depends on firm-specific characteristics or circumstances. Zang (2007) shows that managers use accrual manipulation and real manipulation as substitutes in managing earnings. Specifically, after lawsuit filings, managers are documented to switch from accrual management to real manipulation.
The means of earnings management may be affected by their potential costs, which differ for owner-managed and agent-led companies. Managers in agent-led companies have a risk-averse tendency, since their job security is harmed by risky investments and poor (short-term) performance. By contrast, owner-managers have a longer horizon than agent-managers do (Fama and Jensen, 1983). This implies that owner-managed and agent-led companies could face very different trade-offs between accrual-based and real-based manipulation. Therefore, the second hypothesis is stated as follows:
H2: Chaebol and non-chaebol firms will have different means of earnings management.
Data and sample.
Our sample consists of listed companies on KOSPI and KOSDAQ from 2006 to 2010.The sample period ends in 2010, because the new accounting standard (Korean International Financial Reporting Standard) was adopted in 2011.The author collects the financial variables used in data analysis from TS2000 database.
The following data requirements are imposed on the initial sample. First, firms in the financial and insurance industries are deleted since their financial statements are not comparable to those of other industries. Second, firms with a non-December fiscal year-end are excluded. Third, firms with impaired capital, negative total assets, or negative book equity are deleted. After removing firm-years with missing data for the tests, the final sample contains 2,184 firm-year observations (478 distinct firms). For regression purposes, all variables are winsorized at the 1st and 99th percentiles.
Variables. (1) Chaebol: owner-managed versus agent-led firms
In general, firms in which the founder or his/her family member is an executive are classified as owner-managed firms and the others as agent-led firms (e.g., Anderson and Reeb, 2003). In Korea, large conglomerate groups, so-called chaebols, are composed of owner-managed companies in which a controlling shareholder or founder family member is an executive or the chairman of the board (Jeong and Bae, 2007). These companies have significant related party transactions among subsidiaries, and the Fair Trade Committee restricts such mutual contributions. The committee announces the list of companies which are restricted on mutual contribution, based on their total assets, in April every year. Hence, the author classifies the companies on this list as owner-managed ones and the others as agent-led ones.
(2) Accrual-based earnings management
Accrual-based earnings management is measured with discretionary accruals estimated from the modified Jones model, as shown in equation (1), where TAC is total accruals for firm i during year t, Rev is the revenue for firm i in year t, PPE denotes property, plant and equipment for firm i at the end of year t, and ROA is return on total assets for firm i in year t-1. All variables are scaled by the total assets at the beginning of year t. The non-discretionary portion of accruals is then derived by using the coefficients estimated from regression equation (1), as shown in equation (2),
where Rec is net receivables for firm i at the end of year t. Finally, the discretionary accruals are calculated as the difference between total accruals and non-discretionary accruals, as in equation (3). DA1 is the outcome when total accruals are computed by subtracting operating cash flows from net income, and DA2 is the outcome when total accruals are computed by subtracting operating cash flows from operating income. Also, in order to capture both upward and downward accrual-based earnings management, the author uses the absolute value of discretionary accruals, i.e., Abs(DA1) and Abs(DA2).
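To make the estimation procedure concrete, the following is a minimal sketch (not the paper's code) of how cross-sectional, industry-year discretionary accruals could be computed with pandas and statsmodels. The column names, the intercept, and the 1/assets term are assumptions; the exact specification should follow the paper's equations (1)-(3).

```python
# Hypothetical sketch of a performance-adjusted modified-Jones estimation
# (Dechow et al., 1995, with lagged ROA added as described in the text).
import pandas as pd
import statsmodels.formula.api as smf

def discretionary_accruals(df: pd.DataFrame) -> pd.Series:
    """df columns per firm-year: tac, d_rev, d_rec, ppe (raw values),
    roa_lag, assets_lag, industry, year."""
    d = df.copy()
    for col in ["tac", "d_rev", "d_rec", "ppe"]:
        d[col + "_s"] = d[col] / d["assets_lag"]          # scale by beginning assets
    d["inv_assets"] = 1.0 / d["assets_lag"]

    da = pd.Series(index=d.index, dtype=float)
    for _, g in d.groupby(["industry", "year"]):          # cross-sectional estimation
        fit = smf.ols("tac_s ~ inv_assets + d_rev_s + ppe_s + roa_lag", data=g).fit()
        b = fit.params
        nda = (b["Intercept"]
               + b["inv_assets"] * g["inv_assets"]
               + b["d_rev_s"] * (g["d_rev_s"] - g["d_rec_s"])   # modified-Jones adjustment
               + b["ppe_s"] * g["ppe_s"]
               + b["roa_lag"] * g["roa_lag"])
        da.loc[g.index] = g["tac_s"] - nda                 # discretionary accruals
    return da
```

Abs(DA1) and Abs(DA2) would then simply be the absolute values of this series computed under the two total-accrual definitions described above.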
(3) Real-based earnings management
Real-based earnings management is measured following the approach taken by Roychowdhury (2006), who estimates the normal levels of operating cash flows, production costs, and discretionary expenses from cross-sectional regression models (by industry and year), as shown in equations (4)-(6), where CFO is operating cash flows during year t, S is the sales revenue for year t, A is the total assets at the beginning of year t, PROD is the production cost for year t (= COGS + ΔINV), and DISX is discretionary expenses during year t (= selling and general expenses, taxes, depreciation, rent expenses, and insurance expense).
Normal levels of operating cash flows, production costs and discretionary expenses are calculated using the coefficient estimates from equations (4)-(6). These normal values are then subtracted from their raw values to obtain the abnormal CFOs, abnormal production costs, and abnormal discretionary expenses. For simplicity of interpretation, the real-based earnings management measures are defined as follows: ACFO = abnormal CFO*(-1); APROD = abnormal PROD; ADISX = abnormal DISC EXP*(-1). Also, companies may utilize more than one device to manipulate real activities. Thus, following Cohen et al. (2008) and Cohen and Zarowin (2010), a comprehensive measure of real-based management is defined as follows: REM = ACFO + APROD + ADISX.
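A parallel sketch for the real-activity measures is shown below. It follows the standard Roychowdhury (2006) regressors, which may differ in detail from the paper's equations (4)-(6), and the column names are assumptions.

```python
# Hypothetical sketch of Roychowdhury (2006)-style abnormal real activities and
# the composite REM measure defined in the text.
import pandas as pd
import statsmodels.formula.api as smf

def real_em_measures(df: pd.DataFrame) -> pd.DataFrame:
    """df columns per firm-year: cfo, prod, disx, sales, d_sales, d_sales_lag,
    sales_lag (raw values), assets_lag, industry, year."""
    d = df.copy()
    for col in ["cfo", "prod", "disx", "sales", "d_sales", "d_sales_lag", "sales_lag"]:
        d[col + "_s"] = d[col] / d["assets_lag"]
    d["inv_assets"] = 1.0 / d["assets_lag"]

    formulas = {  # normal-level regressions, estimated by industry-year
        "cfo_s": "cfo_s ~ inv_assets + sales_s + d_sales_s",
        "prod_s": "prod_s ~ inv_assets + sales_s + d_sales_s + d_sales_lag_s",
        "disx_s": "disx_s ~ inv_assets + sales_lag_s",
    }
    out = pd.DataFrame(index=d.index)
    for dep, formula in formulas.items():
        resid = pd.Series(index=d.index, dtype=float)
        for _, g in d.groupby(["industry", "year"]):
            resid.loc[g.index] = smf.ols(formula, data=g).fit().resid
        out["ab_" + dep] = resid
    # Sign conventions from the text: ACFO and ADISX are abnormal values times -1
    out["ACFO"] = -out["ab_cfo_s"]
    out["APROD"] = out["ab_prod_s"]
    out["ADISX"] = -out["ab_disx_s"]
    out["REM"] = out["ACFO"] + out["APROD"] + out["ADISX"]
    return out
```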
(4) Control variables
A standard set of controls for the regression analyses is used. First, firm size is measured as the natural logarithm of a firm's total assets. Large firms have greater political costs and consequently greater incentives to smooth earnings than small firms (e.g., Moses, 1987). By contrast, there is more available information and less information asymmetry for large firms, which implies that they have less incentive for income smoothing or earnings management (e.g., Albrecht and Richardson, 1990; Choi and Lee, 2002).
Also, Debt is measured as total liabilities divided by total assets. Highly levered firms are likely to manage accounting income in order to avoid debt covenant violation (Defond and Jiambalvo, 1994).
Next, the market-to-book (MTB) ratio is included as a proxy for a firm's growth opportunities. Firms with greater growth opportunities face higher operating uncertainty, thereby having greater incentives to report high-quality financial statements to reduce the information asymmetry between the firm and external capital providers. In addition, ROA is added to control for the effect of firm profitability on accounting quality.
Moreover, the author adds three more control variables which proxy for audit quality, information environment, and corporate governance. First, a dummy for Big 4 auditors (Big4) is included to control for audit quality, since Big 4 auditors are documented to be better at constraining a client's earnings management compared to non-Big 4 auditors (e.g., Krishnan, 2003). Second, analyst following is included as a proxy for a firm's information environment, as more analysts covering the firm may impose higher pressure to meet or beat earnings targets, and consequently these firms are more inclined to manage earnings. A dummy variable (Following), set to 1 for firms with at least one analyst following the firm and 0 otherwise, is used. Third, corporate governance is included, because better governed companies are less likely to engage in accrual-based or real-based earnings management. Hence, the author uses the corporate governance score (Gscore), which is based on shareholder protection, board of directors, disclosure, and the audit system (provided by the Korea Corporate Governance Service).
Test results
A. Descriptive statistics. Table 1 provides the summary statistics of the test variables. The means and medians of the accrual-based earnings management measures (i.e., the absolute value of discretionary accruals) are not significantly different between chaebol firms and non-chaebol firms. However, the real-based earnings management measures seem to differ for chaebol firms. APROD and ADISC (ACFO and REM) are greater (smaller) for chaebol firms than non-chaebol firms, on average. This implies that chaebol firms tend to manipulate their production activity to decrease COGS and decrease discretionary expenditures to manipulate earnings upward.
Chaebol and non-chaebol firms seem to have different firm characteristics and information environments as well. Specifically, chaebol firms have larger size, higher profitability, better governance, and less debt, on average. Also, chaebol firms are more likely to be covered by analysts and audited by Big 4 auditors. Table 2 reports the regression results for accrual-based earnings management. In both columns, the regression coefficient on the chaebol dummy variable is not statistically significant. This suggests that the accrual-based earnings management of chaebol firms is not significantly different from that of other firms. Since chaebol firms are subject to high political costs and risks in case of detection by the authorities, they do not engage in accrual manipulation to a greater extent than non-chaebol firms. The second set of tests investigates whether real-based earnings management is greater for owner-managed firms than for agent-led firms. Taken together, the evidence suggests that chaebol firms are more likely to manipulate real operating activities, not accruals, so as to report more favorable earnings. These findings imply that owner-managers have a longer horizon than agents; hence, they could use channels with long-term impacts on firm value in order to manipulate short-term earnings.
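As a rough illustration of the multivariate tests (a sketch under assumed variable names and fixed-effects choices, not the paper's estimation code), each earnings management measure could be regressed on a chaebol indicator together with the controls described above:

```python
# Hypothetical sketch of the regression test: an earnings management measure on
# the chaebol dummy plus controls.
import statsmodels.formula.api as smf

def chaebol_test(panel, dep="REM"):
    formula = (f"{dep} ~ chaebol + size + debt + mtb + roa + big4 + following + gscore"
               " + C(year)")
    return smf.ols(formula, data=panel).fit()

# Example: chaebol_test(panel, dep="REM").summary() would show the coefficient on
# the chaebol dummy analogous to the 0.0512 reported in Table 4.
```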
Conclusions
This paper examines whether the methods of earnings management are related to family firm ownership, which is proxied by a chaebol firm dummy, in Korea. The evidence shows that chaebol firms use real-based earnings management to a greater extent, but not discretionary accruals, in order to report favorable accounting earnings. This suggests that owner-managers tend to sacrifice long-term firm value by manipulating production and discretionary activities. Hence, the findings of this paper suggest that external auditors and regulatory authorities need to enhance their monitoring of chaebol firms.
Table 1 .
Summary statistics of key variables
Table 2 .
The relation between accrual-based earnings management and chaebol firms
Table 3 .
The relation between real-based earnings management measures and chaebol firms
Table 3 (cont.). The relation between real-based earnings management measures and chaebol firms
The author also examines the relationship between a comprehensive measure of real-based earnings management and chaebol firms. Table 4 reports the test results when the dependent variable is REM, which is the sum of abnormal operating cash flows, abnormal production costs, and abnormal discretionary expenses. Consistent with the individual test results, the regression coefficient on the Chaebol dummy variable is positive and statistically significant at the 1% level (0.0512, t-value = 4.37).
Table 4 .
The relation between comprehensive real-based earnings management measure and chaebol firms
Rapid Prototyping Technology for Manufacturing GTE Turbine Blades
The conventional approach to manufacturing turbine blades by investment casting is expensive and time-consuming, as it takes a lot of time to make geometrically precise and complex wax patterns. Turbine blade manufacturing in pilot production can be sped up by accelerating the casting process while keeping the geometric precision of the final product. This paper compares the rapid prototyping method (casting the wax pattern composition into elastic silicone molds) to the conventional technology. Analysis of the size precision of blade casts shows that silicon-mold casting features sufficient geometric precision. Thus, this method for making wax patterns can be a cost-efficient solution for small-batch or pilot production of turbine blades for gas-turbine units (GTU) and gas-turbine engines (GTE). The paper demonstrates how additive technology and thermographic analysis can speed up the cooling of wax patterns in silicone molds. This is possible at an optimal temperature and solidification time, which make the process more cost-efficient while keeping the geometric quality of the final product.
Introduction
Blades are among the most mass-produced and highly loaded parts of GTU and GTE [1,2]. They face high requirements for geometric precision and surface quality. Due to the lack of developed surfaces for precise basing, one of the most efficient ways (and sometimes the only possible way) to make such blades is the casting of heat-resistant nickel alloys into ceramic molds. Turbine blades are mostly machined by grinding [3], which enables manufacturers to meet high precision and surface roughness requirements [4]. Another, and the most expensive, way to machine heat-resistant alloys is dimensional electrochemical machining [5][6][7]. Making geometrically complex parts by investment casting implies making workpieces with zero stock removal for the airfoil profile and the inner surfaces. Laser melting of heat-resistant composition powders has competitive advantages over casting technology [8]. The main process component that determines the final geometric precision of casting is the wax pattern making process; such models can be produced in different ways: by using metal molds [8], FDM [9], or elastic silicone forms [10]. Wax pattern making parameters such as casting time, solidification time, and casting temperature are crucial; controlling these parameters helps avoid defects related to dimensional deviations. One of the biggest disadvantages of investment casting is its high overall cost. This is due to the necessity of special machinery, heat-resistant binding materials, and the labor intensity of making molds. Besides, making the metal molds for wax casting is time-consuming. Given all the problems of pilot and small-batch multiple-item manufacturing, making wax patterns for turbine blades conventionally by using metal molds is a very long and costly process as well. This is why the authors believe rapid prototyping technology to be promising, as it reduces costs and simplifies the process. Fast prototyping primarily differs from the classical approach in the following: far lower mold costs and production times (usually five times less), and far fewer usage cycles compared to metal molds. Fast prototyping is more cost-efficient when one has to make small batches of wax patterns for investment casting [10]. Silicone molds are usually made by using a 3D-printed master pattern. The paper describes the development and use of this technology for making geometrically complex and highly precise products for the aerospace industry. The method can be used for making wax turbine-blade models using silicone molds for subsequent investment casting. The goal of the research is to show how rapid prototyping can be used at a manufacturing facility to make geometrically complex castings of required precision and surface quality.
Materials and methods
A blade of an auxiliary GTE power unit turbine was chosen as the object of study. Such blades are made of a nickel-based heat-resistant alloy. Table 1 describes the chemical composition of the alloy. The manufacturing process is staged as follows [11,12]:
− 3D modeling in CAD with due account of shrinkage and the stock to be removed when polishing the inner surfaces of the blade;
− growing the master pattern;
− making a silicone mold;
− making wax patterns in the silicone mold;
− assembling the wax patterns using a sprue system (SS);
− applying refractory ceramics and removing the wax;
− casting metal into the ceramic shell;
− removing the SS.
3D model design
When making the master pattern, it is first 3D-modeled with due account of all the drawing-specified technical requirements. The master pattern was built in Siemens NX, a CAD/CAM/CAE package [13]. The authors first formed 2D section profiles, then built the corresponding surfaces passing through those profiles. One of the obvious advantages of the design process is that, when modeling, one can take into account all the stock to be removed in machining, as well as the shrinkage of all the components, i.e.:
− the shrinkage of the wax casting after it is cast into the silicone mold; this shrinkage depends on the wax in use;
− the shrinkage of the heat-resistant alloy when solidified in the ceramic mold;
− the shrinkage of the ceramic mold itself.
The shrinkage factor was determined based on the following dependencies: Lw = Lm(1 - β) and Lb = Lw(1 - α), so that Lb = Lm(1 - α - β) + Lm·αβ, where α is the metal shrinkage factor, β is the wax pattern composition shrinkage factor, Lw is the overall dimension of the wax pattern, Lb is the overall dimension of the cast blade, and Lm is the overall dimension of the model.
Given the low values of α and β, the product Lm·αβ can be deemed negligibly small. Thus, the correction factor for the master pattern is the sum of the wax and metal shrinkage factors. When using the heat-resistant alloy per Table 1 as well as the wax pattern composition, α = 0.5% and β = 1%. One should keep in mind that this formula ignores the ceramic mold shrinkage, which is approximately 0.1%. Thus, the total shrinkage factor equals 1.5%. 0.2 mm of stock is to be removed when polishing the inner surfaces of the blade.
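As a rough numeric illustration of this correction (a sketch, not the authors' calculation; the exact scaling convention is an assumption), the master-pattern dimension can be obtained by scaling the nominal blade dimension by the combined shrinkage factors:

```python
# Hypothetical worked example of the shrinkage correction described above.
# With metal shrinkage alpha = 0.5% and wax shrinkage beta = 1%, the cross term
# is negligible, so the master pattern is oversized by about 1.5%.
def master_dimension(blade_dim_mm: float, alpha: float = 0.005, beta: float = 0.01) -> float:
    """Size a master-pattern dimension from the nominal blade dimension."""
    return blade_dim_mm / ((1.0 - alpha) * (1.0 - beta))  # ~ blade_dim * (1 + alpha + beta)

print(round(master_dimension(76.0), 2))  # a 76 mm blade dimension -> about 77.15 mm on the master
```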
Making the master pattern
Fast prototyping quickly and easily produces a physical 3D object based on the 3D CAD model. An Objet Eden 350 3D printer made this master pattern from a photopolymer by means of layer-by-layer growth. PolyJet-grown products have the following advantages: 3D-model-based production of parts is fast, and such parts can be made large enough. One notable drawback is the high roughness of the prototype surface.
Before the silicone molds were made, the geometry of the master pattern was checked using a DEA GLOBAL Performance coordinate-measuring machine (CMM) at 20 °C ± 2 °C and a relative humidity of 80%. To control the geometry, the authors used the PC-DMIS CAD++ Ver. 4.3 MR1 software package.
The master pattern was fixed on the CMM table with a grip at the blade root trim surface, see Figure 1a.
Basing was done at the root surfaces, with the coordinate system located per the 3D model of the workpiece. The master pattern airfoil was measured in three sections, sec_31, sec_50, and sec_76 (see Figure 1b), which are located 31 mm, 50 mm, and 76 mm away from the blade sole per the 3D model.
The Objet Eden 350 cannot reach an airfoil profile roughness of Ra = 1.25…2.5 µm. When modeling, the authors therefore added 0.2 mm of stock to the inner surface and the airfoil of the blade, to be removed in polishing. To make wax patterns and metal castings of the required precision and roughness, the authors added a master pattern polishing operation to the process. The 3D-printed master pattern had a roughness Ra = 3.2 µm (see Figure 2b), whereas the design documentation requires an airfoil profile surface roughness Ra = 1.6 µm.
Figure 1. CMM measurements of the master pattern
Airfoil and inner surface polishing was done manually in multiple runs. In the first run, the authors removed the initial layer using P1200 sandpaper; for every consecutive run, finer sandpaper was used until they reached a fineness of P2500. After the airfoil and the inner surfaces were polished, the authors checked the surface roughness using a Hommel-Etamic Tester W55 profilograph, whereas geometry was controlled using the CMM (see Figure 2a). According to the authors' checks, the master pattern matched the requirements for airfoil profile geometry deviations: the deviations did not exceed the allowance of +0.2 mm / -0.15 mm.
Making an elastic silicone mold
After the sprue system was fully assembled, the authors began making a silicone mold. The first stage is to assemble a Plexiglas container around the master pattern. The container is filled with silicone, vacuum-degassed, and placed in a climatic cabinet to solidify, where it is kept for 4 hours at 40 °C. The chemical reactions thus triggered result in the solidification of the mold. After the mold is fully solidified and cooled down, the Plexiglas container is disassembled along the joint lines, and the master pattern is taken out. The mold is then assembled, and wax is melted and degassed in a vacuum chamber; it is then cast into the silicone mold. After the wax solidifies, the mold is disassembled, and the finished wax pattern is taken out. A silicone mold can survive about 80-100 castings depending on the geometric complexity of the pattern. This process is significantly cheaper than conventional mold making.
The precision of the wax pattern depends on the casting parameters, i.e., the melting temperature, the exposure time, and the mold temperature. The casting temperature and the exposure time are more important for dimension-related precision. Exposure time must be selected based on the cooldown rate, the wax pattern shape, and the wax composition casting temperature. As part of this thermographic experiment, the authors studied the effect of exposure time and casting temperature on the shrinkage and the final geometric precision of the wax pattern, with the goal of optimizing the casting parameters. For that purpose, the authors did a series of test castings at various temperatures. Silicone molds were cooled down with forced cooling (blowing) and without it. Thermographic results showed that when using forced blowing, the wax pattern in a silicone mold cools down to room temperature in 2.5 hours; without forced cooling, it takes 4 hours, or 1.7 times longer (see Figure 3).
Figure 3. Thermographic analysis of the silicone mold cooldown
Forced blowing of the silicone molds also resulted in less shrinkage of the wax pattern. To sum up, forced blowing is necessary for saving time when making a single wax pattern.
A CMM was then used to measure the inner surfaces of the wax patterns made; the procedure was the same as for the master pattern. The model was checked for shape and surface positions relative to the blade root. Figure 4 shows the blade airfoil profile control process by measuring the drawing-specified sections with a CMM. Table 2 specifies the maximum errors in the airfoil profiles of the wax pattern relative to the master pattern depending on the wax composition casting temperature. According to the data in Table 2, casting at a temperature of 95 °C results in the least shrinkage-related dimensional deviations. As the upper parts of the blade have less stock and are thinner, they cool down faster. A shrinkage cavity emerges in the blade root, as it is the most massive part of the wax pattern. The emergence of the shrinkage cavity is confirmed by the wax pattern measurements and is compensated by the stock removal implied at the modeling stage for subsequent machining (see Figure 5).
Figure 5. Cooldown thermography: (a) at the onset of cooldown; (b) when the wax pattern is removed from the silicone mold.
Based on the results of these experiments, the authors optimized the following parameters of casting wax into silicone molds: cooldown (the best option is forced blowing), exposure time (2.5 hours), and casting temperature (95 °C).
Economic comparison of wax pattern making technologies
To analyze the cost and time of making wax patterns by rapid prototyping and by using metal molds on a CNC machining center, the authors calculated time consumption, see Figure 3.
Results
DEA Global Performance was used to measure 220 wax patterns and castings. Analysis of data obtained by comparison to the 3D model showed that the blade geometry and the airfoil twist angle were within the allowed range. The authors have determined the optimal conditions for casting wax into silicone molds and for making wax patterns.
Silicone molds can be used to make geometrically complex wax patterns for multiple-item smallbatch or pilot manufacturing, thus eliminating the need for CNC production of metal molds.
Conclusions
Fast prototyping is a cost-efficient solution for small-batch manufacturing of GTE blades. Silicone molds have a number of advantages over the conventional technology, mainly faster production and lower costs. Studies have shown that silicone molds can be used to make blades as they provide the necessary accuracy. However, the conventional metal-mold technology is still better for large-scale production.
Interactive Dashboard to Monitor the COVID-19 Outbreak and Vaccine Administration
Dashboards are the most common visualization method for displaying COVID-19 data and informing the public. We examined 15 different dashboards to see how various visualization techniques were used. This paper describes the creation and implementation of a dashboard for COVID-19 epidemic and vaccination administration data in Sri Lanka.
Introduction
COVID-19 has expanded over the globe, having a significant impact on our daily lives and work. Early responses and timely decisions and actions are critical to saving communities and economies worldwide. Data is essential in order to make effective decisions. Data-driven information guides the decision-making process and also evaluates the effectiveness of strategies taken.
Massive amounts of data are being generated in the response to the COVID-19 pandemic. Given this available data, it is critical to create tools for exploratory analysis for policy-makers, health officials, and the general public. Dashboards are one of the best visual interpretation methods for tracking the COVID-19 pandemic's spread and vaccine administration. Dashboards allow users to quickly interact with a combination of exploratory visualizations and gain a quick overview of the data. This paper describes the development and implementation of a dashboard for the COVID-19 outbreak and vaccine administration data in Sri Lanka.
There are a plethora of COVID-19 visualization dashboards that have been designed to visualize the pandemic's global and local status. Different software can be used to generate dashboards. We explored 15 dashboards designed to visualize COVID-19 data at the global and country levels. First, the dashboards were compared to identify the various features, visualization approaches, and enhancements that should be implemented. Next, we developed an interactive dashboard to visualize the COVID-19 outbreak and vaccination information in Sri Lanka. This dashboard provides front-line health officers with situational awareness of the spread of COVID-19 and the status of the vaccination program.
The rest of the paper is organized as follows: Section 2 reviews dashboards created using data related to the COVID-19 pandemic. Section 3 presents the methodology and basic design concept; Section 4 presents the results; and Section 5 concludes.
Literature Review
Dashboards are one of the best visual interpretation methods for tracking and communicating the spread of the COVID-19 pandemic. The 15 dashboards we used in the literature survey are listed in Table 1. We compared dashboards to identify data types, plotting techniques, colour themes, and other features such as interactivity on plots and panel numbers. As shown in Table 02, all dashboards considered in this paper represent data related to COVID-19 confirmed cases, recovered cases, and deaths. There were 8 dashboards out of 15 that contained vaccination details. Table 03 highlights the dashboard visualization techniques. Value boxes have been utilized to display total figures on practically every dashboard. The most common ways of visualizing confirmed cases, recovered cases, deaths, and immunization details are bar charts and line charts (trend lines). The majority of dashboards displayed data on a daily or weekly basis. The spatial distribution of COVID-19 cases by country, province, region, and other factors is tracked using choropleth maps. When data are visualized on maps with a colour-code system, circles scaled to the number of cases have been used to show the variation in magnitude. Several dashboards use doughnut-shaped pie charts to indicate total COVID-19 confirmed cases, recovered cases, active cases, and deaths as proportions. Furthermore, region, gender, age group, and ethnicity can be identified as common breakdowns of COVID-19 cases. Data tables representing the distribution of cases by province/region have been added to some dashboards. Very few dashboards visualized COVID-19 test details. Only 6 dashboards included comparisons with the global situation. In addition, the fatality rate, incidence rate, ICU beds, stage of the patients, and hospitalization details are contained in several dashboards.
Comparison of Dashboards
Before developing a dashboard, it is necessary to think about which visualization tools and features should be contained in it. The most suitable plots, the number of panels, the data to be included, how to fit the dashboard on a screen, the colours, and whether or not it is updated in real time are the common things that should be considered before development. Table 4 summarizes information under the following categories:
i. Number of panels - how many panels are included in the dashboard.
ii. Visualization tools - the graphical representations of the data.
iii. Fitted on a single screen - whether the dashboard fits on a single screen or not (whether users can see the whole dashboard on a single screen without adjusting through grid overlay).
iv. Colour theme - whether a unique colour is used for one data type in the whole dashboard (i.e., one colour scale for one data type everywhere on the dashboard).
v. Dark background - whether the background colour of the dashboard is dark or light.
vi. Data available - whether users can download the data or whether data is available to reproduce the results.
vii. Real time updated - whether the dashboard is updated daily/at a specific time (live dashboard) or not.
Data
We obtained data from COVID-19 situation reports published by the Epidemiology Unit, Ministry of Health Sri Lanka. The data includes the number of death cases, number of hospitalized cases, number of recovered cases, and COVID-19 vaccinated counts in Sri Lanka. The data is made available through an open-source R package covid19srilanka (Talagala 2021).
Design and development
R software was used for data cleaning and analysis. The flexdashboard (Iannone, Allaire, and Borges 2020) package was used to build the data visualization dashboard. The initial layout for the dashboard was prepared based on Krispin (2021). Data visualizations are generated using the ggplot2 (Wickham 2016) and plotly (Sievert 2020) packages in R. We used colour-blind friendly colour palettes for the graphics. A diverging colour palette was used to represent qualitative data, and a sequential colour theme was used to represent numeric variables. Table 5 provides an overview of the methods that have been used to visualize the data.
We now describe the novel visualization approaches we included in our dashboard. To effectively distribute the vaccine and to support situational awareness and inform policy-makers' decision-making, it is important to know the district-wise spread of COVID-19 cases. We have daily COVID-19 data on confirmed cases in all 25 districts in Sri Lanka. This structure generates a multiple time series collection. Visualizing this time series data is useful for identifying similarities and dissimilarities between districts and their general trends. There are two approaches to visualizing these time series: (i) creating individual time series plots for each district (as shown in Figure 1-A), and (ii) plotting all time series on a single panel at the same time (as shown in Figure 1-B). Plotting all time series simultaneously is not effective due to overlapping time series and scale differences. Plotting separate panels for each district is not effective either, because it is hard to compare across 25 different panels at once. In order to overcome these problems in multiple time series visualization, we use heat maps (Peng 2008) to visualize global and local similarities and dissimilarities across districts. The associated results are shown in Figure 2. Here, two heat maps are used to show the global variations (Figure 2: A) and local variations (Figure 2: B) in the time series collection. In Figure 2-A, cell colours represent the actual counts of the COVID-19 confirmed cases. This is useful for getting an idea of the differences in absolute values. In Figure 2-B, cell colours represent the normalized values created by applying the min-max transformation. A min-max transformation is applied to each district's time series by using the corresponding district's minimum and maximum values. This helps us to get an idea about patterns within districts. For example, according to Figure 2A, we can see that in the Colombo, Gampaha, and Kalutara districts, COVID-19 cases are significantly higher than in other districts. According to Figure 2B, all districts show an increasing trend pattern, as the right-hand side cells are lighter than the left-hand side cells in the heat map. Furthermore, according to Figure 2B, all districts reported a high number of cases on August 19, 24, and 29, 2021. Figure 3B is useful for identifying these local outlying behaviours. As shown in Figure 4, we also use a choropleth map and a Dorling cartogram to visualize the spatial distribution of COVID-19 cases. The vaccination information is visualized through interactive time series plots and bar charts.
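The per-district min-max normalization behind the second heat map is straightforward to reproduce. The authors' dashboard is built in R; the following Python sketch (with an assumed wide data layout) only illustrates the transformation:

```python
# Minimal sketch of the min-max transformation applied to each district's series.
# Assumes a wide table: rows are dates, columns are the 25 districts.
import pandas as pd

def minmax_by_district(cases: pd.DataFrame) -> pd.DataFrame:
    """Scale each district's daily counts to [0, 1] using its own min and max."""
    return (cases - cases.min()) / (cases.max() - cases.min())
```

The raw counts then drive the first heat map (absolute comparisons across districts), while the normalized values drive the second (within-district patterns).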
Results
The "Sri Lanka COVID-19 Dashboard" provides an overview of the COVID-19 pandemic and administration of vaccine information in Sri Lanka. This dashboard has eight panels as listed in Table 6.
Discussion and Further research
Bar charts and line charts are the most frequently used tools for visualizing total, daily, and weekly case counts and for comparisons over time. Some dashboards contain doughnut-shaped pie charts to summarize the totals, and in almost every dashboard value boxes are used to display total figures. Some dashboards contain interactive maps and data tables to visualize the distribution of cases by country, province, region, or state. All dashboards are updated daily, in real time. Gender, age group, and ethnicity are common breakdowns. Most dashboards also make their data sets and related links available.
We used a colour-blind-friendly theme when creating our dashboard. The dashboard includes both static and interactive charts that serve explanatory (telling the reader something), exploratory (letting the user interact with the plots), and exhibitory (showing the data visually) purposes. Data can speak to us, but the message is not always easy to read from a single representation; hence, we examine the same data from different angles using different types of plots. We do not yet have district-wise vaccination details; incorporating them is a potential direction for future work. Our dashboard is completely reproducible, and the source code for reproducing the results is available in a public GitHub repository at https://github.com/thiyangt/covid19srilanka.
Utility of silhouette showcards to assess adiposity in three countries across the epidemiological transition
The Pulvers’ silhouette showcards provide a non-invasive and easy-to-use way of assessing an individual’s body size perception using nine silhouette shapes. However, their utility across different populations has not been examined. This study aimed to assess: 1) the relationship between silhouette perception and measured anthropometrics, i.e., body mass index (BMI), waist circumference (WC), waist-height-ratio (WHtR), and 2) the ability to predict with silhouette showcards anthropometric adiposity measures, i.e., overweight and obesity (BMI ≥ 25 kg/m2), obesity alone (BMI ≥ 30 kg/m2), elevated WC (men ≥ 94 cm; women ≥ 80 cm), and WHtR (> 0.5) across the epidemiological transition. 751 African-origin participants, aged 20–68 years old, from the United States (US), Seychelles, and Ghana, completed anthropometrics and selected silhouettes corresponding to their perceived body size. Silhouette performance to anthropometrics was examined using a least-squares linear regression model. A receiver operator curve (ROC) was used to investigate the showcards ability to predict anthropometric adiposity measures. The relationship between silhouette ranking and BMI were similar between sexes of the same country but differed between countries: 3.65 [95% CI: 3.34–3.97] BMI units/silhouette unit in the US, 3.23 [2.93–3.74] in Seychelles, and 1.99 [1.72–2.26] in Ghana. Different silhouette cutoffs predicted obesity differently in the three countries. For example, a silhouette ≥ five had a sensitivity/specificity of 77.3%/90.6% to predict BMI ≥ 25 kg/m2 in the US, but 77.8%/85.9% in Seychelles and 84.9%/71.4% in Ghana. Ultimately, silhouettes predicted BMI, WC, and WHtR similarly within each country and sex but not across countries. Our data suggest that Pulvers’ silhouette showcards may be a helpful tool to predict anthropometric and adiposity measures in different populations when direct measurement cannot be performed. However, no universal silhouette cutoff can be used for detecting overweight or obesity status, and population-specific differences may stress the need to calibrate silhouette showcards when using them as a survey tool in different countries.
Background
The prevalence of overweight and obesity is increasing in populations spanning the epidemiological transition and may be particularly high in individuals of African-origin. [1][2][3][4] Elevated weight is associated with the development of non-communicable diseases (NCDs), including cardiovascular disease, type 2 diabetes mellitus, hypertension, dyslipidemia, cancers, and sleep apnea. [5][6][7][8] Because of its simplicity and ease of measurement, body mass index (BMI, kg/m2) is widely used to assess a person's adiposity. In addition to BMI, waist circumference (WC) and waist-to-height ratio (WHR) correlate well with fat mass as assessed by accurate methods such as computed tomography (CT). [9][10][11][12] However, BMI does not discriminate well between adipose and lean mass, and waist circumference and waist-to-height ratio have been suggested to predict adiposity better. 9-11 Yet, while they may not predict actual adiposity perfectly at the individual level, these simple adiposity markers may reliably predict mean BMI levels and the prevalence of obesity at the population level. 9,12
Measures of adiposity that do not rely on actual measurements may be useful in some situations, such as in surveys and studies of public health, anthropology, economics, and marketing, particularly when studies must be performed without direct contact with a person (e.g., mail-order or internet-based) or to avoid the burden of asking respondents to remove clothing. Furthermore, self-reported adiposity measures (e.g., self-reported height and weight) are prone to reporting bias and can also depend on access to home anthropometric tools like scales and on varying cultural views on body size. [13][14][15][16][17][18][19]
Initially developed by Stunkard and colleagues, sex-specific silhouette showcards (referred to as "silhouettes" hereafter) can be used to determine one's perception of one's body size. This tool relies on presenting to respondents a series of pictures/drawings of distinct body sizes in an increasing sequence, from which respondents select the one they think best reflects their body size. 20 Silhouettes should be ethnically ambiguous enough to be used in different cultures, but still detailed enough to be relatable. A variety of silhouette tools have been developed and validated for different populations. [21][22][23][24] Pulvers and colleagues created culturally relevant body image silhouette showcards for African Americans (Fig. 1). 25 These silhouettes were validated in different populations of African-origin such as Seychelles, the Caribbean, and the USA. [25][26][27][28] While many studies have shown a good association between the silhouettes and adiposity measures, including for the prediction of obesity, most studies have only assessed their validity in a single population at a time. [21][22][23][24][25][26][27][28][29][30][31][32] Also, only a few studies have directly compared the associations of silhouette ranking between different populations with diverse ethnic backgrounds or with different population mean BMI levels. Assessing the validity of silhouettes to predict adiposity in different populations may be challenging, as one's assessment of body image relies on an individual's ability to appraise their current body size and correctly classify their weight relative to objective measurements, and also considering that cross-cultural evaluation should rely on studies that use the same methodology in different countries. 29,33−36
Therefore, our study aims to assess: 1) the relationship between Pulvers' silhouette showcard ranking and measured adiposity markers (BMI, WC, and WHR), 2) the performance of silhouette ranking to predict adiposity markers, particularly overweight and obesity based on elevated BMI, and 3) the performance of silhouette ranking to predict BMI, WC, and WHR, in three different African-origin populations representing different stages of social and economic development and different prevalence of obesity. 25
Study Populations and Ethics Approval
This study is a subset analysis from the METS-Microbiome study (R01DK111848) initiated in 2017, for which the protocol has been published. 37 The METS-Microbiome study continues yearly measurements of participants initially recruited for the Modeling the Epidemiological Transition Study (METS; R01-DK080763) in five African-origin populations spanning the epidemiologic transition, varying by the United Nations Human Development Index (HDI) 2010. 38,39 The current data were collected between 2018 and 2019 from participants in metropolitan Chicago, IL, USA (HDI: 0.92), the mixed urban/rural Seychelles islands, and Ghana. The study sample consisted of men and women aged 20-68 years old who were of African-origin, except for Seychelles, where both Black-African participants and participants of mixed racial ancestry were included. Approximately 66% of the whole sample identified as female.
Survey and Body Size Silhouette Showcards
The survey component of the METS-Microbiome study consisted of a face-to-face interview performed by centrally trained personnel, capturing participants' sociodemographic status, health-related behaviors, and medical history. Participants were also presented with the sex- and ethnicity-specific silhouette showcards created by Pulvers (Fig. 1). 25,27 This nine-image tool displayed sex-specific body sizes in increasing order, ranging from very thin to severely obese. To measure participants' perceived body size, participants were asked, "In the drawing, which figure best reflects how you think you look with regards to your body shape?".
Participants' responses were recorded on a scale from 1 (representing the thinnest silhouette) to 9 (representing the most obese silhouette).
Anthropometric and Adiposity Measurements
Participants completed a health examination, including measured height (m), weight (kg), and waist circumference (cm). Across all sites, standardized equipment and protocols were used, as previously described. 37 Body mass index (BMI, weight/height2, kg/m2) was calculated and classified as underweight (BMI < 18.5 kg/m2), normal weight (BMI 18.5-24.9 kg/m2), overweight (BMI 25.0-29.9 kg/m2) or obese (BMI ≥ 30 kg/m2). 40 A dichotomous waist circumference (cm) variable was used to classify the presence of central obesity as defined by the International Diabetes Federation (≥ 94 cm in men, ≥ 80 cm in women) for European or African-origin individuals. 11 WHR (waist in cm / height in cm) was calculated and dichotomized using a widely used cut-off point for normal (WHR ≤ 0.5) or increased central obesity (WHR > 0.5). 41
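As a worked illustration of these classifications, the snippet below derives the same dichotomous adiposity markers from raw measurements. It is a sketch in Python with assumed column names (sex, weight_kg, height_cm, waist_cm), not the study's analysis code:

```python
import pandas as pd

def add_adiposity_markers(df: pd.DataFrame) -> pd.DataFrame:
    """Compute BMI, WHR, and the dichotomous adiposity markers used in the study."""
    out = df.copy()
    height_m = out["height_cm"] / 100.0
    out["bmi"] = out["weight_kg"] / height_m ** 2          # kg/m2
    out["whr"] = out["waist_cm"] / out["height_cm"]        # waist-to-height ratio
    out["overweight_or_obese"] = out["bmi"] >= 25.0
    out["obese"] = out["bmi"] >= 30.0
    # IDF central-obesity cutoffs: >= 94 cm (men), >= 80 cm (women); "M"/"F" coding assumed.
    wc_cut = out["sex"].map({"M": 94.0, "F": 80.0})
    out["elevated_wc"] = out["waist_cm"] >= wc_cut
    out["elevated_whr"] = out["whr"] > 0.5
    return out
```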
Statistical Analyses
Participant characteristics were summarized using means and 95% confidence intervals. Proportions were calculated and presented as a percent (%) and 95% confidence intervals for categorical variables. Spearman's rank correlation coefficients were used to describe the associations between the self-reported perceived silhouette ranking and BMI, WC, and WHR.
Mean BMI and 95% confidence interval for each silhouette rank were determined by sex and by country. To assess whether the slopes of the relation between silhouette ranking and adiposity markers differed between countries and sexes, we estimated the linear regression coefficients (i.e., the change of the three adiposity markers corresponding to a change of 1 in silhouette ranking) by sex and country, with accompanying 95% confidence intervals.
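A minimal sketch of how such country- and sex-specific slopes and confidence intervals could be estimated is shown below. It uses Python's statsmodels as a stand-in for the study's Stata analysis, and the column names (country, sex, silhouette, bmi) are assumptions:

```python
import statsmodels.formula.api as smf

def slopes_by_group(df):
    """OLS slope of BMI per one-unit change in silhouette rank, with 95% CI, per country and sex."""
    rows = []
    for (country, sex), grp in df.groupby(["country", "sex"]):
        fit = smf.ols("bmi ~ silhouette", data=grp).fit()
        ci_low, ci_high = fit.conf_int().loc["silhouette"]
        rows.append({"country": country, "sex": sex,
                     "slope": fit.params["silhouette"],
                     "ci_low": ci_low, "ci_high": ci_high})
    return rows
```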
The self-reported silhouette showcards were assessed for accuracy in predicting widely used dichotomized adiposity markers, e.g., overweight and obesity (BMI ≥ 25 kg/m2) or obesity alone (BMI ≥ 30 kg/m2), elevated waist circumference (≥ 94 cm in men, ≥ 80 cm in women), and elevated waist-to-height ratio (WHR > 0.5), using sex- and country-specific receiver-operator curve (ROC) analysis. 26 We used the area under the curve (AUC, i.e., the c-statistic) and the sensitivity and specificity associated with different cut-offs of the silhouettes to predict these dichotomous adiposity categories.
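The ROC step can be illustrated as follows; the snippet computes the c-statistic from silhouette ranks and the sensitivity/specificity obtained when a participant is classified as positive whenever their rank meets a given cutoff. It is an illustrative Python sketch (variable names are assumptions), not the Stata analysis used in the study:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_and_operating_points(silhouette, positive):
    """silhouette: ranks 1-9; positive: boolean outcome (e.g., BMI >= 30 kg/m2)."""
    silhouette = np.asarray(silhouette)
    positive = np.asarray(positive, dtype=bool)
    auc = roc_auc_score(positive, silhouette)      # rank-based AUC / c-statistic
    points = {}
    for cutoff in range(1, 10):
        pred = silhouette >= cutoff
        sensitivity = (pred & positive).sum() / positive.sum()
        specificity = (~pred & ~positive).sum() / (~positive).sum()
        points[cutoff] = (sensitivity, specificity)
    return auc, points
```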
All statistical analyses were performed using STATA SE 12 (StataCorp, College Station, TX, USA).
Results
Table 1 shows the main characteristics of the 751 participants from the three countries. Mean age differed slightly across countries and was highest in men in the USA (47.1 years) and lowest in women in Ghana (41.4 years). Table 2 shows the Spearman's correlation coefficients of the relationship between the perceived self-reported silhouette rankings and BMI, WC, and WHR, by country and sex. These coefficients ranged between 0.71 and 0.80 in men and women in all countries, except in men in Ghana (0.55-0.58) (p < 0.001 for all coefficients).
Relationship between silhouette ranking and measured BMI
Table 3 shows a graded increase in mean BMI according to silhouette ranking by sex and country. The table also depicts the least-squares linear regression coefficients by sex and country between participants' measured BMI and the self-reported silhouettes. Regression coefficients (i.e., slopes of the regression lines) were higher in women compared to men in all three countries. Regression coefficients were significantly lower in Ghana than in the other two countries for both men and women. In the USA and Seychelles, an increase of 1 silhouette unit was associated with an increase of 3.05-3.75 BMI units (kg/m2), but only 1.15-2.06 BMI units in Ghana. Nearly identical trends were observed for WC and WHR (Supplementary tables 1 and 2). A robust regression analysis, which lessens the influence of outliers on the regression coefficient estimates, was also performed, and the estimates were almost identical to those in the least-squares linear regression.
Self-reported silhouette as a discriminator of overweight and/or obesity
Performance of silhouette ranking to BMI, waist circumference, and waist-to-height ratio in detecting adiposity
Table 5 shows the sex- and country-specific AUCs (i.e., c-statistics) of silhouette ranking to predict overweight or obesity status (BMI ≥ 25 kg/m2) or obesity alone (BMI ≥ 30 kg/m2). AUCs ranged between 0.79 and 0.92 in men and between 0.87 and 0.97 in women, with little difference by sex or country. Similar AUC values were found for silhouette ranking to predict elevated WC and WHR.
Discussion
This study builds on the foundation established by Pulvers and colleagues in creating the silhouette showcards and their subsequent validation in populations of African-origin. [25][26][27] Our data suggest that the Pulvers' silhouette showcards may be a useful tool for predicting objective body size measures such as BMI, WC, and WHR in different populations of mainly African-origin. However, the relationship between silhouettes and adiposity markers differed according to the country.
Overall, our data suggest that silhouettes may be a useful tool to predict actual adiposity measures, conditional on adequate calibration for a specific population.
BMI and other adiposity measures correlated strongly with silhouette ranking in all populations. However, the magnitude of the linear regression coefficients between silhouette ranking and actual adiposity markers differed between the three countries in this study. For example, an increase of 1 silhouette unit was associated with an increase of 3-4 BMI units (kg/m2) in the USA and Seychelles but only 1-2 BMI units in Ghana. This difference suggests varying perceptions of one's body shape according to mean population BMI. One may speculate that in the USA and Seychelles, where mean population BMI is high, individuals with adiposity are more inclined to view a large body shape as normal compared to populations (e.g., Ghana) where mean population BMI is lower.
Again, this altered view suggests that silhouette showcards need to be specific (i.e., calibrated) to different populations when used for predicting individuals' actual adiposity. From a prevention perspective, the differences in perceptions of one's body size across populations may suggest a larger tolerance for larger body shapes in populations with high adiposity levels. Overall, this underlines that silhouettes can have a role in assessing adiposity in populations when direct measurements cannot be made (i.e., for surveillance purposes, as evaluated in this study), but also in assessing people's perceptions and attitudes for weight control programs.
The relationship between silhouettes and adiposity markers can differ according to sex in the same population, as has also been examined in studies using different silhouette showcards. 21,22,26 It is therefore likely that the same linear regression models can be used in men and women for calibration of the association between silhouettes and BMI (or other adiposity markers) within the same population, as long as mean BMI in the population is similar in both sexes. Inversely, as our data in Ghana suggest, different predictive models may need to be developed in men and women when mean BMI markedly differs between men and women in the same population. Differences in the slopes of the associations between silhouettes and BMI (and other adiposity markers) may also partly depend on different sex-specific perceptions of body shape, and this question necessitates further studies.
The country- and sex-specific associations between silhouettes and adiposity markers were quite similar when using BMI, WC, and WHR. This relationship is not unexpected, as BMI, WC, and WHR inter-correlate strongly and similarly with each other, e.g., correlation coefficients of 0.77 to 0.96 in our study, which is consistent with correlations found in other studies. 42 However, the fact that these associations between silhouettes and BMI, WC, and WHR (and the associations between these adiposity markers and objectively measured fat mass) are still not extremely strong implies that silhouettes would not be a reliable tool to predict adiposity at the individual level (sensitivity and specificity are not optimal), but they can be useful when assessing adiposity levels (e.g., the prevalence of obesity, mean BMI) at a population level, conditional on appropriate calibration in a specific population. More generally, our data suggest that a subjective two-dimensional pictorial body size assessment (silhouette drawings) can be a useful tool for predicting a volumetric dimension (adiposity), at least at the population level.
This study's main strength was the use of identical methodology in the three countries, allowing us to make direct comparisons between populations of the same racial origin, and the fact that the three populations differed largely according to mean adiposity levels and socioeconomic development stages. However, the study also has limitations. First, although the study was designed to include participants of African-origin in all sites in order to control for ethnic differences, persons of mixed origin were also included in varying but small proportions, particularly in Seychelles. Second, the study included middle-aged adults, and the findings may not necessarily extend to older or younger individuals. Third, Pulvers' silhouette tool presents body size silhouettes from thinnest to heaviest, which could lead to reporting bias. Future studies should examine whether presenting the silhouettes in random order would yield different results. Fourth, survey administrators presented the silhouettes to the participants; further studies should assess whether results would differ if participants had assessed their silhouettes in the absence of assisting personnel. Finally, our analysis according to sex was limited because of the limited sample size.
Conclusions
This study supports the utility of Pulvers' silhouette showcards as a tool to predict adiposity in populations in settings where body size cannot be measured directly, conditional on adequate adjustment (i.e., calibration) of the associations between silhouette ranking and actual adiposity markers. Although this was not the aim of this study, our results also emphasize potential benefits of using silhouettes to assess individuals' perceptions and attitudes in the context of weight control programs at clinical or public health levels. 25
Figure caption: Proportion with normal weight, overweight, and obesity within each silhouette category in the USA, Seychelles, and Ghana. Notes: N weight: normal weight (BMI 18.5-24.9 kg/m2); Overweight (BMI 25.0-29.9 kg/m2); Obese (BMI ≥ 30 kg/m2); Sey: Seychelles.
Supplementary Files
The following supplementary file is associated with this preprint: SupplementalTables.docx.
Virtual singular braids and links
Virtual singular braids are generalizations of singular braids and virtual braids. We define the virtual singular braid monoid via generators and relations, and prove Alexander- and Markov-type theorems for virtual singular links. We also show that the virtual singular braid monoid has another presentation with fewer generators.
Introduction
J.W. Alexander [1] showed that any oriented classical link can be represented as the closure of a braid. Moreover, it is well-known that two braids have isotopic closures if and only if they are related by braid isotopy and a finite sequence of the so-called Markov's moves (see [13,16]). The first complete proof of this result was given by J. Birman [3]. Other proofs have been provided by D. Bennequin [2], H. Morton [14], P. Traczyk [15], and S. Lambropoulou [10].
Analogous theorems for the virtual braid group have been proven by L.H. Kauffman and S. Lambropoulou [9] using the so-called L-equivalence, and by S. Kamada [6] using Gauss data. Moreover, J. Birman [4] proved an Alexander-type theorem for the singular braid monoid and singular links, and B. Gemein [5] provided a Markov-type theorem for singular braids. Further, S. Lambropoulou [12] derived the L-move analogue for singular braids via L-move methods, recovering the result of Gemein. In this paper we consider oriented virtual singular links and prove Alexander- and Markov-type theorems for this class of links. These theorems are crucial in understanding the structure of virtual singular knots and links. We first define the virtual singular braid monoid using generators and relations. This definition reveals that the virtual singular braid monoid on n strands is an extension of the singular braid monoid on n strands by the symmetric group on n letters. Various braiding algorithms can be used to prove that the Alexander theorem extends to the class of virtual singular braids. For our purpose, we borrow the braiding algorithm described in [9] and extend it to include singular crossings. We then show that the L-moves used in [9] for the class of virtual braids and links can be extended to the class of virtual singular braids and links. In the presence of singular crossings and additional relations describing the virtual singular braid monoid, we need to introduce a new type of L-moves, namely 'threaded L v -moves' involving classical, singular, and virtual crossings. We state and prove first an L-move Markov-type theorem for virtual singular braids, and then use it to provide an algebraic Markov-type theorem for virtual singular braids.
During our study of this problem, we found that we were able to modify the arguments of [9] and use the same diagrammatic geometry to prove our main results. Consequently, several figures in this paper are similar or identical to certain figures in [9]. For example, if the reader examines Figures 6 through 11 in this paper and compares them with Figures 7, 9, 11, 12, and 13 in [9], they will see the precise analogy between our arguments and those of [9]. Motivated by L.H. Kauffman and S. Lambropoulou's work in [8, Section 3], we also prove that the virtual singular braid monoid on n strands admits a reduced presentation using fewer generators, namely three braiding elements together with the generators of the symmetric group on n letters.
Virtual singular links
A virtual singular link diagram is a decorated immersion of (finitely many) disjoint copies of S 1 into R 2 , with finitely many transverse double points, each of which carries over/under, singular, or virtual crossing information, as in Figure 1. The over/under markings are the classical crossings, which we will refer to as real crossings. Virtual crossings are represented by placing a small circle around the point where the two arcs meet transversely. A filled-in circle is used to represent a singular crossing. We assume that virtual singular link diagrams are the same if they are isotopic in R 2 .
Note that the moves involving virtual crossings can be considered as special cases of the detour move depicted in Figure 3 ([7, 8, 9]). This move is the representation of the principle that the virtual crossings are not really there, but are rather byproducts of the projection. To understand the detour move, suppose an arc is free of real (classical) and singular crossings, but may contain a consecutive sequence of virtual crossings. Then that arc can be arbitrarily moved, keeping its endpoints fixed, to any new location and placed transversally to the rest of the diagram, adding virtual crossings whenever these intersections occur. (In Figure 3, the grey box represents an arbitrary virtual singular tangle diagram; a braid representation of the detour move is given in Figure 28.)
Figure 3. The detour move
Conversely, the detour move can be obtained by a finite sequence of the moves shown in Figure 2 that involve virtual crossings. Consequently, the virtual singular equivalence is generated by the Reidemeister-type moves for singular link diagrams (that is, the classical Reidemeister moves together with the moves RS1 and RS3) and the detour move.
When working with equivalent virtual singular link diagrams, it is important to avoid the moves depicted in Figure 4. Although these moves are similar to some of the extended virtual Reidemeister moves, the diagrams on the two sides of a forbidden move do not represent equivalent virtual singular links. For this reason, we refer to these as the forbidden moves for virtual singular link diagrams. Recall that a singular link is an immersion of a disjoint union of circles in three-dimensional space which has finitely many singularities (namely singular crossings) that are all transverse double points. Equivalently, a singular link is an embedding in three-dimensional space of a 4-valent graph with rigid vertices (where these vertices are the singular crossings). Such embeddings are also called rigid vertex knotted graphs.
Similar to the case of virtual knot theory, there is a useful topological interpretation for virtual singular knot theory in terms of embeddings of singular links (or equivalently, of rigid vertex knotted graphs) in thickened surfaces. For this, interpret each virtual crossing as a detour of one of the arcs in the crossings through a 1-handle that has been attached to the 2-sphere of the original diagram. We obtain an embedding of a collection of immersed circles into a thickened surface S g × I, where I is the unit interval, S g is a compact oriented surface of genus g, and g is the number of virtual crossings in the original diagram. Then singular knot theory in S g × I is represented by diagrams drawn on S g taken up to the Reidemeister-type moves for singular link diagrams transferred to diagrams on S g . Recall that the Reidemeister-type moves for singular link diagrams contain the classical Reidemeister moves R1, R2 and R3 together with the moves RS1 and RS3 shown in Figure 2.
Alexander- and Markov-type theorems
A virtual singular braid on n strands is a braid in the classical sense, which may contain real, singular, and virtual crossings as 'interactions' among the n strands of the braid. By connecting the top endpoints with the corresponding bottom endpoints of a virtual singular braid using parallel arcs without introducing new crossings we obtain a virtual singular link diagram, called the closure of the braid.
Similar to the case of classical braids, virtual singular braids are composed using vertical concatenation. For two n-stranded virtual singular braids β and β′, the braid ββ′ is obtained by placing β on top of β′ and connecting their endpoints. The set of isotopy classes of virtual singular braids on n strands forms a monoid, which we denote by V SB n . The monoid operation is the composition of braids, and the identity element, denoted by 1 n , is the braid with n vertical strands.
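To make the monoid structure concrete, the toy sketch below (our own illustration, not part of the paper) models a virtual singular braid naively as a word in formal generator symbols and implements composition as word concatenation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VSBraid:
    n: int        # number of strands
    word: tuple   # e.g. ("s1", "s2^-1", "v1", "t2") for sigma_1 sigma_2^{-1} v_1 tau_2

    def __mul__(self, other: "VSBraid") -> "VSBraid":
        # Vertical concatenation: place self on top of other.
        assert self.n == other.n, "braids must have the same number of strands"
        return VSBraid(self.n, self.word + other.word)

identity = VSBraid(3, ())                    # the braid 1_3: three vertical strands
beta = VSBraid(3, ("s1", "t2"))              # sigma_1 followed by tau_2
assert (beta * identity).word == beta.word   # 1_n acts as the identity element
```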
3.1. The virtual singular braid monoid. The virtual singular braid monoid on n strands, V SB n , is the monoid generated by the virtual singular braids σ i , σ i −1 , v i and τ i , for 1 ≤ i ≤ n − 1, depicted below, and subject to a set of defining relations. These relations taken together define the isotopies for virtual singular braids. Each relation in V SB n is a braided version of a virtual singular link isotopy. That is, two equivalent virtual singular braids have isotopic closures. Note that the type 1 moves R1 and V 1 are not reflected in the defining relations for V SB n , because these moves cannot be represented using braids. Note also that only the generators τ i are not invertible in V SB n .
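For the reader's orientation, a standard set of defining relations for V SB n of this kind, consistent with the relations invoked later in the paper (commutation at distance, the braid and virtual braid relations, the mixed detour relations, and the mixed singular relations), can be written as below. This is a reconstruction for convenience and is not the authors' verbatim list:

```latex
% Reconstructed defining relations for the virtual singular braid monoid VSB_n
% (all subscripts are assumed to lie in {1,...,n-1}).
\begin{align*}
&\sigma_i\sigma_i^{-1}=\sigma_i^{-1}\sigma_i=1_n, \qquad v_i^2=1_n,\\
&\sigma_i\sigma_j=\sigma_j\sigma_i,\quad v_iv_j=v_jv_i,\quad \tau_i\tau_j=\tau_j\tau_i,\quad
 \sigma_i\tau_j=\tau_j\sigma_i,\quad v_i\sigma_j=\sigma_jv_i,\quad v_i\tau_j=\tau_jv_i,
 \quad\text{for } |i-j|>1,\\
&\sigma_i\sigma_{i+1}\sigma_i=\sigma_{i+1}\sigma_i\sigma_{i+1},\qquad
 v_iv_{i+1}v_i=v_{i+1}v_iv_{i+1},\qquad
 v_i\sigma_{i+1}v_i=v_{i+1}\sigma_iv_{i+1},\qquad
 v_i\tau_{i+1}v_i=v_{i+1}\tau_iv_{i+1},\\
&\sigma_i\tau_i=\tau_i\sigma_i,\qquad
 \sigma_i\sigma_{i+1}\tau_i=\tau_{i+1}\sigma_i\sigma_{i+1},\qquad
 \sigma_{i+1}\sigma_i\tau_{i+1}=\tau_i\sigma_{i+1}\sigma_i.
\end{align*}
```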
3.2. A braiding algorithm. In this section we present a method for transforming any virtual singular link diagram into the closure of a virtual singular braid. For that, we borrow the braiding algorithm introduced in [9] and extend it to our set-up where we add singular crossings, to prove a theorem for virtual singular links analogous to the Alexander theorem for classical braids and links.
We will work in the piecewise linear category, which gives rise to the operation of subdivision of an arc (in a virtual singular link diagram) into smaller arcs, by marking it with a point. Note that local minima and maxima are subdivision points of a diagram.
Definition 2. We fix a height function in the plane of the diagram, and use the following conventions, necessary for our braiding algorithm. First, it is understood that only one crossing (real, singular or virtual) can occur at each level (with respect to the height function) in a virtual singular link diagram. Likewise, we arrange our diagram so that no crossings or subdivision points are vertically aligned, so as to avoid triple points when new pairs of braid strands are created with the same endpoints (this will be made clearer later as we explain our braiding algorithm). In addition, a crossing must not coincide with a local maximum or minimum. Lastly, a diagram should not have any horizontal arcs (it will only have up-arcs and down-arcs). If a virtual singular link diagram is arranged so that it satisfies each of these conventions, we say that the diagram is in general position.
It is easy to see that by applying small planar shifts, if necessary, any virtual singular link diagram can be transformed into a diagram in general position.
When converting a virtual singular link diagram to a diagram in general position, we make certain choices which result in local shifts (which are called direction sensitive moves in [9]) of crossings and subdivision points with respect to the horizontal or vertical direction. The swing moves given in Figure 5 are the most interesting direction sensitive moves; these moves are necessary so that we avoid the coincidence between a crossing (real, singular, or virtual) and a maximum or minimum in a diagram.
Figure 5. The swing moves
Two isotopic virtual singular link diagrams in general position differ by the extended virtual Reidemeister moves (provided in Figure 2) and the direction sensitive moves. For the remainder of this section, we will work with virtual singular link diagrams in general position.
We describe now the braiding algorithm for transforming an oriented virtual singular link diagram (assumed in general position) into the closure of a virtual singular braid.
After placing the subdivision points using the conventions explained above, we apply the braiding algorithm locally, by eliminating each up-arc in the diagram (which can be either an up-arc in a crossing or a free up-arc), one at a time.
We first braid the crossings containing one or two up-arcs. If a crossing has no up-arcs we leave it as it is. We place each crossing that needs to be braided in a narrow rectangular box, called the braiding box, with the arcs of the crossing serving as diagonals of the box. A braiding box would have to be sufficiently narrow, so that the region it defines does not intersect the braiding box of another crossing. We braid each crossing, one at a time, according to the braiding chart given in Figure 6 (see also [9,Figure 7]). Any new crossing created between the new braid strands and the rest of the diagram outside the braiding box will be assumed to be virtual; this is indicated abstractly by putting virtual crossings at the ends of the new pair of braid strands.
Note that, locally speaking, for each crossing that was braided, connecting the corresponding pair of braid strands (outside of the resulting diagram) yields a virtual singular tangle diagram (with four endpoints) which is isotopic to the starting one (the tangle represented by the crossing in the braiding box). The free up-arcs are arcs joining braiding boxes. Once all crossings have been braided, we braid each of the free up-arcs using the basic braiding move depicted in Figure 7 (see [9, Figure 9]). During this move, we first cut a free up-arc and then extend the upper end upward and the lower end downward, such that the new pair of strands are vertically aligned and such that they cross only virtually any other arcs in the original diagram (which is represented by an abstract virtual crossing on the ends of the new braid strands), as shown in Figure 7. As in the case of braiding a crossing, by connecting the pair of the new braid strands outside of the original diagram results in a local virtual singular tangle diagram (with two endpoints) which is isotopic to the local tangle before the braiding.
Figure 7. A basic braiding move
The braiding algorithm given above will braid any virtual singular link diagram, creating a virtual singular braid whose closure is isotopic to the original diagram. Indeed, for all braiding moves, even for those that do not contain singular crossings, it is important to observe that there may be singular crossings in the rest of the braid and that, upon closure, these are detoured freely by the virtual crossings of the new braid strands. Therefore, we have proved the following statement.
Theorem 1 (Alexander-type theorem for virtual singular links). Every oriented virtual singular link can be represented as the closure of a virtual singular braid.
3.3. L-moves and Markov-type theorems for virtual singular braids. Two virtual singular braids may have isotopic closures, and thus we would like to describe virtual singular braids that result in isotopic virtual singular link diagrams via the closure operation. Therefore, we are interested in Markov-type theorems for virtual singular braids and links. For this purpose, we need to introduce the singular L v -moves for virtual singular braids. These moves enlarge the set of the L v -moves for virtual braids, described in [9]. Here, the subscript v stands for 'virtual'.
We remind the reader that the classical L-moves were introduced by S. Lambropoulou in [10] to provide a one-move Markov-type theorem for classical braids and links. We also refer the reader to [11], where the L-move equivalence for classical braids is established.
We recall from [9] that a basic L v -move involves cutting a braid strand and pulling the upper endpoint of the cut downward and the lower endpoint upward, and in doing so, creating a pair of new braid strands which cross virtually all of the other strands in the diagram; this is abstractly denoted by a pair of virtual crossings at the points where the two new braid strands cross the box in which the L v -move is applied (see Figure 8).
Note that an L v -move may introduce a crossing, which may be real or virtual, as shown in Figure 9 (see [9, Figure 11]). To stress the existence of the real or virtual crossing, these moves are called the real L v -move or virtual L v -move, respectively (abbreviated to rL v - or vL v -move), and there are two versions of them, namely left or right (depending on whether the new crossing is on the left or on the right of the arc that was cut during the move). Figure 9 displays right virtual and left real L v -moves. Note that by connecting the pair of the newly created braid strands (outside of the diagram) we obtain a tangle diagram which is isotopic to the tangle diagram we started with (the detoured loop contracts to a kink which involves either a virtual crossing or a real crossing). This is explained in Figure 10.
A threaded L v -move is an L v -move with a virtual crossing in which, before stretching the arc of the kink, we perform a classical type 2 Reidemeister move using another strand of the braid, called the thread. Depending on whether we pull the kink over or under the thread, we have an over-threaded L v -move or an under-threaded L v -move; both of these moves come with left and right versions. (We refer the reader to the analogous definition in [9, Definition 4].) Figure 11 shows under-threaded L v -moves, both left and right versions. Due to the forbidden moves, a threaded L v -move cannot be simplified on the braid level; that is, the move does not involve isotopic braids but isotopic closures of braids.
In addition, we can create a multi-threaded L v -move by performing two or more classical type 2 Reidemeister moves before pulling open the arc of the kink. (See [9, Figure 14].) When singular crossings are present, there is another type of threaded move in which the thread 'crosses' the detoured loop in a pair of a singular crossing and a real crossing. We call such a move an rs-threaded L v -move; this move also comes in two variants, namely left and right. Figure 12 exemplifies such a move, with only one of the two versions for the real crossing involved in the move. An rs-threaded L v -move cannot be applied (simplified) in the braid. However, it is not hard to see that the closures of the two sides of an rs-threaded L v -move are isotopic diagrams (via an RS1 move), as explained in Figure 13.
Finally, we define the notions of conjugation and commuting in the virtual singular braid monoid, V SB n . Given a virtual singular braid ω ∈ V SB n , we say that the braids σ i ±1 ω σ i ∓1 and ω are related by real conjugation, that the braids v i ω v i and ω are related by virtual conjugation, and that the braids ωτ i and τ i ω are related by singular commuting.
Definition 4. We say that two virtual singular braids are singular L v -equivalent if they differ by virtual singular braid isotopy and a finite sequence of the following moves or their inverses:
(i) Real conjugation and singular commuting
(ii) Right virtual and right real L v -moves
(iii) Left and right under-threaded L v -moves
(iv) Left and right rs-threaded L v -moves.
We remark that the singular L v -equivalence on virtual singular braids contains as a subset the L-equivalence on virtual braids defined in [9,Definition 6]. We remind the reader that the L-equivalence for virtual braids comprises the real conjugation, the right real and right virtual L v -moves, the left and right under-threaded L v -moves, and the virtual braid isotopy.
It was proved in [9] that the virtual conjugation, basic L v -moves, left real and left virtual L v -moves, over-threaded L v -moves, and multi-threaded L v -moves follow from the L-equivalence. Therefore, these moves also follow from the singular L v -equivalence, and thus we do not need to include them in our L-move Markov-type theorem for virtual singular braids, which we are now ready to state and prove.
Theorem 2 (L-move Markov-type theorem for virtual singular braids). Two virtual singular braids have isotopic closures if and only if they are singular L v -equivalent.
Proof. It is easy to see that singular L v -equivalent virtual singular braids have isotopic closures.
We will now work on the converse. First, we need to show that different choices made in the braiding process result in braids that are singular L v -equivalent. The choices made during the braiding process are the subdivision points and the order of the braiding moves. The subdivision points are needed for marking the braiding boxes and the up-arcs. Using a similar argument as in [9, Corollary 2], it is not hard to see that given two subdivisions of a virtual singular diagram, the resulting virtual singular braids obtained by our braiding algorithm are singular L v -equivalent. Due to the narrow condition for the braiding boxes, the braidings of the crossings are local and independent, so the order in which we braid the crossings has no effect on the final output. Moreover, the order in which we braid the free up-arcs is also irrelevant. Due to the braid detour moves, we can in fact braid first the free up-arcs (or just some of them) and then braid the crossings (and any remaining free up-arcs).
Second, we need to show that different choices in bringing a virtual singular diagram to general position result in braids (obtained by our braiding algorithm) that are singular L v -equivalent. Using a similar argument as in [9, Lemma 7], it is easily seen that planar isotopy moves applied away from any of the crossings in a virtual singular link diagram result in braids that are related by braid isotopy and the basic L v -move. Indeed, the addition of singular crossings in the setting does not change the situation. It was also shown in [9, Lemma 7] that applying the braiding algorithm to diagrams that differ by a swing move involving a virtual crossing or a real crossing results in braids that are L-equivalent. Therefore, for our case of virtual singular link diagrams, it remains to verify the swing moves containing a singular crossing. These swing moves can be verified in the same manner as the swing moves involving a real crossing, by merely replacing the real crossing in [9, Figures 26, 27] with a singular crossing.
Finally, we need to show that two virtual singular braids with isotopic closures are related by singular L v -equivalence. For that, we need to prove that virtual singular link diagrams (in general position) that differ by the extended virtual Reidemeister moves (recall Figure 2) correspond to closures of virtual singular braids that are singular L v -equivalent. By [9, Theorem 2], we know that the isotopy moves involving only real and virtual crossings follow from the L-equivalence for virtual braids (and thus from singular L v -equivalence). Therefore, we only need to consider the extended virtual Reidemeister moves involving singular crossings, and these moves need to be considered with any given orientation of the strands.
Note that if all strands involved are oriented downward, the statement follows directly from the relations defined on V SB n . We consider all cases of each isotopy move involving singular crossings. We consider diagrams that are identical, except in a small region where they differ as shown in the figures; that is, the isotopy move is applied in that small region.
We start with the move RS1 and allow one strand to be oriented upward. If we apply the braiding algorithm to the diagrams on both sides of the move (followed by braid isotopy), the resulting diagrams differ by a left rs-threaded L v -move, as explained in Figure 14. Note that if we reverse the orientations of the two strands in the move, the corresponding braids differ by a right rs-threaded L v -move, as explained in Figure 15 (besides reversing the orientations of the two strands, we also changed the sign of the classical crossings from positive to negative, for more variety).
Figure 15. RS1 move - case 2
Figure 16 shows that if we take the isotopy move RS1 with both strands oriented upward and braid the diagrams on each side of the move, we find that the two corresponding braids are related by a series of conjugations and the braid-type RS1 move.
Observe that in Figure 15 we used virtual conjugation, which is not a move of the singular L v -equivalence. However, recall that the virtual conjugation follows from the L-equivalence and hence from the singular L v -equivalence (see [9, Figures 17, 18]). We will now take a slightly different approach to prove that the moves RS3 and V R3 hold with any possible orientations on the strands. Again, the case where all strands are oriented downward follow from braid equivalence. We start by considering the RS3 move with one strand oriented upward and apply an R2 move to create locally three downward oriented strands. After a couple of RS1 moves, we apply an RS3 move in braid form (see Figure 17). Similarly, if we start with two strands oriented upward, we apply an R2 move to reduce to the case with one strand oriented upward, as exemplified in Figure 18.
In Figure 19 we consider an RS3 move with all three strands oriented upward and show that it can be reduced to the previous case with two strands oriented upward. The proof for the V R3 moves with various orientations on the strands is done similarly as for the RS3 moves, and therefore they are omitted to avoid repetition. This completes the proof.
In the following theorem we will use ω to represent an arbitrary virtual singular braid in V SB n . We also regard ω as an element of V SB n+1 by adding a strand on the right of ω. (We will not use an extra notation when we regard ω ∈ V SB n as an element in V SB n+1 .) Using this operation (of adding a single identity strand on the right of a braid) the monoid V SB n embeds in V SB n+1 , and we define V SB ∞ := ∪ ∞ n=1 V SB n . In what follows, we also allow adding an identity strand at the left of ω ∈ V SB n and we denote by i(ω) the braid in V SB n+1 obtained in this way.
Theorem 3 (Algebraic Markov-type theorem for virtual singular braids). Two virtual singular braids have isotopic closures if and only if they differ by a finite sequence of braid relations in V SB ∞ together with the following moves or their inverses:
(i) Real and virtual conjugation, and singular commuting (see Figure 20)
(ii) Right real and right virtual stabilization (see Figure 21): ωv n ∼ ω ∼ ωσ n ±1
(iii) Right and left algebraic under-threading (see Figure 22)
(iv) Right and left algebraic rs-threading (see Figure 23)
Proof. It is easily checked that the closures of two virtual singular braids that are related by virtual singular braid isotopy and a finite sequence of the moves listed in Theorem 3 represent isotopic virtual singular links. For the converse, let β 1 and β 2 be virtual singular braids whose closures represent isotopic virtual singular links. By Theorem 2, we know that β 1 and β 2 are singular L v -equivalent. Therefore, it suffices to show that the four types of moves in Theorem 3 follow from the singular L v -equivalence. Clearly, the real conjugation, singular commuting, and virtual conjugation (the moves in (i)) follow from the singular L v -equivalence, since the first two are part of it and the virtual conjugation follows from it, as noted earlier.
Right real and right virtual stabilization (the moves in (ii)) follow from the right real and right virtual L v -moves, respectively, plus braid detouring and virtual conjugation in V SB ∞ . Figure 24 explains the case of the right virtual stabilization; the right real stabilization follows similarly.
Figure 24. Right virtual stabilization is obtained from the right vL v -move, braid detour, and virtual conjugation
We note that in the last step of Figure 24, the virtual conjugation is applied in the smaller braid that contains the threads which cross virtually the pair of braid strands created during the right vL v -move.
The right and left algebraic under-threading (the moves in (iii)) follow from the right and, respectively, the left under-threading L v -moves, braid detour, and virtual conjugation. Figure 25 treats the left algebraic under-threading; the right algebraic under-threading is verified in a similar fashion, and therefore is omitted here.
The right and left algebraic rs-threading (the moves in (iv)) follow similarly. In Figure 26 we show that the right algebraic rs-threading follow from the right rs-threaded L v -move, braid detour, and virtual conjugation (the left algebraic rs-threading follows similarly, only that it uses instead the left rs-threaded L v -move).
This completes the proof.
Figure 26. Right algebraic rs-threading follows from the right rs-threaded L v -move, braid detour, and virtual conjugation
A reduced presentation for V SB n
We now give a reduced presentation for the virtual singular braid monoid on n strands, V SB n . This presentation uses fewer generators, which are listed below:
{σ 1 ±1 , τ 1 , v 1 , . . . , v n−1 }, and assumes the following relations, which we refer to as the defining relations (4.1) and (4.2), where 1 ≤ i ≤ n − 2. As shown in Figure 27, the defining relations are the braid-form versions of the detour move. In other words, we detour the real crossings σ i+1 ±1 and the singular crossings τ i+1 to the left side of the braid using the strands 1, 2, . . . , i.
Any portion of a given virtual singular braid can be detoured to the front of the braid (as shown in Figure 28), where all of the new crossings that are created are virtual. For this reason, in the reduced presentation for V SB n , the relations involving real crossings or singular crossings will be imposed to occur between the first strands of a braid. We remark that the relations v i σ j v i = v j σ i v j and v i τ j v i = v j τ i v j for |i − j| = 1 are not needed in the reduced presentation for V SB n , since they were implicitly used in the defining relations (4.1) and (4.2).
Theorem 4. The virtual singular braid monoid V SB n has a reduced presentation with generators {σ 1 ±1 , τ 1 , v 1 , . . . , v n−1 }, subject to the relations referenced below by their equation numbers.
Note that in the reduced presentation for V SB n we kept all of the original virtual relations (relations involving only virtual crossings/generators). In addition, we have kept the relations involving σ i or τ i that can be represented on the left side of the braid. For convenience, we will call these the base cases of the original relations. For example, the base case for the commuting relations σ i σ j = σ j σ i is the relation σ 1 σ 3 = σ 3 σ 1 , which by the defining relation (4.1) is equivalent to the relation (4.10). Similarly, from the commuting relations τ i σ j = σ j τ i and τ i τ j = τ j τ i (with |i − j| > 1) we kept only the relations τ 1 σ 3 = σ 3 τ 1 and τ 1 τ 3 = τ 3 τ 1 , which are represented by the relations (4.11) and (4.12), respectively. We will show that all of the other commuting relations follow from their corresponding base case relations and the virtual relations.
In the statements to follow we show that each of the relations in the original presentation for V SB n holds; therefore, we prove Theorem 4. The first lemma deals with preparatory identities (this statement was given in [8], and thus we only provide a sketch of its proof). In each of our proofs below we will underline the portion of the relation that we will work with next.
Lemma 1. The following equality holds for all |i − j| ≥ 2:
Proof. Let |i − j| ≥ 2. Then, we have:
Lemma 2. The generators v j commute with σ i and with τ i for all |i − j| > 1; that is, σ i v j = v j σ i and τ i v j = v j τ i .
Proof. The first set of relations was proved in [8, Lemma 1]. We provide here a similar proof for the second set of relations only. By the defining relation (4.2), we have: Since |i − j| > 1, either j ≥ i + 2 or j ≤ i − 2. If j ≥ i + 2, then in the above expression v j commutes with all generators, thus τ i v j = v j τ i . If j ≤ i − 2 we have: Hence, the statement holds.
Lemma 3. The commuting braid relations σ i σ j = σ j σ i , τ i τ j = τ j τ i , and τ j σ i = σ i τ j hold for all |i − j| > 1.
Proof. The same type of proof can be used to show that the given three sets of commuting braid relations hold. For brevity, we will prove here only the last set of relations. Our proof is somewhat different than the one given in [8, Lemma 3] for σ i σ j = σ j σ i . Without loss of generality, assume j > i; specifically, suppose that j ≥ i + 2. Then, In either case, we have the following: Then, we have: Recall now the relation (4.11): Multiplying this relation on the left and on the right by v 2 v 3 v 1 v 2 and using that v i 2 = 1 for i = 1, 2, 3 and that v 1 v 3 = v 3 v 1 , we obtain: Returning to the computations above and using the latter equality to replace the underlined product, we arrive at: On the other hand, using the relations (4.1), (4.2), (4.4), (4.7), and (4.13), one can show the following equality: Comparing the two results, we obtain the desired equality: τ j σ i = σ i τ j .
Lemma 4. The braid relations σ i σ j σ i = σ j σ i σ j hold for all |i − j| = 1.
Proof. We will show that the relation holds for j = i + 1 and i ≥ 2 (recall that the base case relation corresponds to i = 1 and j = 2, which is represented by the relation (4.8)).
Starting with the left hand side of the desired identity and using the relations (4.1), (4.5), and (4.13), we obtain (see the beginning of the proof for Lemma 2 in [8]): For the right hand side of the identity, we have: Making use of Lemmas 2 and 3, and the relations (4.4) and (4.7), we arrive at: Therefore, the relation holds for all i > 1, which completes the proof.
For a somewhat different proof of the previous lemma (as it applies to the virtual braid group), we refer the reader to [8, Lemma 2].
Lemma 5. The relations σ j σ i τ j = τ i σ j σ i hold for all |i − j| = 1.
Proof. We will show that the relation holds for the case j = i + 1 and i ≥ 2 (the case i = 1 and j = 2 is the base case relation, represented by the relation (4.9)). The proof for the other case, namely when j = i − 1 and i ≥ 3, follows similarly. For the right hand side of the identity, we have: Now we will consider the left hand side of the identity.
Employing the commuting relations in Lemma 2 and those in Equations (4.4) and (4.7), we obtain: Therefore, σ i+1 σ i τ i+1 = τ i σ i+1 σ i for all i ≥ 2.
Proof. It is easy to see that these relations hold.
Lemma 7. The braid relations τ i σ i = σ i τ i hold for all 1 ≤ i ≤ n − 1.
Proof. Let i > 1. We first use the defining relations (4.1) and (4.2) followed by the virtual relations (4.5) to obtain: Using similar computations, we arrive at: But since τ 1 σ 1 = σ 1 τ 1 , the statement follows.
The last lemma deals with the relations v i σ j v i = v j σ i v j and v i τ j v i = v j τ i v j for |i − j| = 1.
Proof. It should be clear that these relations hold, since they were used in the defining relations (4.1) and (4.2). However, we provide a proof for the second set of relations for j = i + 1 and i ≥ 1 (the first set of relations follows similarly).
This completes the proof.
Concluding remarks. Virtual singular braids have a monoid structure that can be described by generators and relations. Specifically, in this paper, we introduced the virtual singular braid monoid as the algebraic counterpart of the diagrammatic theory of virtual singular knots and links. The virtual singular braid monoid is an extension of the singular braid monoid by the symmetric group. We have proved an Alexander-type theorem for virtual singular knots and links by providing a braiding algorithm that converts any oriented virtual singular knot or link to a virtual singular braid. We also provided two Markov-type theorems for virtual singular links and braids: (1) using an approach involving L-type moves and (2) the classical algebraic approach.
The braiding algorithm described in this paper employs the L-moves for oriented virtual links introduced by Kauffman and Lambropoulou in [9], which in turn are adaptations of prior work of Lambropoulou [10] on the case of oriented classical links. In particular, in this paper we introduced the singular L v -equivalence for virtual singular braids as an extension of the L-equivalence for virtual braids introduced in [9], to include L-type moves involving singular crossings. We first used singular L v -equivalence to prove an L-move Markov-type theorem for virtual singular braids, and then turned this result into an algebraic Markov-type theorem for virtual singular braids of any number of strands. Finally, we derived a reduced presentation for the virtual singular braid monoid using fewer generators. The reduced presentation is based on the fact that the virtual singular braid monoid on n strands is generated by three braiding elements plus the generators of the symmetric group on n letters.
Bacillus thuringiensis CbpA is a collagen binding cell surface protein under c-di-GMP control
Cyclic diguanylate (c-di-GMP) signalling affects several cellular processes in Bacillus cereus group bacteria including biofilm formation and motility, and CdgF was previously identified as a diguanylate cyclase promoting biofilm formation in B. thuringiensis. C-di-GMP can exert its function as a second messenger via riboswitch binding, and a functional c-di-GMP-responsive riboswitch has been found upstream of cbpA in various B. cereus group strains. Protein signature recognition predicted CbpA to be a cell wall-anchored surface protein with a fibrinogen or collagen binding domain. The aim of this study was to identify the binding ligand of CbpA and the function of CbpA in cellular processes that are part of the B. cereus group c-di-GMP regulatory network. By global gene expression profiling cbpA was found to be down-regulated in a cdgF deletion mutant, and cbpA exhibited maximum expression in early exponential growth. Contrary to the wild type, a ΔcbpA deletion mutant showed no binding to collagen in a cell adhesion assay, while a CbpA overexpression strain exhibited slightly increased collagen binding compared to the control. For both fibrinogen and fibronectin there was however no change in binding activity compared to controls, and CbpA did not appear to contribute to binding to abiotic surfaces (polystyrene, glass, steel). Also, the CbpA overexpression strain appeared to be less motile and showed a decrease in biofilm formation compared to the control. This study provides the first experimental proof that the binding ligand of the c-di-GMP regulated adhesin CbpA is collagen.
Introduction
The Bacillus cereus group consists of at least seven different species of bacteria, and an additional fourteen species have recently been proposed, on the basis of a wide set of different criteria (Peak et al., 2007;Jung et al., 2010;Jung et al., 2011;Jimenez et al., 2013;Miller et al., 2016;Liu et al., 2017). The group encompasses Bacillus anthracis, an obligate pathogen of herbivores which also causes life-threatening disease in humans, most often through contact with infected animals or animal products -and also includes Bacillus cereus, an opportunistic human pathogen responsible for emetic or diarrhoeal syndromes (strain dependent), but also a range of other diseases in immunocompromised individuals (reviewed in Drobniewski, 1993;Bottone, 2010). Bacillus thuringiensis is phylogenetically intermixed with B. cereus, and is classified through its entomopathogenicity, more specifically its ability to kill insect larvae, through the synthesis of insecticidal protein toxins which are often encoded on extrachromosomal elements . B. thuringiensis, although primarily having been regarded a pathogen of insects, carries the same set of chromosomal virulence factors as B. cereus, several of which are known to be linked to virulence also during human infection (Ngamwongsatit et al., 2008). B. cereus and B. thuringiensis are ubiquitously present in the environment, in soil, air and water, and, analogous to B. anthracis which has a life cycle involving fulminant infection of herbivores and the soil/vegetation niche, have a complex ecological cycle which may involve cycling between the gut of insect larvae and the soil/plant environment (Jensen et al., 2003;Saile and Koehler, 2006;Ceuppens et al., 2013;Vidal-Quist et al., 2013).
The complex life styles of these bacteria require an adaptive mode of global gene regulation, enabling the transition between different hosts and enduring highly diverse environmental niches to which the bacteria may be exposed when being shed from a host. B. thuringiensis harbours key transcriptional regulators involved in governing the infectious cycle in the insect -PlcR constituting a key activator of virulence genes promoting spread of the bacterium within the insect during active infection (Gohar et al., 2008) and NprR governing necrotrophism, the survival of the bacterium following the death of the insect host (Dubois et al., 2012).
B. cereus group bacteria, similar to Bacillus subtilis, have the capability of forming biofilms, a process regulated by the transcriptional regulator SinR, a repressor of biofilm matrix genes (Pflughoeft et al., 2011; Fagerlund et al., 2014). Biofilm formation is also controlled by the second messenger cyclic di-GMP (c-di-GMP), which has been shown in a range of bacterial species to be involved in regulating the switch between a motile and sessile lifestyle (e.g. reviewed in Jenal et al., 2017). Most B. cereus group strains carry a complement of diguanylate cyclases and phosphodiesterases, which are enzymes involved in c-di-GMP synthesis and breakdown, respectively (Fagerlund et al., 2016). One of the highly conserved c-di-GMP metabolism genes is cdgF, which encodes a dual-function protein capable of acting as a diguanylate cyclase (GGDEF domain) or a phosphodiesterase (EAL domain), depending on its redox status (Fagerlund et al., 2016).
Adhesion, whether to biotic or abiotic surfaces, constitutes the initial step of biofilm formation, and is also a key step during the infectious process for many human and animal pathogens, enabling colonisation of the host. Collagen and fibrinogen may constitute ligands for bacterial surface adhesins governing these initial steps of the infectious process. Here we identify, by global transcriptional profiling of a cdgF deletion mutant (Fagerlund et al., 2016), a putative cell surface adhesion protein in B. thuringiensis 407 (cry-) which is under c-di-GMP transcriptional control and linked to CdgF function (Lee et al., 2010;Tang et al., 2016). We perform a thorough characterisation of the protein and establish the first binding ligand identification for a c-di-GMP effector protein in the B. cereus group, constituting the first step towards unravelling a c-di-GMP responsive network for a B. cereus group bacterium.
Bioinformatics analysis
At the time of analysis, 144 fully sequenced and closed genomes of members of the B. cereus group were available from the NCBI (https://www.ncbi.nlm.nih.gov/). Their genomes and proteomes were searched with the cbpA/CbpA sequence from B. thuringiensis 407 using BlastN and BlastP. InterProScan (Jones et al., 2014) was utilized for prediction of protein domains, and Illustrator for Biological Sequences (IBS) was used to illustrate protein domain structure (Liu et al., 2015). The web application Riboswitch Scanner (Singh et al., 2009; Mukherjee and Sengupta, 2016) (http://service.iiserkol.ac.in/~riboscan/application.html) was used to explore DNA sequences for class I c-di-GMP riboswitches.
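As a rough illustration of how such a genome screen can be scripted, the sketch below runs command-line NCBI BLAST+ (blastn) against a directory of genome FASTA files and flags genomes carrying a long, high-identity hit. The file names, directory layout and thresholds are assumptions made for illustration, not those used in the study.

```python
# Screen a set of genomes for cbpA with blastn (tabular output, -outfmt 6).
import glob
import subprocess
import pandas as pd

QUERY = "cbpA_Bt407.fasta"   # hypothetical FASTA file with the cbpA query sequence
COLUMNS = ["qseqid", "sseqid", "pident", "length", "mismatch", "gapopen",
           "qstart", "qend", "sstart", "send", "evalue", "bitscore"]

hits = []
for genome in glob.glob("genomes/*.fasta"):        # hypothetical genome directory
    out = subprocess.run(
        ["blastn", "-query", QUERY, "-subject", genome,
         "-outfmt", "6", "-evalue", "1e-20"],
        capture_output=True, text=True, check=True)
    for line in out.stdout.splitlines():
        row = dict(zip(COLUMNS, line.split("\t")))
        row["genome"] = genome
        hits.append(row)

df = pd.DataFrame(hits)
if not df.empty:
    df[["pident", "length"]] = df[["pident", "length"]].astype(float)
    # call a genome cbpA-positive if it has a long, high-identity hit (toy cutoffs)
    positive = df[(df.pident >= 80) & (df.length >= 1000)]["genome"].unique()
    print(f"{len(positive)} genomes with a putative cbpA hit")
```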
Phylogenetic tree construction
Altogether 113 strains were used to build a phylogenetic tree representative of the B. cereus group. The tree was constructed as in Fagerlund et al. (2016) by employing the distance-based Neighbour-Joining method BioNJ (Gascuel, 1997) on MLST data obtained using the Tourasse-Helgason MLST scheme (Helgason et al., 2004) (http://mlstoslo.uio.no). Tree construction and visualization were executed using Seaview 4 (Gouy et al., 2010) and TreeGraph 2 (Stover and Muller, 2010).
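A minimal sketch of this kind of distance-based tree building is shown below using Biopython. Note that Biopython's standard Neighbour-Joining implementation stands in here for BioNJ, and the input alignment file name is hypothetical.

```python
# Build an NJ tree from a concatenated MLST alignment (one sequence per strain).
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("mlst_concatenated.fasta", "fasta")   # hypothetical input

calculator = DistanceCalculator("identity")        # p-distances from the alignment
constructor = DistanceTreeConstructor(calculator, "nj")
tree = constructor.build_tree(alignment)

tree.root_at_midpoint()
Phylo.draw_ascii(tree)                             # quick text rendering
Phylo.write(tree, "bcereus_group_nj.nwk", "newick")
```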
Bacterial strains and culture conditions
The model strain used in this study is the acrystalliferous B. thuringiensis strain 407 Cry - (Lereclus et al., 1989;Sheppard et al., 2013).
Cultures were routinely inoculated to an optical density at 600 nm (OD 600 ) of 0.05 from an overnight culture and grown at 30 ͦ C and 220 rpm in Luria Bertani (LB) broth or bactopeptone medium (1% w/v bactopeptone, 0.5% w/v yeast extract, 1% w/v NaCl). For cloning and expression in Escherichia coli, ampicillin was used at 50 or 100 µg ml −1 , kanamycin at 50 µg ml −1 , erythromycin at 200 µg ml −1 (pMAD), and tetracycline at 12 µg ml −1 . Erythromycin was used at 10 µg ml −1 to maintain the pHT304-Pxyl plasmid constructs in B. thuringiensis. Unless otherwise indicated, induction of gene expression from the xylA promoter in pHT304-Pxyl was performed by addition of 1 mM xylose to the growth medium.
Construction of deletion mutants and overexpression strains
For overexpression and complementation, the low-copy number E. coli/Bacillus shuttle vector pHT304-Pxyl was used, in which xylR and the xylA promoter from B. subtilis was inserted into the pHT304 cloning site (Arantes and Lereclus, 1991), allowing xylose-inducible expression of downstream cloned genes. The cbpA gene was PCR amplified from B. thuringiensis 407 using primers listed in Supplementary Table S1, and inserted into the plasmid vector using primer-incorporated restriction sites.
The mutant allele was designed to contain the start and stop codons of the deleted cbpA gene, separated by the CAATTG recognition sequence for restriction enzyme MfeI, thus creating an in-frame deletion not expected to exert polar effects on surrounding genes. Approximately 700 bp of DNA sequence homologous to the upstream and downstream region of the gene was PCR amplified using primers listed in Supplementary Table S1. The up-and downstream fragments were fused using the primer-incorporated restriction sites, cloned into pCRII-TOPO vector (Invitrogen) and then transferred to pMAD-I-SceI. The constructs were introduced into B. thuringiensis 407 by electroporation (Masson et al., 1989) and allelic exchange was performed essentially as described (Janes and Stibitz, 2006). All mutant alleles were verified by sequencing of PCR products generated with primers designed to anneal outside of the sequences used for homologous recombination (Supplementary Table S1). Plasmid and strains used in this study are listed in Table 1.
Isolation of RNA and reverse transcription quantitative PCR
For the isolation of RNA from biofilm and planktonic cells for subsequent RT-qPCR, methods described in Fagerlund et al. (2016) were followed. Briefly, cells were allowed to form biofilm on glass wool and samples were harvested after 24 h incubation by transfer of biofilm cells to ice-cold 60% methanol. Planktonic cells were incubated with an equal volume of ice-cold methanol and all samples were centrifuged for collection of cells. Cell lysis was done using a Precellys 24 Tissue Homogenizer (Bertin) and RNeasy Mini or Midi Kits (Qiagen) were used for RNA isolation. After treatment with DNase and further purification, cDNA synthesis was performed with SuperScript III Reverse Transcriptase (Invitrogen), performed in duplicate for each sample (two technical replicates). For all samples a negative control reaction without reverse transcriptase was included. RT-qPCR was carried out in a LightCycler 480 Real-Time PCR System (Roche) using primers shown in Supplementary Table S1. The second derivative maximum method in the LightCycler 480 software (Roche) was utilized to obtain quantification cycle (C q ) values. The expression of the target gene in each biological replicate was converted into E Cq values (Pfaffl, 2001) and then normalized to the geometric mean of the E Cq values determined for the three reference genes gatB/yqeY, rpsU, and udp (Reiter et al., 2011).
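A small sketch of the normalization step is given below, assuming the common convention that each Cq is converted to a relative quantity E^(−Cq) (amplification efficiency E raised to minus Cq) before the target is divided by the geometric mean of the three reference genes. The efficiencies and Cq values are invented for illustration.

```python
# Efficiency-corrected RT-qPCR normalization in the spirit of Pfaffl (2001).
import numpy as np

def rel_quantity(E, cq):
    """Relative quantity from amplification efficiency E (e.g. 2.0) and Cq."""
    return E ** (-cq)

# one biological replicate: mean Cq of the two technical replicates per gene (toy values)
cq = {"cbpA": 22.4, "gatB/yqeY": 18.1, "rpsU": 16.9, "udp": 19.3}
eff = {"cbpA": 1.98, "gatB/yqeY": 2.00, "rpsU": 1.95, "udp": 1.97}

target_q = rel_quantity(eff["cbpA"], cq["cbpA"])
ref_qs = [rel_quantity(eff[g], cq[g]) for g in ("gatB/yqeY", "rpsU", "udp")]
norm_factor = np.exp(np.mean(np.log(ref_qs)))      # geometric mean of the references

normalized_expression = target_q / norm_factor
print(f"normalized cbpA expression: {normalized_expression:.3e}")
```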
2.6. Global transcriptional profiling using 70-mer oligonucleotide microarrays

RNA isolation, cDNA synthesis, labeling and purification were performed as previously described (Gohar et al., 2008). RNA was isolated from B. thuringiensis 407 cells in exponential growth (1.5 h after inoculation) and was precipitated (20 μg) in 0.3 M NaAc (pH 5.5) and 70% ethanol overnight at −80°C ahead of cDNA preparation. Microarray slides were printed at the microarray core facility of the Norwegian University of Science and Technology (NTNU). Design, printing, prehybridization, hybridization and scanning of the slides, and analysis of the data, were performed as described in Gohar et al. (2008). The microarray experiment was based on four slides, all biological replicates. P-values were computed using a false discovery rate (FDR) of 0.05.
The microarray slides contain 70-mer oligonucleotide probes designed to detect open reading frames (ORFs) in B. anthracis Ames, B. anthracis A2012, and B. cereus ATCC 14579, in addition to selected genes from B. cereus ATCC 10987 (Kristoffersen et al., 2007). To facilitate analyses in B. thuringiensis 407, all probe sequences on the microarray had been analyzed by BLAST for hits to annotated genes in the B. thuringiensis 407 draft genome sequence (GenBank accession ACMZ00000000.1, as of 30.04.2009), and the gene lists used for microarray analysis are based on the annotations from this GenBank entry. Only probes with 93% identity or greater to a transcript/feature sequence of B. thuringiensis 407 were included in the analysis. COG categories were obtained for the analyzed genes as reported in the IMG database (http://img.jgi.doe.gov). The data obtained from this analysis was submitted to the ArrayExpress archive (https://www.ebi.ac.uk/ arrayexpress/) under the accession number E-MTAB-8092.
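The sketch below illustrates the kind of filter described above — a per-gene test on log2 ratios across the four replicate slides, Benjamini-Hochberg FDR correction, and a two-fold-change cutoff — on toy data. The actual microarray analysis pipeline (as described in Gohar et al., 2008) may differ in its test statistic and data layout.

```python
# Toy differential-expression filter: |fold change| >= 2 and FDR-adjusted p < 0.05.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
# log2(mutant / wild type) ratios, one column per microarray slide (invented data)
log2_ratios = pd.DataFrame(rng.normal(0.0, 0.4, size=(500, 4)),
                           index=[f"gene_{i}" for i in range(500)])

# one-sample t-test per gene: is the mean log2 ratio different from 0?
t_stat, p_raw = stats.ttest_1samp(log2_ratios.values, popmean=0.0, axis=1)
reject, p_adj, _, _ = multipletests(p_raw, alpha=0.05, method="fdr_bh")

mean_log2 = log2_ratios.values.mean(axis=1)
hits = log2_ratios.index[reject & (np.abs(mean_log2) >= 1.0)]  # |fold change| >= 2
print(f"{hits.size} genes pass FDR < 0.05 and the two-fold-change cutoff")
```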
Biofilm assay
Biofilm forming capability was investigated in a multi-well plate screening assay, modified from a previously described method (Auger et al., 2006). Cultures grown to early exponential phase in bactopeptone medium at 30°C (OD 600 ≈ 0.3) were used to inoculate fresh bactopeptone medium to an OD 600 of 0.01. For each strain, four wells of a 24-well polystyrene plate (Falcon 353047) were filled with 500 µl of the bacterial culture suspension. Plates were produced in duplicate, and each plate contained four wells of bactopeptone medium as control. Following incubation at 30 ͦ C for 24 h, 48 h and 72 h, the wells of one of the microplates were washed once with phosphate-buffered saline (PBS) and stained with an 0.1% (w/v) aqueous solution of methyl violet 6B for 30 min at room temperature. The wells were then washed three times with PBS and dried upside down over night. For quantification of biofilm formation, the dye was solubilized by adding 500 µl of a 1:4 acetone/ethanol mixture to each well and incubated for 10 min at room temperature, followed by measuring the absorbance at 575 nm in a plate reader (BMG Labtech ClarioStar).
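Quantification of the solubilized dye reduces to simple arithmetic on the plate-reader output: subtract the mean of the medium-only control wells and summarize the four replicate wells per strain. The sketch below uses invented A575 values.

```python
# Summarize methyl violet biofilm staining per strain (toy plate-reader values).
import numpy as np

a575 = {
    "medium control":         [0.09, 0.10, 0.08, 0.09],
    "Bt407":                  [0.88, 0.95, 0.91, 0.90],
    "Bt407 pHT304-Pxyl-cbpA": [0.52, 0.49, 0.55, 0.50],
}

blank = np.mean(a575["medium control"])
for strain, wells in a575.items():
    if strain == "medium control":
        continue
    corrected = np.array(wells) - blank          # subtract the medium-only background
    print(f"{strain}: A575 = {corrected.mean():.2f} +/- {corrected.std(ddof=1):.2f}")
```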
Motility assay
Motility was assessed essentially as described in Fagerlund et al. (2016). Bacteria were grown in LB medium to OD 600 ≈ 1.0, and 5 µl of culture was spotted on LB agar plates containing 0.3% agar, with 1 mM xylose and/or 10 µg ml −1 erythromycin added if appropriate. Plates were incubated for 7 h at 30°C and motility was measured as the distance between the colony edge and the outer line of the swimming zone.
Whole cell adhesion assay
Table 1. Strains and plasmids used in this study. (a) Ap^r, Ery^r, Tet^r: resistance to ampicillin, erythromycin and tetracycline, respectively.

Binding to biotic surfaces was investigated with a whole cell adhesion assay adapted from the method described in Saragliadis and Linke (2019). The following substances (purchased from Sigma-Aldrich and Merck) were used in the assay: collagen type I (calf skin, C9791), collagen type II (bovine cartilage, C1188), collagen type III (human placenta, C4407), collagen type IV (human placenta, C7521), collagen type V (human plasma, C3657), fibrinogen (bovine, 341573), fibronectin (bovine plasma, F4759). All collagens were diluted to 20 µg ml−1 in 0.01 M acetic acid, while fibrinogen and fibronectin were diluted in PBS. 96-well polystyrene microplates (Corning 3370) were coated with 125 µl per well of each substance for 1 h at room temperature. After blocking with 50 mg ml−1 bovine serum albumin (BSA, Sigma-Aldrich) in PBS for 30 min at room temperature, the wells were washed three times with 0.1 mg ml−1 BSA in PBS. Ahead of the assay, bacteria were grown to early exponential phase at 30 °C (OD600 ≈ 0.2-0.3) in LB medium, supplemented with 10 µg ml−1 erythromycin and 1 mM xylose for strains containing the pHT304-Pxyl plasmid. Bacterial culture suspensions were centrifuged for 15 min at 1500×g, resuspended in PBS and adjusted to an OD600 of 0.9-1.0. For each strain in each independent experiment, eight collagen-coated wells and eight empty wells, representing technical replicates, were filled with 125 µl of the bacterial suspension and incubated for 1 h at room temperature. Additionally, eight collagen-coated and eight empty wells (technical replicates) were incubated with PBS as negative controls. After washing three times with PBS, wells were stained with 125 µl of a 0.1% (w/v) aqueous solution of methyl violet 6B for 30 min at room temperature. Solubilization of the dye was performed as described for the biofilm assay, but with 125 µl of a 1:4 acetone/ethanol mixture. Absorbance at 575 nm was measured in a plate reader (BMG Labtech ClarioStar).
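For the downstream analysis (see also the figure legends), a per-experiment adhesion signal can be taken as the mean A575 of the coated wells minus that of the uncoated wells, with strains compared across independent experiments by a two-tailed paired t-test. The sketch below uses invented readings.

```python
# Toy analysis of the whole-cell adhesion readout.
import numpy as np
from scipy import stats

def adhesion_signal(coated, uncoated):
    """Mean A575 of coated wells minus mean A575 of uncoated wells."""
    return np.mean(coated) - np.mean(uncoated)

# experiment 1, wild type on collagen I (eight technical replicates per condition)
wt_exp1 = adhesion_signal(
    [0.44, 0.41, 0.39, 0.43, 0.40, 0.42, 0.45, 0.41],
    [0.05, 0.04, 0.06, 0.05, 0.04, 0.05, 0.06, 0.05])

# one adhesion value per independent experiment for each strain (invented numbers)
wild_type  = np.array([wt_exp1, 0.38, 0.45])
delta_cbpA = np.array([0.03, 0.02, 0.04])

t_stat, p_value = stats.ttest_rel(wild_type, delta_cbpA)   # paired, two-tailed
print(f"wild type vs deletion mutant on collagen I: t = {t_stat:.2f}, p = {p_value:.3f}")
```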
Adhesion to abiotic surfaces
For examination of binding to abiotic surfaces, bacteria were grown and prepared as described above (2.9). Steel coupons were prepared as described earlier (Castelijn et al., 2013) by treatment with 1 M NaOH at 50°C for 30 min, followed by washing with dH 2 O and incubation in acetone for 15 min at room temperature. After four subsequent washes in dH 2 O the steel coupons were autoclaved. Glass cover slips were autoclaved and all plates were dried overnight. The adhesion assay is based on a method described by Hayrapetyan and co-workers (Hayrapetyan et al., 2015). Coupons and plates were placed vertically into the wells of 12-well plates (Corning 3737) containing 2 ml bacterial suspension per well. After 60 min of incubation at room temperature, coupons and plates were washed by gently dipping them into sterile PBS. Each coupon was then transferred to a 50 ml tube containing 3 ml sterile PBS and 0.5 g autoclaved glass beads (Sigma-Aldrich G4649 (< 106 µm)). To detach cells from the coupons, tubes were vortexed at full speed for 1 min. The number of colony forming units (CFUs) for each coupon was determined from this bacterial suspension.
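Determining CFUs from the 3 ml detachment suspension is standard dilution-plating arithmetic; a small sketch is given below. The plated volume and dilution are assumptions made for illustration, since the text only states that CFUs were counted per coupon.

```python
# CFU per coupon from colony counts on a plated dilution of the 3 ml suspension.
def cfu_per_coupon(colonies, dilution, plated_volume_ml, suspension_volume_ml=3.0):
    """Total CFU recovered from one coupon."""
    cfu_per_ml = colonies / (plated_volume_ml * dilution)
    return cfu_per_ml * suspension_volume_ml

# e.g. 86 colonies counted after plating 100 µl of a 10^-2 dilution (invented numbers)
print(f"{cfu_per_coupon(colonies=86, dilution=1e-2, plated_volume_ml=0.1):.2e} CFU/coupon")
```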
Galleria mellonella in vivo infection assay
The virulence-related properties of CbpA were assessed by comparing the killing effect of the B. thuringiensis 407 wild type (Bt407), the cbpA deletion strain (Bt407ΔcbpA), and the complementation strain (Bt407ΔcbpA pHT304-Pxyl-cbpA), by infection (force feeding) in 5th instar Galleria mellonella larvae. G. mellonella eggs were hatched at 28°C and the larvae reared on beeswax and pollen. For infection experiments, groups of 20 to 25 G. mellonella larvae, weighing about 200 mg were used. Xylose (20 mM) was added to the LB growth medium of all strains, as well as to the bacterial inoculums and the toxin alone control (Cry1C) at time zero (time point of force feeding). Larvae were force fed a second time with 10 µl 20 mM xylose 5 h later (in order to again activate CbpA expression from the pHT304-Pxyl plasmid in the complementation strain). Infections were otherwise performed as previously described (Fedhila et al., 2006) by force feeding larvae with 10 µl of a mixture containing 4-5 × 10 6 of vegetative bacteria (exponential growth OD 600 ≈ 1 in LB with 20 mM xylose) and 3 µg of activated Cry1C toxin. The larvae in the control group were fed either 10 μl PBS buffer or 10 μl Cry1C toxin + xylose. Mortality of the infected larvae was observed after 4 h, 24 h and 48 h. The chosen dose was expected to result in about 70 ( ± 5) % larvae mortality for the infection with the wild type B. thuringiensis 407 at 37°C after 48 h .
Transcriptomic analysis reveals that cbpA expression is influenced by cellular c-di-GMP levels
Species in the B. cereus group have several genes encoding proteins with GGDEF and/or EAL domains (Fagerlund et al., 2016) predicted to be involved in metabolism of c-di-GMP. In the biofilm model strain B. thuringiensis 407, belonging to phylogenetic group IV within the B. cereus group (Guinebretiere et al., 2008), eleven genes encoding GGDEF and/or EAL domains have been identified, of which eight were predicted or shown to have c-di-GMP metabolizing enzymatic activity. Among these, the tandem GGDEF-EAL protein CdgF was shown to be the main c-di-GMP metabolizing enzyme controlling biofilm formation in B. thuringiensis 407 under oxygenated conditions, and biofilm formation was abolished in a markerless cdgF deletion mutant, which contains lower cellular levels of c-di-GMP (Fagerlund et al., 2016). In order to identify gene candidates that are potentially affected by lower levels of c-di-GMP, a whole-genome transcriptional profiling analysis comparing the markerless cdgF deletion mutant (Bt407ΔcdgF ; Table 1) with the isogenic wild-type strain was performed with exponentially growing cells, after 1.5 h of growth in planktonic culture. The microarray analysis revealed that among only three B. thuringiensis 407 genes differentially regulated in the cdgF deletion mutant during early exponential growth (using cutoffs of two-fold up-or downregulation and false discovery rate (FDR)-adjusted p-value < 0.05; Supplementary Table S2), BTB_RS05575 (GenBank: BTB_c11270) was downregulated 2.5-fold in the cdgF deletion mutant, suggesting that this gene was directly or indirectly induced by CdgF. Expression of the two other genes, one encoding a putative aldo/keto reductase and the other a putative MFS-type transporter, was upregulated 2.5-fold (Supplementary Table S2).
3.2. Sequence analysis suggests that the protein encoded by BTB_RS05575 (BTB_c11270) is a member of the MSCRAMM family

BTB_RS05575 (BTB_c11270) was interrupted by a gap between two contigs in the first available draft genome sequence of B. thuringiensis 407 (GenBank accession number ACMZ00000000) and is annotated as a pseudogene due to a frameshift mutation in the closed B. thuringiensis 407 genome (RefSeq: NC_018877.1; (Sheppard et al., 2013)), potentially due to challenges in assembly of repeat regions. We therefore designed PCR primers (Supplementary Table S1) matching the contig ends in ACMZ00000000, PCR amplified across the gap, and sequenced the gap fragment. This approach revealed a novel 192 bp sequence closing the gap between the two contigs, and corrected the frameshift mutation present in the gene sequence in NC_018877.1. The obtained sequence of the closed and intact BTB_RS05575 gene and the organization of the surrounding sequence are shown in Supplementary Fig. S1. The protein encoded by BTB_RS05575 (BTB_c11270) in B. thuringiensis 407 is an ortholog of BC1060 from B. cereus ATCC 14579. Analysis of the BTB_RS05575 (BTB_c11270) promoter region showed that it contains a match to a class I c-di-GMP responsive riboswitch (GEMM), originally identified and named Bc2 in B. cereus ATCC 14579 by Sudarsan and coworkers (Sudarsan et al., 2008). This riboswitch was shown to respond to increased levels of c-di-GMP by switching on transcription of the downstream gene (Lee et al., 2010; Tang et al., 2016), suggesting that this is the mechanism by which lower c-di-GMP levels in the cdgF deletion mutant mediate regulation of expression of BTB_RS05575 (BTB_c11270).
The corrected BTB_RS05575 (BTB_c11270) coding sequence in B. thuringiensis 407 is predicted to be 2187 amino acids in length, and contains an N-terminal signal peptide and a C-terminal cell wall sorting signal comprising an LPXTG sortase substrate motif (Gaspar et al., 2005). The processed protein is predicted to have a molecular mass of 236 kDa and a length of 2157 amino acids. Protein domain analysis using InterProScan (Jones et al., 2014) further revealed that BTB_RS05575 (BTB_c11270) shares the typical domain structure of proteins belonging to the group of MSCRAMMs (microbial surface components recognizing adhesive matrix molecules). These proteins are typically characterized by the presence of an A-domain which facilitates binding to extracellular matrix molecules such as collagen, fibrinogen or fibronectin, the LPXTG cell-wall anchor, and often a B-repeat region (Patti et al., 1994). The non-repetitive N-terminal A-region of BTB_RS05575 (BTB_c11270) was predicted to contain a fibrinogen binding domain (Interpro IPR011252; residues 31-175), a collagen binding domain (Pfam PF05737; residues 184-307), and a fimbrial isopeptide formation D2 domain (Interpro IPR026466; residues 327 to 461) which among others is found in the backbone of the pilus of Streptococcus pneumoniae (Spraggon et al., 2010) (Fig. 1B). The two N-proximal domains (aa 31-307) however shared only low primary sequence identity (17-20%) with well-characterized binding domains from other MSCRAMM proteins, such as the N2-N3 domains of fibrinogen-binding SdrG from Staphylococcus epidermidis (Ponnuraj et al., 2003), S. aureus ClfA (Ganesh et al., 2008) and S. aureus SdrD (Wang et al., 2013), and the N1-N2 domains of the collagen-binding protein Cna from S. aureus (Zong et al., 2005). In accordance with current terminology, the subsection aa 31-307 of the A-region of BTB_RS05575 (BTB_c11270) will be referred to as the N1 and N2 domains, while the subsection covered by residues 327-461 will be referred to as the N3 domain. The B-repeat region in BTB_RS05575 (BTB_c11270), covering residues 491-2034, is composed of 17 units of the Cna_B-type repeat domain (Pfam PF05738). Cna_B repeats were first observed in the B-region of the S. aureus collagen binding MSCRAMM protein Cna (Deivanayagam et al., 2000), but are also found in Gram-positive pilins (Krishnan, 2015). Also, a proline-rich region composed of eight highly similar repeats with the consensus sequence PGTP[N/D]PEK is located between the B-region and the LPXTG sortase substrate motif (residues 2059-2149). Such proline-rich short repeated sequences are present in the C-terminal region of numerous Gram-positive cell wall-anchored surface proteins, including S. aureus Cna (Symersky et al., 1997). Based on the domain structure of BTB_RS05575 (BTB_c11270), we named the protein CbpA (Collagen binding protein A).
Expression of cbpA peaks during early exponential growth
Investigation of cbpA mRNA expression levels throughout the B. thuringiensis 407 growth phase in a planktonic culture by RT-qPCR revealed an expression peak in the early exponential growth phase, and a substantial expression decline throughout exponential growth (Fig. 1A). Expression of cbpA was reduced by 99% after 3 h cultivation compared to its highest expression at 1.5 h, which was the earliest time point of analysis. The level of cbpA expression in a 24 h biofilm was roughly the same as in a planktonic culture after 2.5 h incubation.
Distribution of cbpA in the B. cereus group population structure
Since cbpA was under c-di-GMP transcriptional control through its upstream riboswitch (Sudarsan et al., 2008;Tang et al., 2016) and the CdgF diguanylate cyclase and most of the other proteins potentially involved in c-di-GMP metabolism are highly conserved within the B. cereus group (Fagerlund et al., 2016), we investigated the phylogenetic distribution of cbpA. All strains belonging to the B. cereus group for which fully sequenced and closed genomes were available through NCBI at the time of analysis were checked for the presence of cbpA by using BlastN and BlastP, using cbpA and CbpA sequences from B. thuringiensis 407 as query. Interestingly, all strains belonging to the phylogenetic cluster IV carry the cbpA gene, in addition to a few other strains within phylogenetic groups III, V and VI (Fig. 2). The surrounding chromosomal region was investigated for evidence of horizontal gene transfer (HGT), but organisation of the surrounding genes in the strains included in the study (Fig. 2) and the fact that the GC content of this area conforms with the average GC content of the rest of the chromosome of strain B. thuringiensis 407, does not suggest that cbpA was acquired by HGT. Furthermore, no evidence of transposable elements was identified in this region by running the chromosome of the B. thuringiensis 407 strain through the ISsaga2 semi-automatic pipeline (Varani et al., 2011) and no indication for the presence of prophages in this region was found by using the phage identification tool PHASTER (Arndt et al., 2016). Phylogenetic cluster IV is comprised of B. cereus and B. thuringiensis strains which are often isolated from nonclinical sources including nature environments and insects (Guinebretiere et al., 2008), http://mlstoslo.uio.no). cbpA was not identified in B. anthracis or any emetic-toxin producing strains which group within phylogenetic cluster III, and only four of the 40 B. cereus or B. thuringiensis strains belonging to cluster III contained cbpA.
The highest variability within CbpA of the analysed strains was observed in the B-region and in the proline-rich region ( Supplementary Fig. S2), while the N1-N3 putative binding domains were highly conserved, generally at > 90% identity. Notably, several of the analyzed strains were found to contain the cbpA gene, but based on the genome sequences present in GenBank carry genetic changes implying the strains may carry cbpA pseudogenes or alternatively, as found in our resequencing of B. thuringiensis 407 cbpA, may be the result of genome sequencing errors due to extensive internal repeat regions in the cbpA gene. All strains that contained the cbpA gene (or pseudogene) also contained a predicted class I c-di-GMP binding riboswitch in the upstream untranslated region (UTR).
CbpA of B. thuringiensis 407 binds collagen during early exponential growth
To uncover a possible host cell molecule binding to CbpA, a markerless gene deletion mutant ΔcbpA and corresponding cbpA complementation-and overexpression strains utilizing the xylose-inducible vector pHT304-Pxyl were constructed (Table 1). The capability of these strains to bind to different adhesive host cell matrix molecules was examined with a bacterial whole cell adhesion assay. Considering that cbpA reached its peak expression during the early exponential growth phase (Fig. 1A) bacteria were harvested at this point of growth for the assay, and adhesion was tested for collagen types I -V. For the B. thuringiensis 407 ΔcbpA deletion mutant (Bt407ΔcbpA), binding was completely abolished to all types of collagen tested, while the cbpA overexpression strain (Bt407 pHT304-Pxyl-cbpA) showed statistically significant increased adhesion to collagens I -IV compared to the vector control strain (Bt407 pHT304-Pxyl) (Fig. 3A). As a control, adhesion was restored in the complementation strain (Bt407ΔcbpA pHT304-Pxyl-cbpA) to roughly the same levels as in the wild type strain for all tested collagens (no statistically significant difference). The adhesion assay clearly revealed that under the tested conditions, CbpA is essential for binding to collagen.
As described above, prediction protein domain analysis methods potentially suggested additional binding ligands for CbpA, and specific MSCRAMM proteins are known to have the ability to adhere to other extracellular matrix molecules than collagen (Wann et al., 2000;Vazquez et al., 2011;Larson et al., 2013). Deletion of cbpA did however not produce any difference in adhesion to either fibrinogen or fibronectin compared to B. thuringiensis 407 wild type. In fact, under the tested conditions none of the strains produced detectable binding of the bacterial cells to wells coated with either of these two molecules (Fig. 3A).
The influence of CbpA on B. thuringiensis 407 adhesion to abiotic surfaces was also investigated. The control from the binding assay for biotic factors included a background test, showing that CbpA did not affect bacterial whole cell binding to polystyrene (Fig. 3A). Binding to steel and glass surfaces was investigated with an assay based on CFU count as described by Hayrapetyan and co-workers (Hayrapetyan et al., 2015). Based on the results of this assay CbpA does not influence either binding to steel or glass in B. thuringiensis 407 (Fig. 3B).
Other phenotypes affected by CbpA
Both biofilm formation and motility are regulated by intracellular c-di-GMP levels in many bacteria, including those in the B. cereus group (Fagerlund et al., 2016). Adhesion is the first step of the biofilm formation process, and expression of the CbpA adhesin was shown to be positively controlled by a c-di-GMP binding riboswitch (Lee et al., 2010; Tang et al., 2016). Therefore, a role of CbpA in both of these cellular processes was investigated. The CbpA overexpression strain (Bt407 pHT304-Pxyl-cbpA) formed less biofilm compared to the vector control strain (Fig. 4A); however, we did not observe any significant effect on biofilm formation following deletion of cbpA.
As for the biofilm assay, we did not observe any change in motility for the cbpA deletion mutant compared to its isogenic wild type strain (tested at OD 600 = 1; Fig. 4B). However, the overexpression strain (Bt407 pHT304-Pxyl-cbpA) had a significantly smaller swimming zone compared to the vector control strain (Bt407 pHT304-Pxyl), indicating an impairment in motility upon cbpA overexpression ( Fig. 4C and D). In order to ensure that the influence of CbpA on motility is not dependent on growth phase, the test was also done with cells harvested during the early exponential growth phase (OD 600 = 0.2) and stationary phase, obtaining similar results compared to the original assay ( Supplementary Fig. S3).
Since adhesion to host factors is the first step of infection, the function of MSCRAMMs is usually predicted to be connected to infection processes (Patti et al., 1994). In order to investigate if the presence of CbpA on the surface of B. thuringiensis 407 cells might play a role during infection, a well-established in vivo infection model was utilised, employing G. mellonella larvae (Bouillaut et al., 2005; Ramarao et al., 2012; Voros et al., 2014; Kamar et al., 2017). We did not observe any significant phenotype for the ΔcbpA deletion mutant compared to the B. thuringiensis 407 wild type strain during infection of G. mellonella (two-tailed Student's t-test; Fig. 5). There was however a trend towards mortality being slightly but not statistically significantly higher in larvae infected with the complementation strain (B. thuringiensis 407 ΔcbpA pHT304-Pxyl-cbpA) compared to the wild type.
Discussion
To the best of our knowledge this study presents the first experimental evidence that the B. thuringiensis MSCRAMM protein CbpA mediates binding to collagen, an extracellular host cell matrix protein.
The CbpA ortholog in the B. thuringiensis strain BMB171 has been subject of a study by Tang and colleagues (Tang et al., 2016), which focused mainly on the function of the upstream c-di-GMP riboswitch but also aimed to elucidate the function of this protein in cellular processes. Although Tang and colleagues named this protein Cap ("collagen-adhesion protein") their previous study does not provide any investigation into an actual binding ligand for this putative adhesion protein, and the name was given solely on the basis of comparison of predicted protein domains, and comparison to proteins with very low sequence homology. As described here (3.2), domain analyses of the primary amino acid sequence predicted both collagen and fibrinogen as possible binding ligands for CbpA/Cap. By utilizing a bacterial whole cell adhesion assay we were able to determine that CbpA is indeed capable of binding to collagen, as adhesion to this extracellular host cell matrix protein was completely abolished in a ΔcbpA deletion mutant. In our study, CbpA is essential for B. thuringiensis 407 attachment to collagens I, II, III, IV and V, for bacterial cells in exponential growth phase. Results from the adhesion assays further suggest that CbpA is not able to bind to neither fibrinogen nor fibronectin, contrary to several other MCSRAMM proteins (Wann et al., 2000;Vazquez et al., 2011;Larson et al., 2013). We propose that the name of this protein is changed from Cap to CbpA (collagen-binding protein A), in line with existing genetic nomenclature. The name Cap used for an adhesion protein also leads to unnecessary confusion, since within the B. cereus group cap genes are a hallmark for genes responsible for synthesis of the poly-γ-D-glutamic acid capsule (Cachat et al., 2008), typically found in highly virulent strains.
Members of the B. cereus group continue to pose a significant problem in the food industry, and the ability of these bacteria to bind to abiotic surfaces, especially steel which is commonly used in food processing plants, has been investigated previously (Wijman et al., 2007;Hayrapetyan et al., 2015;Galie et al., 2018). Although there are examples of other MSCRAMM proteins having a functional role in attachment of bacterial cells to polystyrene (Shimoji et al., 2003;Schroeder et al., 2009), as well as instances of other cell surface protein structures such as pili playing a role in adhesion of Gram-positive bacteria to abiotic surfaces (Pratt and Kolter, 1998;Giltner et al., 2006), there is no evidence suggesting that CbpA facilitates attachment to surfaces such as polystyrene, glass or stainless steel, substantiating the possibility that CbpA-mediated adhesion is specific to collagen.
Biofilm formation and motility are cellular processes which are typically affected by c-di-GMP signalling (Fagerlund et al., 2016; Jenal et al., 2017). Since cbpA expression is regulated at least partly by an upstream c-di-GMP binding riboswitch (Lee et al., 2010; Tang et al., 2016), we aimed to determine if CbpA had a role in mediating these processes. Considering that it has been shown that higher levels of c-di-GMP promote biofilm formation in B. thuringiensis 407 (Fagerlund et al., 2016) and that expression of cbpA is positively regulated by c-di-GMP, it could be expected that cbpA expression in biofilm cells would be high (Lee et al., 2010; Fagerlund et al., 2016; Tang et al., 2016). Analysis by RT-qPCR however revealed that cbpA expression was not at its peak in biofilm cells, but in planktonic cells during the early exponential growth phase, possibly indicating that expression of cbpA is subject not only to transcriptional control by c-di-GMP, but potentially also to other mechanisms.

We did not detect a significant difference in biofilm mass nor in motility upon cbpA deletion in B. thuringiensis 407. In contrast, Tang and colleagues (Tang et al., 2016) report both increased biofilm formation and higher motility in a cbpA deletion mutant of B. thuringiensis BMB171. This is surprising, since B. thuringiensis 407 and B. thuringiensis BMB171 are closely related strains within the B. cereus group, and the sequence identity between the predicted binding domains N1-N3 of CbpA in these two strains is 97%. Although we could not relate the absence of CbpA from the B. thuringiensis 407 cell surface to biofilm or motility phenotypes, the overexpression of CbpA in B. thuringiensis 407 led to a decrease in both biofilm formation and motility. This is in contrast to the common pattern of c-di-GMP regulatory networks, where motility and biofilm formation are normally oppositely regulated (Jenal et al., 2017), and which is also found within the c-di-GMP regulatory network in B. thuringiensis 407 (Fagerlund et al., 2016), possibly suggesting that the observed effects of CbpA overexpression on biofilm formation and motility may be caused by simple physical interruption of biofilm and motility processes due to unnaturally high amounts of CbpA protein on the cell surface, rather than representing a true biological function of CbpA. CbpA is a large protein with a predicted molecular weight of 236 kDa and contains a long repetitive B-region which is thought to help project the binding domain A away from the bacterial cell surface, presenting it to its ligand(s) for binding (Rich et al., 1998; Deivanayagam et al., 2000; Jemima Beulin and Ponnuraj, 2017). With the predicted size and structure of the CbpA protein, one may speculate that increased amounts of the protein present at the cell surface could possibly disrupt normal movement of the flagella, as well as potentially disturb the accessibility of other surface adhesins, leading to decreased swimming motility and a decreased ability of the bacteria to attach to surfaces and/or to each other.

Fig. 3. Effect of CbpA on adhesion in B. thuringiensis 407. Binding of B. thuringiensis 407 wild type (Bt407), cbpA deletion mutant (Bt407ΔcbpA), empty vector control strain (Bt407 pHT304-Pxyl), CbpA overexpression strain (Bt407 pHT304-Pxyl-cbpA) and complementation strain (Bt407ΔcbpA pHT304-Pxyl-cbpA) to extracellular matrix molecules and abiotic surfaces was investigated by a bacterial whole cell adhesion assay. (A) Bacterial cells bound to collagen, fibronectin and fibrinogen coated onto polystyrene plates were visualized and quantified by staining with methyl violet 6B. The mean and standard deviation of three independent experiments are shown for each extracellular matrix molecule tested, except for the binding assays against collagen I and collagen IV, where eight and four independent experiments were performed, respectively. (B) Adhesion of cells to steel coupons and glass plates was measured by CFU count. The mean and standard deviation of three independent experiments for each abiotic surface are shown. For both experiments (A, B) a two-tailed paired t-test was performed to test for statistical significance, comparing the deletion mutant and the complementation strain with the wild type, and the overexpression strain with the vector control strain, respectively, for each of the tested conditions (surfaces) separately (*P < 0.05; **P < 0.01; ***P < 0.001).

Fig. 4. Influence of CbpA on biofilm formation and motility. The B. thuringiensis 407 wild type (Bt407), cbpA deletion mutant (Bt407ΔcbpA), empty vector control strain (Bt407 pHT304-Pxyl), CbpA overexpression strain (Bt407 pHT304-Pxyl-cbpA) and complementation strain (Bt407ΔcbpA pHT304-Pxyl-cbpA) were assessed for biofilm formation and motility. (A) The capability to form biofilm was investigated with an assay based on staining of the biofilm with methyl violet 6B. The mean and standard deviation of three independent experiments are shown. A two-tailed t-test was performed to test for statistical significance, comparing the deletion mutant and the complementation strain (respectively) to the wild type, and the overexpression strain to the vector control strain (*P < 0.05; ***P < 0.001). (B-C) Swimming motility was examined after growth for 7 h on LB plates containing 0.3% agar. Shown are the mean values and standard deviation of normalized distances from at least three independent experiments. Within each experiment the cbpA deletion mutant was normalized against the wild type strain (B), the CbpA overexpression strain was normalized against the empty vector control strain (C), and the complementation strain was normalized against both the wild type and the vector control strain (B, C). A single star (*) symbolises p < 0.05 in a two-tailed paired t-test. (D) A motility assay plate displaying representative results for the experiment; the swimming zones are marked with circles for better visualization.
Most infections caused by B. cereus are gastrointestinal tract infections, but the bacterium is also known to cause a variety of other infections, particularly in immunocompromised patients (Drobniewski, 1993; Bottone, 2010). Collagen is the major component of connective tissue and the extracellular matrix in mammals, and is found in all organs including the gastrointestinal tract. The collagen content in the human intestine was found to be predominantly collagen I (68%), collagen III (20%) and collagen V (12%) (Graham et al., 1988), and collagen is located in the submucosa of the gastrointestinal wall, but also in the basement membrane which is attached to the overlying epithelium and contains collagen IV (Visser et al., 1993; Beaulieu et al., 1994). Collagen is also found in invertebrates including insects, where it is located under the epidermis and around organs including the midgut, and there is evidence which supports that insect collagens are similar to mammalian collagens I and IV (Francois et al., 1980; Francois, 1985; Lunstrum et al., 1988). The phylogenetic distribution of CbpA, with its predominance in phylogenetic group IV, may imply that this protein could be more abundant in B. cereus group strains isolated from non-clinical sources. Interestingly, however, all collagens that were used in the adhesion assay in this study originate from mammalian tissues, providing a potential for CbpA to contribute to attachment of B. cereus group strains to tissues also during human infection.

Also, while B. thuringiensis is often considered non-pathogenic for humans and is widely used as a biological pesticide (Jouzani et al., 2017), some studies have suggested that the bacterium may have pathogenic potential in humans (Damgaard et al., 1997; Hernandez et al., 1998; Helgason et al., 2000), and B. thuringiensis strains generally contain the same chromosomal virulence factors that are often associated with human infections caused by B. cereus. This includes the Hbl, Nhe and CytK cytotoxins (Ngamwongsatit et al., 2008; Guinebretiere et al., 2010), the function of which during B. cereus group infections is not completely elucidated. Yet these proteins are generally theorized to cause food-borne diarrhoeal disease by disruption of the epithelial layer, functioning as enterotoxins. Also, the insecticidal Cry toxins of B. thuringiensis function by forming pores in the epithelial cells in the insect midgut. One may therefore speculate that expression of collagen-binding proteins like CbpA could potentially facilitate attachment to any exposed collagen after disruption of the epithelial layer, and thereby assist in the infection process in different hosts.

The in vivo infection assay in the wax moth G. mellonella larvae did not reveal a significant decrease in mortality for the cbpA deletion strain. Quite possibly, this could be explained by genetic redundancy, as B. thuringiensis 407 contains two other proteins that are annotated as collagen adhesins, as well as an ortholog to B. anthracis BA0871 for which binding of collagen has been shown experimentally (Supplementary Table S3) (Xu et al., 2004). Although our results suggest that CbpA is not essential for infection of insects by B. thuringiensis, the protein could still facilitate adhesion during the colonisation of a host, and it is interesting to observe that a slight trend towards increased virulence was observed when the cbpA gene is expressed from the low copy plasmid pHT304-Pxyl in the complementation strain. Also, B. thuringiensis has been suggested to have the capacity to form biofilm in the insect gut (reviewed in Majed et al., 2016). In the previous work by Tang and co-workers (2016) it was reported that the CbpA ortholog (named Cap) from B. thuringiensis strain BMB171, although carrying a similar complement of putative collagen adhesion proteins to B. thuringiensis 407, indeed affected virulence towards the insect larvae Helicoverpa armigera. Here the in vivo assay was however run under substantially different experimental conditions, using young larvae and bacteria in the sporulation stage, with the effect measured over six days following free feeding. This is different from the G. mellonella analysis performed in the present study, with a one-shot dose of a controlled number of vegetative bacteria given to large larvae, possibly explaining the difference in apparent impact.
This study presents the first experimental proof that the extracellular matrix protein collagen is a ligand of the CbpA cell surface adhesin, and that CbpA-mediated adhesion is collagen-specific and not implicated in adhesion to fibrinogen or fibronectin, nor to tested abiotic surfaces. CbpA is present in a range of B. cereus group strains, and may constitute a novel surface protein potentially involved in adhesion processes during infection.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Fluconazole Prophylaxis in Neonates ( Non-Systematic ) Literature Review
Background: Nosocomial infection remains an important contributing factor for morbidity and mortality in neonates. Coagulase-negative staphylococci have emerged as the predominant pathogens of late onset sepsis. This is followed by staphylococcus aurous, gram negative bacilli, and fungi. Old studies noted that mortality due to candidemia was higher in infants weigh less than 2000 g after being exposed to risk factors. The prophylactic use of fluconazole for the prevention of IC in extremely low birth weight was first reported in 2001. Methods: Current guidelines from Europe and North America that refer to the treatment of fungal infections are included. Literature search was performed using Medline, Scopus and Cochrane Central Register of Controlled Trials through March, 2016. Conclusion: Mortality was not different in early studies. However, recent studies concluded that mortality was reduced in the fluconazole arms. Risk-based approach towards fluconazole prophylaxis seems to be safe and effective.
Introduction
Nosocomial infection remains an important contributing factor for morbidity and mortality. Neonatal sepsis is divided into two classifications: early onset and late onset sepsis. Early onset sepsis occurs within the first 72 hours of life, while late onset sepsis occurs after 72 hours of birth. The incidence of late onset sepsis is inversely related to birth weight and gestational age [1]. Coagulase-negative staphylococci have emerged as the predominant pathogens of late onset sepsis, followed by Staphylococcus aureus, Gram-negative bacilli, and fungi. Different risk factors were found to predispose neonates to invasive candidiasis. According to published data, colonization with Candida species increases the risk of candidemia (OR 5.1, 95% CI 1.01-25.6) [2]. Neonates in the NICU may be colonized with Candida species after birth through two different mechanisms: first, vertical transmission from maternal flora; second, horizontal transmission from the hands of health care workers [3]. Old studies noted that mortality due to candidemia was higher in infants weighing less than 2000 g after being exposed to risk factors [4]. The mechanisms that put preterm neonates at risk of fungal infections are multifactorial and include, but may not be limited to, immature immune cells, use of broad-spectrum antibiotics, and frequent breaches of the skin [5]. Studies found that Candida species colonizing the GI tract are identical to Candida species isolated from blood in patients with candidemia [6]. Haematogenous Candida meningoencephalitis (HCME) is a unique syndrome in preterm infants, in which Candida invades the central nervous system. This syndrome occurs in 15%-20% of patients with invasive candidiasis and may contribute to long-term neurodevelopmental abnormalities [7]. In addition, sepsis in general was found to be associated with increased neurodevelopmental impairment among survivors [6]. Candida albicans is by far the most common species colonizing the GI tract and causing invasive infection, followed by Candida parapsilosis [3] [8]. The European Society of Medical Infectious Diseases recommends fluconazole as the drug of choice in extremely low birth weight infants in centers where the incidence of invasive candidiasis (IC) is greater than or equal to 2%, while in centers where the incidence of IC is less than 2%, the decision should be made on a case-by-case basis and embedded in a risk stratification strategy [7]. On the contrary, the most recent guidelines from the Infectious Diseases Society of America recommended initiating fluconazole prophylaxis in centers where the incidence of IC is 10% or more. The prophylactic use of fluconazole for the prevention of IC in extremely low birth weight infants was first reported in 2001 [5].
Methods
Current guidelines from Europe and North America that refer to the treatment of fungal infections are included. A literature search was performed using Medline, Scopus and the Cochrane Central Register of Controlled Trials through March 2016 for reports on fluconazole prophylaxis in neonates, restricted to the English language. The following search string was used: [Fluconazole AND prophylaxis AND preterm AND neonates AND invasive candidiasis AND central venous catheters]. Only clinical trials and meta-analyses reporting on fluconazole prophylaxis in neonates were reviewed for analysis (Figure 1).
Rolnitsky et al. conducted a retrospective study to evaluate the efficacy of fluconazole prophylaxis in VLBW neonates. The incidence of candidiasis in their institution is low, so they developed a risk-based approach to initiate fluconazole in high-risk neonates. They considered ELBW, gestational age of less than 28 weeks, and broad-spectrum antibiotics as major criteria, and fluconazole was discontinued once the risk factor was no longer present. Central venous catheters, endotracheal intubation, and total parenteral nutrition were considered minor criteria, and a VLBW neonate would not receive fluconazole prophylaxis if only one risk factor was present (Table 2) [9]. Manzoni et al. conducted a randomized controlled trial to evaluate the prophylactic use of fluconazole in VLBW and ELBW preterm infants. The mean weight was 1065 +/− 280 g in the 6 mg/kg group, 1060 +/− 245 g in the 3 mg/kg group, and 1120 +/− 270 g in the placebo group. The mean gestational age was 28.9 +/− 2.3 weeks. The primary outcomes in this study were the incidence of colonization and the incidence of IC. Colonization occurred less frequently in the 6-mg group (9.8%) and the 3-mg group (7.7%) than in the placebo group (29.2%); P < 0.001 for both comparisons. Lastly, invasive fungal infection occurred in 2.7% in the 6-mg group and in 3.8% in the 3-mg group, as compared with 13.2% in the placebo group; P 0.005 and P 0.02, respectively (Table 2) [10]. Kirpal et al. conducted a randomized controlled trial to evaluate the safety and efficacy of fluconazole prophylaxis in VLBW (Table 1) neonates. The mean weight was 1250 +/− 0.36 in the fluconazole group and 1220 +/− in the control group. The primary outcome in this study was the development of IC (Table 1). All-cause mortality was considered a secondary outcome. The incidence of IC was significantly lower in the fluconazole group compared to placebo (21% versus 43.2%, 95% CI 0.09-0.37, P 0.04). All-cause mortality was also lower in the fluconazole group compared to placebo. A meta-analysis of trials from the United States that looked at the efficacy of fluconazole in preventing IC followed; 72% of the enrolled neonates had a birth weight of less than 750 g in the placebo group and 75.58% had a birth weight of less than 1000 g in the fluconazole group. The primary outcome of this meta-analysis was the composite endpoint of IC and death. The OR for the composite endpoint was 0.48, P < 0.003, and the OR for IC was 0.2, P < 0.001. The incidence of death was not significantly different between the placebo group and the fluconazole group (OR 0.68, P 0.14) [12].
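The odds ratios and 95% confidence intervals cited above follow from standard 2×2 contingency-table arithmetic (log odds ratio with a Wald interval). The sketch below shows the calculation with invented counts, not data taken from any of the trials discussed here.

```python
# Odds ratio and Wald 95% CI from a 2x2 table of IC events vs no events.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a/b: events/non-events in the treatment arm; c/d: events/non-events in the control arm."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# e.g. 4/46 neonates with IC on fluconazole vs 13/37 with IC on placebo (toy numbers)
or_, lower, upper = odds_ratio_ci(a=4, b=46, c=13, d=37)
print(f"OR = {or_:.2f} (95% CI {lower:.2f}-{upper:.2f})")
```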
Risk Factors for Invasive Candidiasis (IC) Infection
1) Very low birth weight and extremely low birth weight (Table 1).
Results
Fluconazole prophylaxis was effective in reducing the incidence of colonization and the incidence of IC in neonates with risk factors. The primary and secondary outcomes were not similar between studies (Table 2) [10] [11]. Some studies measured the difference in Candida colonization with or without fluconazole prophylaxis and found a significant reduction in the incidence of colonization, which translated into a reduction in the incidence of IC as well (Table 2) [5] [10] [13]. Only one trial reported the incidence of IC in its center and designed a risk-based approach for fluconazole eligibility [9]. Fluconazole was associated with a transient elevation in liver transaminases that returned to baseline after discontinuation [10] [11] [13].
Discussion
The literature of recent years supports the prophylactic use of fluconazole in ELBW and in VLBW preterm neonates with one or more risk factors (Table 1). A risk-based approach towards fluconazole prophylaxis appears to be safe and effective. Mortality was not different in early studies; however, recent studies concluded that mortality was reduced in the fluconazole arms [10]. In addition, given the morbidity associated with systemic fungal infections, prophylaxis may be warranted in institutions where the incidence of IC is at least 2%, continued until the risk factors are no longer present. By contrast, the most recent IDSA guidelines recommend fluconazole prophylaxis in centers where the incidence of IC is more than 10%. Neurodevelopmental complications were similar in the fluconazole group compared with placebo in patients who had already developed IC. The inclusion criteria in some trials included patients with VLBW and did not evaluate the presence of the other risk factors assessed in the earlier trials [11].
Two dose regimens were used in the clinical trials, and both showed similar outcomes:
• Fluconazole 3 - 6 mg/kg every third day for 2 weeks, then every other day for the third and fourth week, then daily during the fifth and sixth week.
• Fluconazole 3 - 6 mg/kg twice weekly for 6 weeks.
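As a minimal illustration of how the two schedules differ in practice, the following Python sketch enumerates the dosing days over a 6-week window for each regimen. It is not part of the reviewed trials; the day numbering and the particular days chosen for the twice-weekly regimen are assumptions made purely for illustration.

```python
# Sketch: enumerate dosing days (day 1..42) for the two prophylaxis schedules.
# Assumptions for illustration only: dosing starts on day 1, and the
# "twice weekly" regimen is taken on days 1 and 4 of each week.

def regimen_tapered():
    """Every 3rd day in weeks 1-2, every other day in weeks 3-4, daily in weeks 5-6."""
    days = []
    days += list(range(1, 15, 3))    # weeks 1-2: every third day
    days += list(range(15, 29, 2))   # weeks 3-4: every other day
    days += list(range(29, 43, 1))   # weeks 5-6: daily
    return days

def regimen_twice_weekly():
    """Two doses per week for 6 weeks (assumed here: days 1 and 4 of each week)."""
    return [week * 7 + offset for week in range(6) for offset in (1, 4)]

if __name__ == "__main__":
    print("Tapered regimen doses:", len(regimen_tapered()))
    print("Twice-weekly regimen doses:", len(regimen_twice_weekly()))
```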
This literature review has some limitations. First, trials that investigated the safety and efficacy of oral non-absorbable antifungal agents were excluded (Figure 1). Second, the search was restricted to English, which may have missed studies of fluconazole prophylaxis in neonates published in other languages. Finally, larger randomized trials are needed to evaluate the safety and efficacy of fluconazole in neonatal intensive care units where the incidence of IC is low or unknown.
Conclusion
Fluconazole prophylaxis in preterm neonates and/or neonates with ELBW appears to be safe and effective in reducing IC. It is important to remember that only intravenous fluconazole was tested in clinical trials, given for 6 weeks or for as long as risk factors remained present.
Conflict of Interest
Author declares that he has no conflict of interest.
Kaufman et al. conducted the first prospective study to evaluate the efficacy of fluconazole prophylaxis in preterm neonates. The birth weight at baseline was 717 +/− 150 g in the fluconazole group and 744 +/− 157 g in the placebo group. All neonates were preterm, with a mean gestational age of 25.5 +/− 1.6 weeks in the fluconazole group and 25.7 +/− 2 weeks in the placebo group. They were randomized to receive intravenous fluconazole 3 mg/kg every third day for 2 weeks, then every other day for the third and fourth week, then daily during the fifth and sixth week, versus placebo. The study was limited to the first 6 weeks of life of preterm neonates with extremely low birth weight, and fluconazole was discontinued once central venous catheters were removed and neonates were extubated. Unfortunately, the incidence of IC at their facility was not stated in this study, and it was not specified whether fluconazole would be continued if other risk factors were still present (Table 2) [5].
Figure 1. Studies included in review.
Table 2. Selected studies evaluating the prophylactic use of fluconazole in preterm neonates.
Fungal Infections in Intensive Care Unit: Challenges in Diagnosis and Management
Infections have almost become an inseparable part of the intensive care units throughout the globe in spite of numerous advancements in diagnostic and therapeutic interventions. With advances in critical care medicine and introduction of broad-spectrum antibiotics, the incidence of invasive fungal infections in intensive care is on the rise, especially in patients with immunosuppression. The aim of this review is to collect recent information about various types of invasive fungal infections prevalent in the intensive care unit, the problems in their diagnosis and recent trends in their management. A thorough literature search was made in PubMed and Google using the following keywords for our search: Invasive fungal infection, antifungal therapy in intensive care unit, candidiasis. The major fungi implicated worldwide are Candida and Aspergillus spp., followed by Cryptococcus, Histoplasma, etc., in endemic areas. These produce a wide variety of infections that are difficult to diagnose as most of the diagnosing tests are non-specific and the culture takes a long time. An early suspicion of fungal infection with institution of appropriate antifungal therapy is mandatory for a positive outcome and to prevent development of invasive fungal infection.
Introduction
Infections have almost become an inseparable part of the intensive care units throughout the globe in spite of numerous advancements in diagnostic and therapeutic interventions. The presence of infection in critically ill patients poses unique challenges as it can directly influence morbidity and mortality. Of the various infections prevalent in an intensive care unit, invasive fungal infection has always been considered to occur infrequently, but, over the past few years, with the surge in broad-spectrum antibiotic usage and improved knowledge of fungal diseases, the incidence has risen. At present, systemic fungal infections constitute a major problem in intensive care units in both developed and developing nations. However, intensivists in tropical developing countries like India face an uphill task during management of this ever-increasing menace of fungal infections. The fungi can survive in extremes of environment, but only 150-200 of over several thousand species of fungi are found to be pathogenic to humans. The incidence of systemic fungal infections is much higher in immunocompromised individuals such as organ-transplanted patients, patients with hematologic malignancies and HIV-infected patients. In the United States, the incidence of nosocomial fungal infections has increased from 2 to 3.8 per 1000 discharges. [1] Candida spp. was found to be the most common, accounting for 8-15% of nosocomial blood stream infections and the fourth most common isolate of patients of the intensive care unit. [2] Among the developing countries, the diverse climatic conditions in India also are well suited for various fungal infections. The incidences of candidemia, systemic aspergillosis, cryptococcosis and zygomycosis in India have shown a steep rise, with emergence of newer fungal infections like Apophysomyces elegans. [3][4][5][6] The literature search was made using the following text words and combinations: fungal infections in ICU, invasive fungal infections, antifungal therapy - recent trends. The PubMed search was made from the year 1989 till date. The references of relevant articles were cross checked and the articles describing various types of fungal infections in the intensive care unit and recent trends in treatment of fungal infections were included.
Common Fungal Infections in the Intensive Care Unit: Etiology and Clinical Scenario
The diagnosis of fungal infections in critically ill patients is an extremely difficult task as the symptoms are invariably masked by the presence of dominant primary pathology. The etiology of fungal infections can be broadly subdivided into systemic infections caused by true pathogenic fungi and infections caused by opportunistic saprophytic fungi.
True pathogenic fungi
Histoplasmosis
Epidemiology and clinical features
It is the most common among endemic mycosis, and is caused by a soil-dwelling dimorphic fungus, Histoplasma capsulatum. It is endemic in most parts of the United States and in many states in India. The portal of entry is respiratory tract, with severity of disease depending on the infecting dose. The spectrum of disease varies from mild pulmonary disease, severe pulmonary disease, chronic pulmonary disease and disseminated histoplasmosis. The chest radiograph may show multilobar infiltrates but is not diagnostic, while in patients with full-blown acquired immunodeficiency syndrome (AIDS), it may show predominantly reticulonodular infiltrates. [7]
Diagnosis
The diagnosis is by a high degree of suspicion, with a positive travel history to the endemic area, while the demonstration of Histoplasma antigen in urine, blood or bronchoalveolar lavage fluid of infected patients is diagnostic and rapid. If disseminated histoplasmosis is suspected, then a bone marrow biopsy may be helpful.
Management
Most infections are self-limited and require no specific treatment. The treatment of severe and disseminated disease involves administration of amphotericin B; both the conventional and the lipid formulations can be used. However, the lipid formulation is found to be less nephrotoxic. Following initial stabilization, oral itraconazole can be added twice daily for at least 12 months. [8] Monitoring of itraconazole serum levels has to be done every 2 weeks with monitoring of liver and renal functions.
Blastomycosis
Epidemiology and clinical features
The incidence of blastomycosis is much less than that of histoplasmosis, and the endemicity exhibit almost similar pattern as of histoplasmosis. [9] The mode of infection is through inhalation of infected particles; however, extrapulmonary involvement is much less as compared with histoplasmosis. The usual clinical presentation is a community-acquired pneumonia not responding to usual antibiotics. It is less common in HIV-infected patients, but, if present, produces widely disseminated disease often involving the meninges. Other sites of infections are skin, bones and genitourinary tract.
Diagnosis
The presumptive diagnosis is made by demonstration of characteristic appearance of the fungus on 10% KOH digest of the respiratory secretions or bronchoalveolar lavage fluid. [10,11] Serologic tests have low sensitivity and hence are seldom used, but an open lung biopsy of the lesion may sometimes be required. Definitive diagnosis requires growth of organism from a clinical specimen.
Management
The treatment for severe infection causing respiratory distress is amphotericin B (both conventional and liposomal), followed by oral itraconazole twice daily for 12 months after initial stabilization. The meningitis usually responds to amphotericin B with concurrent or sequential itraconazole or fluconazole.
Coccidioidomycosis
Epidemiology and clinical features
It is caused by the soil-dwelling dimorphic fungus Coccidiodes immitis, which is found to be endemic in the desert-like terrain of America. The portal of entry is by respiratory tract and the usual presentation is community-acquired pneumonia. The most dreaded complication is meningitis, which may or may not be preceded by an acute illness. The patient presents with increasing severity of headache with altered mental status, which may lead on to development of hydrocephalus. Occasionally, it may present as stroke syndrome due to vasculitis of cerebral vessels. [12,13]
Diagnosis
The diagnosis is by demonstration of fungus from culture of respiratory secretions or of cerebrospinal fluid, and is thus more difficult. The serologic tests are usually used to establish the diagnosis. The cerebrospinal fluid analysis in meningeal disease shows predominantly lymphocytic pleocytosis with occasional eosinophils.
Management
The treatment is often difficult as it is less susceptible to antifungal drugs as compared with histoplasmosis or blastomycosis. In severe and disseminated disease, amphotericin B (liposomal or conventional) is still used; however, oral fluconazole or itraconazole is considered the main drug for less-severe disease. There is, however, no consensus for treatment duration. [14] Invasive and disseminated disease in immunocompromised patients is treated with Amphotericin B till clinical improvement, followed by itraconazole or fluconazole for at least 1 year.
These fungal infections may or may not exhibit clinical symptomatology, but their occurrence greatly affect the prognosis and outcome. The most challenging aspect is the timely diagnosis, and by that time it almost causes a great damage to the cellular functions. Once diagnosed, it should be treated as an emergency that can substantially lower the morbidity and mortality among critically ill patients.
Opportunistic fungi
These opportunistic fungal infections pose numerous challenges and difficulties in the management of critically ill patients, especially those who need broad-spectrum antibiotics for their primary pathology. It involves fungi like Candida, Cryptococcus, Aspergillus and Zygomycetes, which are regularly associated with immunocompromised patients such as hematooncological patients, patients undergoing organ transplantation or patients with immunodeficiency syndromes.
Candida spp.
Epidemiology and clinical features
Invasive fungal infections with Candida spp. are the most common systemic fungal infections in the intensive care unit, accounting for 9% of all such infections in the United States. [15] The most common species implicated was C. albicans until recently, when the incidence of non-albicans Candida (NAC) has risen dramatically. [16] In India also, the isolates of NAC range from 52 to 96%, but the predominant species was found to be C. tropicalis instead of C. glabrata or C. parapsilosis in all age groups. [17][18][19] This increased incidence of NAC may be attributed to the increased use of fluconazole prophylaxis in immunocompromised patients, central venous cannulations and prior gastrointestinal surgery, and the mortality due to invasive candidiasis can range from 40 to 60%. [20] The various risk factors associated with the development of systemic candidial infection are summarised in Table 1. [21] Opportunistic candidial infections in patients with AIDS have been reduced due to the advent of highly active antiretroviral therapy (HAART) in developed nations, but it is still high in developing nations like India due to the high cost of such therapy. In intensive care units, catheter-related candida infections account for 30-80% of the proven or suspected cases of invasive candidiasis. [22]
Diagnostic methodologies
According to the European Organisation for Research and Treatment of Cancer/Mycoses Study Group, the diagnosis of candidiasis can be "definitive" or "probable." The demonstration of Candida in blood and histologic identification in tissues are considered to be definitive, but about 50% of these patients may show false-negative results, and the tissues may not be readily available in an intensive care setting. [23] The use of risk-factor-based prediction of invasive candidial infection has been described, which classified the patients as low risk or high risk depending on the presence or absence of four independent risk factors, namely total parenteral nutrition (1 point), multifocal colonisation sites (1 point), severe sepsis (2 points) and surgery (1 point). Accordingly, a "Candida Score" was calculated and a score of ≥3 predicted invasive candidial disease with a sensitivity of 81% and a specificity of 74%. [24,25] Laboratory tests can aid in increasing the accuracy of the above tests in diagnosing invasive candidiasis. The various laboratory tests involved are:
• Beta-D-Glucan assay: It detects b-D-glucan, an important constituent of the cell wall of pathogenic fungi. It cannot however differentiate between Candida and Aspergillus, and is not helpful in diagnosing infection with Cryptococcus and Zygomycetes. [26] False-positive results may be found in patients receiving beta lactams, albumin and immunoglobulins and patients on hemodialysis with cellulose membrane. A single positive test has low sensitivity for diagnosing invasive candidiasis, and serial measurements may be more sensitive.
• Polymerase chain reaction (PCR): It detects fungal nucleic acid and has been found to have a sensitivity of 90.9% with 100% specificity. [27] These results may be highly promising, but their exact utility in clinical settings is questionable.
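The scoring rule above lends itself to a compact calculation. The sketch below is an illustrative implementation of the point system as described in the text; the variable names are hypothetical, and the ≥3 threshold follows the cited studies.

```python
# Sketch of the "Candida Score" point system as described above.
# Variable names are illustrative; the threshold of >= 3 follows the text.

def candida_score(total_parenteral_nutrition: bool,
                  multifocal_colonisation: bool,
                  severe_sepsis: bool,
                  surgery: bool) -> int:
    """Return the Candida Score (0-5)."""
    score = 0
    score += 1 if total_parenteral_nutrition else 0
    score += 1 if multifocal_colonisation else 0
    score += 2 if severe_sepsis else 0
    score += 1 if surgery else 0
    return score

def high_risk(score: int) -> bool:
    """A score of >= 3 predicted invasive candidal disease (sensitivity 81%, specificity 74%)."""
    return score >= 3

if __name__ == "__main__":
    s = candida_score(total_parenteral_nutrition=True,
                      multifocal_colonisation=False,
                      severe_sepsis=True,
                      surgery=False)
    print(s, high_risk(s))  # -> 3 True
```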
Management strategies
• Pre-emptive treatment: It is considered in those patients who have a high risk of developing candidiasis but lack a definitive diagnosis of infection. Fluconazole in a dose of 400 mg to 800 mg per day is usually used, and this therapy has shown to significantly reduce the incidence of definitive candidiasis. [28] However, no improvement of outcome could be documented so far.
• Prophylactic treatment: The prophylactic use of fluconazole is recommended in patients receiving chemotherapy for hematologic malignancies who are expected to be neutropenic, in patients with solid organ transplantation and in patients undergoing bone marrow transplantation. This prophylaxis can also be used in non-neutropenic at-risk intensive care patients. [29] However, no benefit of such prophylaxis could be documented till now. [30]
• Definitive treatment: In patients with a definitive diagnosis of candidiasis, all catheters, including the central venous catheter, should be removed and invasive candidiasis such as endophthalmitis should be excluded.
In clinically stable patients, fluconazole is used in a dose of 400 mg to a maximum of 1600 mg per day for at least 2 weeks after the last positive blood culture unless azole resistance is suspected. Echinocandins (caspofungin, micafungin, anidulafungin) should be preferred if NAC such as C. glabrata or C. krusei are suspected. [29] In unstable and severely ill patients, amphotericin B (conventional or liposomal) alone or in combination with fluconazole, echinocandins (caspofungin, micafungin, anidulafungin), voriconazole or high-dose fluconazole (800 mg per day) may be used.
The duration of therapy should be 14 days after the first negative blood culture and clinical resolution of symptoms in patients with candidaemia. In patients with invasive candidiasis, antifungal treatment has to be given for a longer period of time until clinical and radiological resolution of the disease.
Aspergillosis
Epidemiology and clinical features
Aspergillosis is a serious infection in the intensive care unit, which is difficult to diagnose due to the lack of definitive diagnostic criteria. Majority of infections are caused by A. fumigatus, A. flavus and A. niger. The incidence in western countries was found to be about 15%, with a mortality rate of approximately 80%. [31] In India, A. flavus is the most common species, followed by A. fumigatus, and A. niger is the third most common species implicated.
The risk factors for development of invasive aspergillosis are similar to that of invasive candidiasis, with the predominance of immunosuppression in aspergillosis. Chronic obstructive pulmonary disease is considered to be an important risk factor for aspergillosis. [32] The portal of entry is mainly the respiratory tract, both lungs and sinuses. Aspergillosis can be acquired in the intensive care unit through an improperly cleaned ventilation system and contaminated water.
Diagnostic methodologies
The diagnosis of aspergillosis is often difficult, and the following methods are employed:
• Culture and histopathology: Identification of the fungus in culture and tissue specimens is the gold standard for diagnosing aspergillosis, but may not be feasible in the intensive care setting due to the long time required and the difficulty of obtaining tissue samples in these patients.
• Radiology: Computed tomography showing a characteristic "crescent" or "halo" sign is highly suggestive of aspergillosis, especially in neutropenic patients, but only in 5% of non-neutropenic patients. [33] However, there might be many confounding factors that can affect the radiological diagnosis in these patients.
• Galactomannan test: Galactomannan is an important constituent of the cell wall and can be detected in the serum or bronchoalveolar lavage fluid; however, bronchoalveolar lavage is considered to be more sensitive. [34]
• PCR: It detects the fungal nucleic acid and is sensitive when combined with other tests.
Combination of bronchoalveolar lavage fluid galactomannan test with the PCR greatly increases the detection of Aspergillus spp. Detection of aspergillus from patients without any risk factors should encourage further diagnostic workup.
Management strategies
Amphotericin B was initially used as the mainstay therapy, but, due to various adverse effects, voriconazole has now become the treatment of choice. The treatment for invasive aspergillosis can be described as:
• Primary therapy: Intravenous voriconazole at a dose of 6 mg/kg every 12 hours, followed 24 h later by 4 mg/kg every 12 hours. This should be followed by oral voriconazole 200 mg twice daily until clinical and radiological stabilization occurs. Alternatively, amphotericin B (liposomal) can be used intravenously in a dose of 3-5 mg/kg/day followed by oral voriconazole (200 mg twice daily) until clinical improvement.
• Salvage therapy: Echinocandins are utilized when the primary therapy fails. Intravenous caspofungin in a dose of 70 mg on Day 1, followed by 50 mg per day thereafter, is usually used; however, intravenous micafungin can also be used. Recently, oral posaconazole in a dose of 200 mg every 6 hours, followed by 400 mg twice daily, has been found to be equally effective. [35]
• Combination therapy: There has been interest in combining various antifungal agents for the treatment of invasive aspergillosis; the combination of liposomal amphotericin B with itraconazole, as well as voriconazole with caspofungin, has been studied and found to be equally effective. [36] However, further studies are required before strongly recommending any such therapies.
The therapy is usually continued for 6-12 weeks. The galactomannan test can be used as a marker of effectiveness of the therapy. The therapy is found to be more effective when the immunosuppression is reversed.
The treatment for other forms of aspergillus infection, such as allergic bronchopulmonary aspergillosis and aspergilloma is usually symptomatic, and surgical resection may be warranted.
Cryptococcosis
Epidemiology and clinical features
The most common organism implicated is C. neoformans, and the most common clinical feature is cryptococcal meningitis. It was considered rare before the emergence of AIDS, but now with the AIDS pandemic, its incidence has increased to about 5% in western countries while in Sub-Saharan Africa, its incidence reaches up to 30%.
The usual habitat of C. neoformans is in bird droppings. The pulmonary manifestations range from pulmonary nodules, interstitial pneumonitis, pleural effusions or adenopathy in immunocompetent individuals to ground glass opacities, consolidation and hilar lymphadenopathy in patients with AIDS. [37,38]
Diagnosis
Diagnosis is delayed due to non-specific symptomatology.
Cerebrospinal fluid analysis shows pleocytosis with predominant lymphocytes and elevated proteins with depressed glucose. Serological cryptococcal antigen test of blood and cerebrospinal fluid is indicated in central nervous infections, but are costly and are not preferred in developing countries. Lateral flow assay was introduced recently for diagnosis, and has shown to be equally effective to enzyme immune assay.
Demonstration of the fungus in cerebrospinal fluid culture is diagnostic.
Management
Treatment in immunocompetent patients includes fluconazole or itraconazole for mild pulmonary disease and intravenous amphotericin B with or without flucytosine for the initial 2 weeks followed by fluconazole or itraconazole for a further 10 weeks.
In immunocompromised patients, treatment of pulmonary disease consists of fluconazole or itraconazole continued for 6-12 months, followed by secondary prophylaxis, while for central nervous system infections the treatment consists of intravenous amphotericin B with flucytosine for the initial 2 weeks, followed by fluconazole for the next 8 weeks and then continued as secondary prophylaxis. The secondary prophylaxis consists of fluconazole, and can be continued until highly active antiretroviral therapy is instituted. [39]
Zygomycetes
Epidemiology and clinical features
Zygomycosis is another opportunistic nosocomial infection commonly found in patients with uncontrolled diabetes and other forms of metabolic acidosis, as well as in burns and hematological malignancies. The incidence is difficult to predict due to difficulties in antemortem diagnosis.
The common species implicated are Rhizopus arrhizus, R. microsporus, Lichtheimia corymbifera, etc. These produce spores that are commonly found in soil, dead organic matter and the hospital environment, and they may be found in dead necrotic tissues in the body.
Management
Treatment includes high-dose liposomal amphotericin B with or without posaconazole. Debridement or surgical removal of dead necrotic tissue may be required.
Pneumocystis
This is an important opportunistic fungus that causes pneumonia in immunocompromised patients, especially in patients with AIDS. Pneumonia is caused by Pneumocystis jiroveci, which is commonly present in the environment but is non-infectious in healthy individuals. [40] The infection usually starts with fever and non-productive cough, gradually progressing to shortness of breath and hypoxia. Pneumothorax is a well known complication and should be suspected in acute chest pain with breathlessness with unilateral reduced breath sounds. It mainly involves the interstitial fibrous tissue of the lungs, leading to thickening and impaired oxygenation.
Diagnosis
The diagnosis of P. jiroveci infection is usually made by characteristic appearance of widespread pulmonary infiltrates on chest radiograph in an immunocompromised patient. The diagnosis is confirmed by histological demonstration of organism in sputum or bronchoalveolar lavage fluid, by staining showing characteristic cysts. PCR can also be used to detect the DNA.
Treatment
Primary prophylaxis is indicated in patients with AIDS with CD4 cell counts less than 200 cells/µl, patients on chronic steroid therapy, patients with malignancy on cytotoxic drug therapy and patients with organ transplantation. Co-trimoxazole is the first choice drug while dapsone, atovaquone and aerosolized pentamidine may be used as second-line drugs. [41] The treatment of established infection is by co-trimoxazole in divided doses for at least 3 weeks. The other drugs that may be used as second choice are dapsone, atovaquone, clindamycin, primaquine and pentamidine. Adjunctive steroids are usually indicated in moderate to severe disease to prevent inflammation and worsening of symptoms due to the treatment of infection.
Conclusion
The recent advances in management of life-threatening infections in intensive care unit with advent of broad-spectrum antibiotics have greatly reduced mortality but have significantly increased the incidence of invasive fungal infections. These invasive fungal infections are often difficult to diagnose and treat in the intensive care setting. With the advances in antifungal therapy, the mortality and morbidity in intensive care can be greatly reduced.
Numerical simulation of hot soak in cabin based on ventilation strategy
To address the issue of excessive heat within the vehicle’s cabin, this study employs transient simulation methods to explore and analyze how various ventilation tactics and parameters influence the cabin’s temperature distribution and air quality. Findings indicate that the optimal thermal comfort and air quality conditions are achieved through the implementation of a top ventilation strategy. Specifically, with an air supply velocity of 3 m/s, a supply air temperature of 19°C, and an airflow direction of 15°, the air age at the driver’s breathing zone is measured at 18.92 seconds, while it stands at 20.35 seconds at the child passenger’s breathing zone. This ventilation setup achieves an air exchange efficiency of up to 80.1%, nearly complete pollutant removal efficiency, and places the thermal comfort at monitored human body points within a range deemed satisfactory or comfortable. Overall, this configuration yields the most favorable conditions for the comfort of the driver and passengers compared to other scenarios examined.
Introduction
The cabin of a car can experience serious hot soak problems in the summer because the air temperature rises rapidly under the effect of solar radiation [13]. This is a significant issue that affects cabin air quality, the driver's thermal comfort, health hazards and component longevity [2,14]; driving conditions are greatly impaired and it can even lead to traffic accidents.
To address the hot-soak problem in the cabin, AL-KAYIEM found that opening the front window of a car by a 2 cm slit in summer reduces heat accumulation in the car [1]. Ozeki et al. conclude that infrared-reflective glass can reduce air conditioning loads [11]. Ingersoll optimizes body shape, glass properties, ambient climate, and other factors in thermal comfort models [7]. Jung et al. optimized a recirculation system for air conditioning [9]. Jiangtao derives the maximum heat load on the driver of the car at a solar altitude angle of 63.51° [8]. Guo et al. simulated ventilation cooling, and the results show that increasing the air volume is favorable for cooling heat-soaked cars [6]. Deng et al. proposed a dynamic heat transfer characteristic law based on simulation, quantitative comparison and analysis, and proposed a more accurate heat transfer model. By increasing the solar radiation boundary parameter, Jingyu Wang showed that the air supply angle changes the flow field distribution inside the car [16].
Aiming at the heat-soak problem of the automobile cabin, the effects of various ventilation strategies and parameters in the cabin are investigated based on the transient simulation method, which provides a reference for practical engineering applications. It can provide a basis and guidance for the optimal design of air conditioning systems for automobile cabins and the prediction of the in-cabin air environment.
Methods
Computational fluid dynamics methods were used for the simulations. The Reynolds number in the cabin of an automobile is relatively small and the air can be regarded as steady-state turbulence; the flow of the fluid must obey the three conservation laws of fluid dynamics [4].
Mathematical model
In this research, the primary equations subjected to numerical analysis are the equation for conservation of mass, the equation for conservation of momentum, and the equation for conservation of energy.
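For reference, the standard forms of these conservation equations (the textbook formulation, not reproduced from this paper, and written without the turbulence-model terms) can be sketched as:

```latex
% Conservation of mass (continuity)
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) = 0
% Conservation of momentum
\frac{\partial (\rho \mathbf{u})}{\partial t} + \nabla \cdot (\rho \mathbf{u}\,\mathbf{u})
  = -\nabla p + \nabla \cdot \boldsymbol{\tau} + \rho \mathbf{g}
% Conservation of energy
\frac{\partial (\rho E)}{\partial t} + \nabla \cdot \big(\mathbf{u}(\rho E + p)\big)
  = \nabla \cdot (k \nabla T) + S_E
```

Here ρ, u, p, τ, E, k, T and S_E denote density, velocity, pressure, the viscous stress tensor, total specific energy, thermal conductivity, temperature and a volumetric heat source, respectively.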
Vehicle model
A three-dimensional (3D) model of the Volkswagen vehicle cabin was created at 1:1 scale, as shown in Figure 1, and the specific parameters are given in Table 1. Two dummies are placed in the cabin: one is placed in the front row as the driver, and one is a child dummy placed in the rear child safety seat. To enable precise data collection at particular locations within the cabin, the microenvironment surrounding the occupants was examined. This involved the study of seven measurement points located at the head, chest, L-arm, R-arm, abdomen, L-leg, and R-leg of both the driver and child passenger. The coordinates are detailed in Table 2.
Mesh segmentation and validation
To guarantee the precision of the computations while minimizing processing time, an optimal mesh size was determined that balances accuracy and efficiency. Additionally, a mesh independence verification was conducted to ensure that the chosen mesh configuration would provide a suitable simulation environment [4]. The overall grid cell size of the cabin is set to 25 mm, curvature capture and proximity capture are turned on, and all other settings are left at default values. To verify grid independence, meshes of 334,146, 928,484, 1,795,662 and 2,461,340 cells were generated. The velocity distributions predicted at various points were analyzed and presented in Figure 2. It was observed that beyond the 1,795,662-cell grid, increasing the grid cell count does not markedly influence the local velocity patterns. Thus, the 1,795,662-cell grid model was selected to optimize both accuracy and computational efficiency, as depicted in Figure 2. For subsequent simulations, the model comprises 337,204 nodes and 1,795,662 cells, with over 80% of the mesh quality exceeding a value of 0.8.
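A simple way to formalize such a grid-independence check is to compare the monitored quantity between successive refinements and stop once the relative change falls below a tolerance. The following Python sketch illustrates the idea; only the grid sizes come from the text, and the velocity values are hypothetical placeholders.

```python
# Sketch of a grid-independence check: accept the coarsest mesh whose
# monitored velocity changes by less than `tol` relative to the next refinement.
# Grid sizes are from the text; the velocities below are hypothetical (m/s).

grids = [334_146, 928_484, 1_795_662, 2_461_340]
monitored_velocity = [0.92, 0.87, 0.845, 0.842]

def select_grid(grids, values, tol=0.01):
    for i in range(len(grids) - 1):
        rel_change = abs(values[i + 1] - values[i]) / abs(values[i + 1])
        if rel_change < tol:
            return grids[i]
    return grids[-1]  # fall back to the finest grid if nothing converged

if __name__ == "__main__":
    print("Selected grid:", select_grid(grids, monitored_velocity))
```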
Boundary conditions and numerical settings
For the different requirements, suitable models are selected for the simulation; the chosen solution models and configurations for the cabin's airflow are detailed in Table 3. The automobile cabin is divided into six parts: the body frame, the windows, the dashboard/rear bulkhead, the ceiling/floor, the seats, and the driver (passenger); the corresponding settings are given in Table 4 [1]. As can be seen from Figure 3, the error is within 4.9%, which verifies the accuracy of the numerical calculation of air-conditioning cooling and the setting of the boundary conditions.
Effects of hot soak on the cabin
In this paper, a numerical simulation was designed to study the changes in the cabin with the front end facing west at 2:00 p.m. on August 1. To analyze the above simulation data, a 20-minute thermal soak simulation was conducted and the results are shown in Figure 4. As Figure 4 shows, the cabin temperature rises to a level that is unacceptable for drivers and passengers to enter.
Impact of three ventilation strategies
In response to the serious hot-soak problem, ventilation strategies need to be proposed. Fan raised the need to examine new strategies in terms of heat, air, and viruses. For the three ventilation strategies above, 21 ventilation conditions are designed according to the parameters of the experimental design; the specific settings of ventilation speed, ventilation temperature, ventilation angle and other parameters are shown in Table 5. Taking top ventilation as an example, it can be seen from Figure 7 that changing the ventilation angle helps the fresh air mix well in the cabin and effectively replaces the air in the corners. Therefore, later in this paper, cases with and without a ventilation angle are analyzed. It can be seen from Figure 8 that during hot soak the high-temperature region gradually grows, which raises the overall interior temperature of the cabin without beneficial effect, so the air supply temperature is not investigated further in the later sections.
Air age.
In order to further study and evaluate the indoor airflow condition and freshness, based on the velocity field analysis, several working conditions with more velocity streamlines and good air mixing are selected for analysis.
After preparing the air-age UDF file, it was loaded into FLUENT for setup and calculation to obtain the air-age distribution contours in the cabin. As can be seen in Figures 9 and 10, Cases 4 and 6 show that under the front ventilation strategy, the air age of the cabin on the driver XY section ranges from 30.6-51.2 s, and most of the air age on the driver ZX section ranges from 34.7-55.3 s. Cases 11 and 13 show that under the top ventilation strategy, the average air age on the driver XY section ranges from 10-30.6 s, and the majority of the air age on the driver ZX section ranges from 18.2-34.7 s. Cases 18 and 21 show that under the sidewall ventilation strategy, the average air age on the driver XY section ranges from 14.1-34.7 s, and the air age over most of the driver ZX section ranges from 26.5-47.1 s. As can be seen in Figures 11 and 12, Cases 4 and 6 show that under the front ventilation strategy, the air age on the child XY section ranges from 30.6-63.5 s, and most of the air age on the child ZX section ranges from 14.1-59.4 s. Cases 11 and 13 show that under the top ventilation strategy, the average air age on the child XY section ranges from 10-30.6 s, and the majority of the air age on the child ZX section ranges from 18.2-30.6 s. Cases 18 and 21 show that under the sidewall ventilation strategy, the average air age on the child XY section ranges from 26.5-47.1 s, and the air age over most of the child ZX section ranges from 18.2-38.8 s. As shown in Table 6, among the three ventilation strategies, the top ventilation strategy gives an air age of 23.97 s at the driver's breathing point and 13.41 s at the child passenger's breathing point for Case 11, and 18.92 s at the driver's breathing point and 20.35 s at the child passenger's breathing point for Case 13, which is superior to the other two ventilation strategies. In terms of air delivery angle, changing the air delivery angle for top ventilation decreased the air age of the driver and child sections, while for both front ventilation and sidewall ventilation it slightly increased the air age of the driver and child sections.
Ventilation efficiency.
Meanwhile, the results also suggest new requirements for the design and modification of ventilation methods. The air exchange efficiency η_a is a measure of air distribution quality in the cabin, mainly reflecting how well the airflow is organised: comparing the actual airflow pattern in the cabin with the ideal pattern reflects the scale and strength of recirculation in the cabin. The formula is as follows [15]:

η_a = τ_n / (2 τ̄_p) × 100%

where η_a represents the efficiency of air exchange via the ventilation approach, τ̄_p denotes the mean age of air within the cabin, and τ_n is the shortest duration required for air to traverse the cabin.
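As a quick illustration of this definition, the Python sketch below evaluates the air exchange efficiency for sample values. The numbers are hypothetical, chosen only so that the result lands near the 80.1% reported for the top-ventilation Case 11.

```python
# Air exchange efficiency: eta_a = tau_n / (2 * mean_age_of_air) * 100%.
# tau_n is the nominal (shortest) time for supply air to traverse the cabin.
# Sample values below are hypothetical (seconds).

def air_exchange_efficiency(tau_n: float, mean_age: float) -> float:
    """Return the air exchange efficiency in percent."""
    return tau_n / (2.0 * mean_age) * 100.0

if __name__ == "__main__":
    print(round(air_exchange_efficiency(tau_n=32.0, mean_age=20.0), 1))  # -> 80.0
```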
The air exchange efficiency is calculated from the simulation data of conditions 4, 11, and 18 for comparison.From Table 7, adjusting the angle for the front and sidewall ventilation strategies results in lower air exchange efficiency.The angle stays default, the front ventilation strategy has a somewhat lower air exchange efficiency than the top and sidewall ventilation strategies.In the top ventilation strategy case 11 , the air exchange efficiency is as high as 80.1%, which is the best air exchange strategy.
Contaminant removal efficiency.
The Contaminant Removal Efficiency (CRE) serves as a measure of how effectively a ventilation system can eliminate pollutants. Using the CH2O concentration in the vehicle as the measure, the removal efficiency is given by [3]:

CRE = (C_E − C_S) / (C_BZ − C_S)

where C_E represents the average pollutant concentration at the air outlet, C_S denotes the average pollutant concentration at the air intake, and C_BZ signifies the average pollutant concentration within the occupied space. A higher CRE indicates more efficient pollutant removal, reducing the need for fresh air and lowering the energy consumption of the corresponding air processing and distribution. As can be seen in Figure 13, adjusting the blowing angle to 15° led to a decrease in contaminant removal efficiency under the front ventilation strategy. Under the top ventilation strategy, adjusting the blowing angle produced no significant change in contaminant removal efficiency. Under the sidewall ventilation strategy, adjusting the air delivery angle resulted in a slight enhancement of the CRE. Among the three ventilation strategies, top ventilation and sidewall ventilation give better contaminant removal efficiency, while front ventilation may not remove pollutants in time when the blowing angle changes.
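The same definition is easy to evaluate directly; the example concentrations below are hypothetical and serve only to show the calculation.

```python
# Contaminant removal efficiency: CRE = (C_E - C_S) / (C_BZ - C_S),
# with outlet (C_E), supply (C_S) and occupied-zone (C_BZ) CH2O concentrations.
# Example concentrations are hypothetical, in mg/m^3.

def contaminant_removal_efficiency(c_outlet: float, c_supply: float, c_zone: float) -> float:
    return (c_outlet - c_supply) / (c_zone - c_supply)

if __name__ == "__main__":
    # A CRE of 1 corresponds to fully mixed conditions; larger values indicate
    # better removal of contaminant from the occupied zone.
    print(round(contaminant_removal_efficiency(0.10, 0.0, 0.102), 3))
```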
Thermal comfort.
The PMV-PPD model for thermal comfort proposed by Fanger is currently recognized and widely used internationally. PMV refers to the Predicted Mean Vote, which predicts the thermal sensation of the majority of the population in the same environment on a seven-level scale.
Because of individual differences in perception and inconsistent sensitivity, evaluations of the same environment can differ, so it is also necessary to predict satisfaction with the environment using the PPD (Predicted Percentage of Dissatisfied).
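For reference, PPD is usually computed from PMV with the standard Fanger/ISO 7730 relation, quoted here as background since the paper does not give the expression explicitly:

```latex
\mathrm{PPD} = 100 - 95\,\exp\!\left(-\left(0.03353\,\mathrm{PMV}^{4} + 0.2179\,\mathrm{PMV}^{2}\right)\right)
```

so that PPD reaches its minimum of 5% when PMV = 0, which is why a PMV close to zero indicates the most comfortable condition.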
Based on the data above, the monitoring-point temperature and velocity at each part of the driver and passenger are used to calculate the PMV-PPD of the human microenvironment in the cabin under the different ventilation strategies; the PMV-PPD distributions of the driver and passenger are shown in the figures below. From Figures 14 and 15, it can be seen that the driver's local microenvironment PMV is closer to 0 in Case 13 than in the other scenarios. From the PPD, it can be seen more intuitively that the driver's satisfaction level is best in Case 13.
Conclusion
(1) In Case 11, the air age at the driver's breathing zone is 23.97 seconds, while at the child passenger's breathing zone it is 13.41 seconds. In Case 13, the air age at the driver's breathing zone decreases to 18.92 seconds and at the child passenger's breathing zone it increases to 20.35 seconds. Altering the angle of air delivery with the top ventilation approach decreases the air age for both the driver and child passenger areas, while either the front ventilation or sidewall ventilation strategy slightly increases the air age for these sections.
(2) In the top ventilation strategy, the air exchange efficiency is as high as 80.1% when the angle of air supply is set at 15°, which represents the optimal strategy for air exchange.
(3) The top ventilation strategy and the sidewall ventilation strategy achieve better contaminant removal efficiency, while the front ventilation strategy may not remove pollutants in time when the blowing angle changes.
(4) With the top ventilation approach, the PMV-PPD levels at the monitored points on the human body fell within the basic comfort and satisfaction range when the air supply speed was 3 m/s, the air supply temperature 19°C, and the air supply angle 15°. Overall, this resulted in higher comfort and satisfaction among the cabin occupants than the other scenarios.
Figure 1. The 3D model of the cabin.
2.5. Model validation
Mao et al. confirmed that hydrodynamic simulations can precisely forecast conditions within the vehicle cabin during the different phases of automotive air conditioning systems [10]. The accuracy of the air-conditioning simulation is assessed with reference to the automobile cooling data reported by Sevilgen et al. [12]. The simulation conditions are set to be consistent with that study, and the simulation is carried out for the same duration to obtain the numerical temperature values, which are compared with the data of Sevilgen et al. as shown in Figure 3.
Figure 3. Comparison of experimental data with simulation experiments.
Figure 4. Surface temperature distribution of driver and passenger.
Figure 6. Driver XY cross-section air age cloud map. Using front ventilation as a case in point, one can deduce from Figure 6 that the excessively high air speed in Case 3 does not effectively lower the cabin temperature.
Figure 7.
Figure 8. Driver XY cross-section air age cloud map.
Figure 11. Air age cloud of XY section for child passenger.
Figure 12.
Figure 14. PMV of drivers and passengers with different ventilation strategies.
Figure 15.
Table 1. Main structural dimensions of the cabin.
Table 2. Coordinates of monitoring point locations.
Table 6. Air age at the breathing points of the driver and passenger.
Table 7. Air exchange efficiency for the different ventilation strategy cases.
Modelling and Bayesian analysis of the Abakaliki Smallpox Data
The celebrated Abakaliki smallpox data have appeared numerous times in the epidemic modelling literature, but in almost all cases only a specific subset of the data is considered. There is one previous analysis of the full data set, but this relies on approximation methods to derive a likelihood. The data themselves continue to be of interest due to concerns about the possible re-emergence of smallpox as a bioterrorism weapon. We present the first full Bayesian analysis using data-augmentation Markov chain Monte Carlo methods which avoid the need for likelihood approximations. Results include estimates of basic model parameters as well as reproduction numbers and the likely path of infection. Model assessment is carried out using simulation-based methods.
Introduction
In 1967, an outbreak of smallpox occurred in the Nigerian town of Abakaliki. The vast majority of cases were members of the Faith Tabernacle Church (FTC), a religious organisation whose members refused vaccination. A World Health Organization (WHO) report (Thompson and Foege , 1968) describes the outbreak, with information on not only the time series of case detections but also their place of dwelling (compound), vaccination status, and FTC membership. The outbreak has inherent historical interest as it occurred during the WHO smallpox eradication programme initiated in 1959. Although smallpox was declared eradicated in 1980, it regained attention as a potential bioterrorism weapon in the early 2000s (see e.g. Gani and Leach (2001), Meltzer et al. (2001) and Halloran et al. (2002)) 1 and continues to be of interest due to concerns about its re-emergence or synthesis (see e.g. Henderson and Isao (2014), Eto et al. (2015), WHO (2015) and references therein). Public health planning for potential future smallpox outbreaks requires estimates of the parameters governing disease transmission, and thus being able to accurately obtain such quantities from available data is of considerable importance.
Within mathematical infectious disease modelling, the Abakaliki smallpox data set has been frequently cited, the first appearance being Bailey and Thomas (1971). The data are almost always used to illustrate new data analysis methodology, but in virtually all cases most aspects of the data are ignored apart from the population of 120 FTC individuals and the case detection times, while the models used are not particularly appropriate for smallpox (see for example Becker (1976), Yip (1989), O'Neill and Roberts (1999), O'Neill and Becker (2001), Huggins et al. (2004), Boys and Giles (2007), Lau and Yip (2008), Clancy and O'Neill (2008), Kypraios (2009), Shanmugan (2011), Xiang and Neal (2014), McKinley et al. (2014), Golightly et al. (2014), Oh (2014), Xu et al. (2016) and references therein). Ray and Marzouk (2008) use a more realistic smallpox model and take account of the compounds where individuals lived, but again ignore all non-FTC individuals.
The main objective of this paper is to present a Bayesian analysis of the full data set. To our knowledge, the only previous analysis of the full Abakaliki data is in Eichner and Dietz (2003), where the authors define a stochastic individual-based transmission model that considers not only the case detection times but also the other aspects of the data. Their model takes account of the population structure, the disease progression for smallpox, the vaccination status of individuals and the introduction of control measures during the outbreak. The model parameters are then estimated by constructing and maximising a likelihood function which is itself constructed using various approximations. Specifically, the true likelihood of the observed data given the model parameters is practically intractable, since it involves integrating over all possible unobserved events, such as the times at which individuals become infected. Eichner and Dietz tackle this problem by first using a back-calculation method to approximate the distribution of unobserved event times for a given individual, and then by making various assumptions about independence between individuals in order to construct an approximate likelihood function.
An alternative solution to the intractable likelihood problem is to use data-augmentation methods to produce an analytically tractable (and correct) likelihood, which can then be incorporated in a Bayesian estimation framework by using Markov chain Monte Carlo (MCMC) methods along the lines described in O'Neill and Roberts (1999) and Gibson and Renshaw (1998). We adopt this approach to carry out a full Bayesian analysis of the Abakaliki smallpox data, whilst also assessing how well the Eichner and Dietz approximation method works in this setting. Our approach provides results which can be directly compared with those of Eichner and Dietz, specifically estimates of model parameters, estimates of associated quantities of interest such as reproduction numbers, and the sensitivity of the results to the disease progression assumptions. In addition, we also estimate quantities derived via data-augmentation, such as who-infected-whom and the time of infection for each individual, carry out various forms of model assessment to see how well the model fits the data, and explore particular aspects of the model via simulation. None of these additional elements feature in the Eichner and Dietz analysis.
The paper is structured as follows. In section 2 we describe the data, stochastic transmission model and method of inference. Section 3 contains results and details of sensitivity analysis and model-checking procedures. We finish with discussion in Section 4. The supplementary material contains details of some likelihood calculations and the MCMC algorithm.
Data, model and inference methods
The outbreak is described in detail in Thompson and Foege (1968) and Eichner and Dietz (2003). There were 32 cases in total, 30 of which were FTC members. All of the infected individuals lived in compounds, which were typically one-storey dwellings built around a central courtyard, and capable of housing several families. The FTC members frequently visited one another and were somewhat isolated from the rest of the community, which is one reason why most previous data analyses only consider FTC members. Although FTC members refused vaccination, many of them had been vaccinated prior to joining FTC as described below. Table 1 contains details of the 32 cases of smallpox recorded during the outbreak, specifying the date of onset of rash, compound identifier, FTC membership status and vaccination status; the data are taken from Thompson and Foege (1968). Note that we set a timescale by setting day zero of the outbreak to be the first onset of rash date. The composition of the affected compounds is provided in Table 2, where the total numbers of vaccinated and non-vaccinated FTC and non-FTC members within each compound are listed. Note that on day 25, four FTC individuals from compound 1 (three vaccinated and one non-vaccinated) moved to compound 2. In addition, quarantine measures were put in place in Abakaliki, but not until part way through the outbreak. The exact time these measures were introduced was not recorded. Since we do not have complete vaccination status for all compounds, we use i_4, i_5 and i_7 in Table 2 to allow for the different possible configurations. The total number of vaccinated individuals is known, so i_4 + i_5 + i_7 = 4. It is also known that i_4 ∈ {0, 1}, i_5 ∈ {0, 1, 2} and i_7 ∈ {1, 2, 3}. Note that Table 2 displays the compound composition after the move of the four individuals from compound 1 to compound 2 on day 25.
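The constraints on (i_4, i_5, i_7) can be enumerated directly; the short sketch below lists the feasible configurations, consistent with the five potential configurations considered later in the paper.

```python
# Enumerate the feasible vaccination-status configurations (i4, i5, i7)
# subject to i4 in {0,1}, i5 in {0,1,2}, i7 in {1,2,3} and i4 + i5 + i7 = 4.
from itertools import product

feasible = [(i4, i5, i7)
            for i4, i5, i7 in product(range(2), range(3), range(1, 4))
            if i4 + i5 + i7 == 4]

print(feasible)       # [(0, 1, 3), (0, 2, 2), (1, 0, 3), (1, 1, 2), (1, 2, 1)]
print(len(feasible))  # 5
```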
Stochastic transmission model
We suppose that the residents of Abakaliki form a closed population with N = 31,200 individuals, labelled 0, 1, ..., N − 1. Individuals 0, 1, ..., n_com − 1 are those inside the compounds, where n_com = 251 is the number of people within the compounds. Any individual k = 0, ..., N − 1 may be categorised as type (c_k, f_k), where (i) c_k = 1, ..., 9 is the compound of k, with c_k = 0 if k is outside the compounds, and (ii) f_k is k's confession, FTC or non-FTC. These types may lead to differences in the mixing behaviour of individuals, but otherwise individuals are considered to be identical in their susceptibility to smallpox and their ability to infect others. We now describe a stochastic disease-transmission model for the spread of smallpox throughout the population of Abakaliki. This model is essentially the same as that described in Eichner and Dietz (2003), and is a variant of an SEIR (Susceptible-Exposed-Infective-Removed) model. At any given time t each individual in the population will be in any one of six states, namely susceptible, exposed, with fever, with rash, quarantined or removed. For j = 0, ..., N − 1, let e_j, i_j, r_j, q_j, τ_j denote, respectively, the times of exposure, fever, rash, quarantine and recovery for individual j. Any susceptible individual may become exposed, as described below, at which point they enter an incubation (or latent) period. They next enter the fever stage of the disease, at which point they become infectious and may hence infect others. During the rash stage which follows, the individual remains infectious but with a potentially different level of infectivity. We define the infectious period to be the combined time spent in the fever and rash stages. Infectious individuals will either become removed (namely recovery or death; we do not distinguish these) or isolated, in which case the individual is quarantined and henceforth unable to infect others. Control measures, in which cases are placed into isolation soon after detection, are introduced part way through the outbreak at time t_q. We do not allow re-infection, so that individuals who have been infected cannot become susceptible again. The epidemic continues until there are no infectious or exposed individuals remaining in the population, at which point each person will either still be susceptible, or will have been quarantined/removed. The lengths of time spent in each stage of the disease for different individuals are assumed to be mutually independent random variables with specified distributions, the parameter values of which are assumed known. We adopt the assumptions of the Eichner and Dietz model, so that the incubation period, fever period and rash period all have gamma distributions with values as shown in Table 3. If quarantine measures have been introduced, then an individual may be put into isolation after a random delay following their rash onset date. Specifically, we define the quarantine time of individual j as q_j = max(r_j, t_q) + Γ(2, 2), where Γ(µ, σ) denotes a gamma-distributed random variable with mean µ and standard deviation σ. This means that no quarantining occurs prior to time t_q, after which it takes an average of two days for a detected individual to be placed in isolation.
Table 3: Durations of disease stages in the smallpox model. The time until quarantine changes after the introduction of control measures as described in the text.
Period before fever: mean µ_I = 11.6 days, standard deviation σ_I = 1.9 days.
From fever to rash: mean µ_F = 2.49 days, standard deviation σ_F = 0.88 days.
From rash until recovery: mean µ_R = 16.0 days, standard deviation σ_R = 2.83 days.
From rash to quarantine (or from t_q to quarantine): mean µ_Q = 2.0 days, standard deviation σ_Q = 2.00 days.
Note that both removal and quarantining of an individual are equivalent in the sense that both mean the individual can
no longer infect others, but we include both in the model for clarity, and also for comparison with the Eichner and Dietz model. Note also that an infected individual will have both a removal time and a quarantine time, and both quantities appear in the likelihood function as explained later. We assume that the epidemic is initiated by a single exposed individual, whom we label κ. The epidemic thus begins at time e_κ with the exposure of the initial infective κ. Recall that the infectious period is defined in two parts: the fever period and the rash period, during each of which an individual will be infectious, but at potentially different rates. During their rash period, an individual j will have contacts with other members of their compound who are of the same confession at times given by the points of a Poisson process of rate λ_h per day. Individuals outside of the nine compounds do not have such contacts. In addition, FTC individuals will have contacts at rate λ_f per day with other FTC individuals and contacts at rate λ_a per day with anybody (including FTC individuals) in the population. Non-FTC individuals are assumed to have contacts with anybody in the population at rate λ_a + λ_f per day. This assumption is made to ensure that all individuals have the same average number of contacts per day outside of their own compound. During the fever period, contacts occur in exactly the same manner except that all rates are multiplied by a factor b, to account for a potential difference in infectivity during the fever period. In each case, the individual actually contacted is chosen uniformly at random from the pool of potential individuals in question. For example, contacts made by an individual with the entire population are drawn from the N − 1 other individuals. Note that this means that the individual-to-individual contact rate for such contacts is λ_a/(N − 1). Any contact from an infective to a susceptible results in immediate exposure of the susceptible. All of the Poisson processes describing contacts are assumed to be mutually independent.
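As a concrete illustration of the disease-progression assumptions, the sketch below samples one individual's stage durations using the mean/standard-deviation parameterisation from Table 3 (converting to gamma shape and scale), together with the post-control quarantine delay. It is an illustrative sketch under those stated values, not the authors' code.

```python
# Sketch: sample disease-stage durations for one individual using the
# mean/sd values in Table 3, converting (mean, sd) to gamma (shape, scale).
# Illustrative only; parameter values mirror the text (mu_I, sigma_I, ...).
import random

def gamma_from_mean_sd(mean: float, sd: float) -> float:
    shape = (mean / sd) ** 2
    scale = sd ** 2 / mean
    return random.gammavariate(shape, scale)

def sample_individual(e: float, t_q: float):
    """Return exposure, fever, rash, removal and quarantine times for one case."""
    i = e + gamma_from_mean_sd(11.6, 1.9)       # incubation: exposure -> fever
    r = i + gamma_from_mean_sd(2.49, 0.88)      # fever -> rash
    tau = r + gamma_from_mean_sd(16.0, 2.83)    # rash -> recovery/removal
    q = max(r, t_q) + gamma_from_mean_sd(2.0, 2.0)  # quarantine after controls start
    return e, i, r, tau, q

if __name__ == "__main__":
    random.seed(1)
    print(sample_individual(e=0.0, t_q=25.0))
```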
In addition, a proportion of the population is vaccinated. The vaccination status of all but a few individuals within the compounds is known, and the proportions of FTC/non-FTC vaccinated individuals outside the compounds are assumed to be the same as inside.
However, vaccination is not necessarily effective: each recipient of the vaccine is completely protected with probability v, or remains completely susceptible with probability 1 − v. Although the total number of vaccinated individuals is known, we do not have complete information on the composition of individuals with respect to vaccination status and FTC membership (see Table 2). There are five potential configurations of twelve individuals with unknown details to consider. For each individual in the population we will have a vaccination status, which is assumed known for most individuals, and a protection status, which is unknown.
Reproduction numbers
Within mathematical epidemic modelling, the so-called basic reproduction number is of primary importance. It can be defined as the average number of cases caused by a typical index case in a large population, and its value typically governs certain aspects of the epidemic such as the final number of cases or the probability of an epidemic dying out rapidly. From a mathematical viewpoint, reproduction numbers for stochastic epidemics are typically defined by allowing the population size to become infinite in an appropriate sense. For models featuring structured populations with different kinds of contact rates, reproduction numbers can be defined in various ways. Eichner and Dietz consider two such reproduction numbers, R_0 and R_F. Here, R_0 is a reproduction number for an infected FTC member within the compounds, and can be interpreted as the average number of new infections such an individual would cause, under the assumption that the FTC population, compound populations and entire population are all large. Similarly, R_F describes the average number of new infections caused by contacts made during the fever period of an FTC individual within the compounds. As discussed later, one drawback with such definitions for the Abakaliki data is that the compounds themselves are not particularly large. However, for comparison purposes we shall also consider R_0 and R_F in our analysis.
Bayesian inference and MCMC
Our aim is to perform Bayesian inference for the unknown model parameters given the data, which consist of rash times for all infectives, knowledge of the population structure and the vaccination status of individuals. We will use an MCMC algorithm to obtain approximate samples from the posterior density of the model parameters, namely the infection rates λ_a, λ_f and λ_h, the vaccine efficacy v, the infectivity factor b and the time quarantine measures were introduced, t_q. Our approach involves data augmentation, specifically involving the exposure, fever, removal and quarantine times of each infected individual, and also the protection statuses and unknown vaccination statuses. First we derive an expression for the likelihood of the observed and augmented data. Let θ denote the collection of parameters that are assumed known, and let s = (s_0, s_1, ..., s_{N−1}), where for i = 0, 1, ..., N − 1, s_i is equal to 1 if individual i is vaccinated and 0 if not. These vaccination statuses are assumed known for the majority of individuals, with a small number of exceptions, as shown in Table 2.
We define e, i, q and τ as the unknown sets of exposure (not including the initial exposure e_κ), fever, quarantine and removal times, respectively. Similarly, we define r as the known set of rash times for all infectives. Then we define the augmented data γ to consist of these unobserved event times together with p = (p_0, p_1, ..., p_{N−1}), which contains the unknown protection status of each individual, specifically with p_i = 1 if individual i is protected, and p_i = 0 if they are not.
For an individual j = 0, ..., N − 1 who is susceptible at time t we define Λ_j(t) as the infectious pressure acting upon them at time t, so that P(j is exposed in (t, t + δt]) = Λ_j(t)δt + o(δt), whilst for an individual j who is no longer susceptible at time t we set Λ_j(t) = 0. From the model definition, if j is susceptible at time t then Λ_j(t) can be written as the sum of the individual-to-individual contact rates directed at j from all individuals who are infectious at time t.

We denote the likelihood of the data r given the model parameters Φ as π(r|Φ). This is practically intractable since its evaluation involves integrating over all possible unobserved events. We instead proceed by augmenting the data r with γ to obtain a tractable augmented likelihood, denoted (1), in which (i) for t ≥ e_κ, Λ(t) = Σ_j Λ_j(t) denotes the total pressure acting on all susceptible individuals at time t; (ii) Λ_j(e_j−) = lim_{t↑e_j} Λ_j(t) is the pressure on j just before their exposure; (iii) N_inf is the set of individuals who ever become infected; (iv) T is the end of the epidemic, i.e. the first time at which no infectives or exposed individuals remain in the population (in practice we set T equal to the final rash time); and (v) f_A, for A ∈ {I, F, R, Q}, is the probability density function of a Γ(µ_A, σ_A) distribution.

The augmented likelihood function in (1) is of a fairly standard form (see e.g. O'Neill and Becker (2001)) and contains the following components. The first product term accounts for the exposure of each of the observed cases and the exponential term gives the probability of individuals avoiding infection (either until they become infected, or throughout the entire epidemic). The second product term gives the likelihood of the times spent in each of the disease progression states for each individual who ever becomes infected. The final terms give the probability of the protection statuses for all individuals in the population.

One practical drawback with our data augmentation scheme as it stands is that it includes protection statuses p_i for all N = 31,200 individuals in the population. However, it is possible to integrate out these parameters for all individuals outside the compounds, essentially because the number of protected individuals follows a Binomial distribution. The calculations are fairly lengthy and so are provided in the supplementary material.
We set λ_a, λ_f and λ_h to have Γ(10^3, 10^6) prior distributions, which corresponds to vague prior assumptions for these parameters, set v, b and t_q to have uniform prior distributions on (0, 1), (0, ∞) and (0, ∞) respectively, and set κ to have a discrete uniform distribution over all infected individuals. Since θ is assumed known, π(θ) is just a point mass. Finally, π(s) consists of a point mass at the known vaccination statuses with a uniform distribution over the five possible configurations of twelve unknown vaccination statuses as shown in Table 2.
We use an MCMC algorithm to produce samples of the parameters of interest from the target posterior distribution, updating both the model parameters and the unknown event times as well as protection statuses and vaccination combinations. The algorithm is non-standard, and although it is similar in principle to that in O'Neill and Roberts (1999), in practice it is far more complex and involves considerable book-keeping. Full details of the algorithm are provided in the supplementary material.

Table 4 contains posterior summaries for the model parameters along with the corresponding maximum likelihood estimates from Eichner and Dietz's approximate likelihood method, and Figure 1 contains corresponding density plots, scatter plots and posterior correlation coefficients. Our estimates are fairly similar to those of Eichner and Dietz, and in particular the posterior modal values for the six basic model parameters are quite close. Although they represent different quantities, our posterior credible intervals and the confidence intervals of Eichner and Dietz are also quite similar. Our mean estimate of the basic reproduction number R_0 is 7.96, which is slightly higher than Eichner and Dietz's estimate of 6.87. Similarly, our estimate of the reproduction number for the fever period R_F is 0.53 compared to Eichner and Dietz's 0.164. In this case the difference can be explained by the highly skewed posterior density for b, so that the mean and mode are clearly different. The scatter plots and correlation coefficients suggest that the basic model parameters can be separately estimated from the data and that the model is not over-parameterised.
Who infected whom
Since the MCMC algorithm involves imputation of all event times, it is straightforward to obtain estimates of the path of infection, i.e. who infected whom. Specifically, if an individual j is subject to infectious pressure Λ_j(t) = Σ_{k=1}^{m} a_k(t) at the time of their exposure, where a_k(t) is the pressure from the kth of m infectives at time t, then the probability that individual k actually infected j is simply a_k(t)/Λ_j(t). Figure 2 shows the most likely infector for each observed case and also a greyscale plot which illustrates the associated uncertainty. We see that most infections occurred within compounds; note that individuals 7 and 8 moved from compound 1 to compound 2 during the outbreak and so in reality most of the infections caused by individual 8 were probably also within-compound. Most individuals give rise to one secondary case, but individuals 0 and 8 both cause multiple secondary cases. The greyscale plot shows that there is a modest degree of uncertainty around the identity of each infector.

Figure 3 illustrates the posterior distribution of the initial exposure time for each of the 32 cases. Generally speaking there is relatively little uncertainty, and most of the exposure times follow the ordering of the rash times, both features that are likely to be consequences of the distributions assumed for disease progression. The figure also shows the temporal aspects of the outbreak in terms of generations of infection: the first two generations (i.e. those infected by the index case, and those they infect) are clearly discernible, while the third and fourth generations are less distinct from each other, although (according to Figure 2) the fourth generation only contains two individuals. We see some groups of individuals with very similar exposure times who, according to Figure 2, are all infected by the same person; individuals 4, 5, 6 and 10, 11, 12 are two such examples. Such clustering, more akin to a point-source outbreak, illustrates the high transmission potential for smallpox in close-contact settings. Finally, we comment that Eichner and Dietz (2003) also provide a plot showing likely exposure times, along with other event times, but that this is based entirely on back-calculation using the assumed disease progression model. In particular, their plot takes no account of the transmission model itself.
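The infector probabilities described above amount to a simple normalisation of the individual pressures; a minimal sketch, using hypothetical pressure values, is given below.

```python
# Illustrative sketch: the probability that infective k was j's actual infector is the
# normalised pressure a_k / (a_1 + ... + a_m), evaluated just before j's exposure.
import numpy as np

def infector_probabilities(pressures):
    """pressures: the values a_k(e_j-) exerted on j by each currently infectious individual."""
    pressures = np.asarray(pressures, dtype=float)
    return pressures / pressures.sum()

# Example with three hypothetical infectives:
print(infector_probabilities([0.30, 0.05, 0.05]))   # -> [0.75  0.125 0.125]
```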
Sensitivity analysis
We now briefly explore the sensitivity of our results to the underlying model assumptions, and in particular the assumed values for the periods of time spent in each disease state. Figure 4 displays posterior densities for the parameters of interest over a range of values taken from Meltzer et al. (2001) and Gani and Leach (2001). As might be expected, when µ_R, the mean length of the rash period before removal, is reduced to give shorter average infectious periods, the estimates of the infection rate parameters increase to compensate. It is of interest to note that estimation of R_0 is somewhat sensitive to the choice of µ_R. This is likely to be a consequence of the relatively small number of cases, the population structure and the introduction of control measures, since in a large uninterrupted outbreak we would expect R_0 to be more or less determined by the outbreak size. Figure 5 shows the effect of varying µ_Q and σ_Q, the parameters which govern the time taken for a case to be quarantined. Specifically, we consider the effect of halving or doubling the mean time, while keeping the coefficient of variation fixed at unity. Here we see relatively little impact, which is reassuring since we have very little information on which to base our modelling assumptions, in contrast to those which depend on more generic features of smallpox.
Model assessment
In order to assess how well our model fits the data we use its posterior predictive distribution. Specifically, we take samples of the basic model parameters from the MCMC output and simulate the model forwards in time for each set of parameter values. We then compare various aspects of the observed data to the ensemble of simulations to see if the former aligns in some sense with the latter. We start with the final size of the epidemic, i.e. the total number of cases. Figure 6 shows that although the observed final size (32) is not untypical of those produced by the model, it is some way from the mean (23.5) and mode of the distribution. This underestimation appears to be largely due to the fact that in the actual outbreak, four individuals, of whom two were infected, moved from compound 1 to compound 2, leading to new cases in compound 2. To account for this, Figure 6 also shows a histogram of the final size distribution among those simulated epidemics in which at least one of the four moving individuals was infected. It can be seen that this adjustment gives a better fit to the observed final size.
We next consider epidemic duration, defined as the length of time between the first case detection (rash) time and the last. Figure 7 shows a histogram of the durations of 5000 simulated outbreaks. The mean duration is 76.8 days, which is very similar to the Abakaliki outbreak (76 days). Including only those outbreaks in which infected individuals moved compound only increases the mean by a few days.
We next compare the simulated cumulative number of cases with the Abakaliki data. This is complicated by the fact that different simulated outbreaks usually have different total numbers of cases, and so to facilitate the comparison we consider only simulated outbreaks that have the same total number of cases (32) as the data. Figure 8 shows the results of 1000 simulations, from which it appears that the observed data are reasonably well captured by the model behaviour. To quantify this more precisely, we calculated a posterior predictive p-value as follows. Recall the chi-squared discrepancy measure (see e.g. Gelman et al. (1996)), which here takes the form

    χ²(r, Φ) = Σ_{j=1}^{32} (r_j − E(r_j | Φ))² / Var(r_j | Φ),

where Φ denotes the model parameters and r = (r_1, ..., r_32) denotes a vector of case-detection (rash) times. Note that neither the mean nor the variance term is available analytically, and so in practice they are obtained via simulation: given Φ, we simulate epidemics repeatedly until we have a sample of size M_1, all with 32 cases. The mean and variance of the jth rash time are then estimated directly from this sample. Suppose now that we have M samples of Φ from the posterior distribution and, for each, a replicate data set obtained by repeatedly simulating epidemics until we obtain one with 32 cases. Denoting a typical simulation replicate r_rep and letting r_obs denote the observed rash times, the posterior predictive p-value is defined as

    ppp = P(χ²(r_rep, Φ) ≥ χ²(r_obs, Φ) | r_obs),

which we estimate by the proportion of the M replicates whose discrepancy exceeds that of the observed data. To interpret this quantity, note that if typical simulations are close to the observed data then we would expect the ppp-value to be around 0.5, while values close to 0 or 1 would indicate a poor model fit. We carried out this procedure with M_1 = M = 100 and obtained a value of 0.42, which is suggestive of a good model fit. A more accurate value could in principle be obtained using larger values of M and M_1, but the procedure is highly time-consuming in practice due to the fact that we require simulated epidemics of a given final size.
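The sketch below outlines how such a posterior predictive p-value could be computed in practice; it is a simplified illustration rather than the analysis code, and simulate_rash_times is a hypothetical helper that repeatedly simulates the model under a given parameter vector until an outbreak with exactly 32 cases is obtained.

```python
# Simplified sketch (not the analysis code) of the posterior predictive p-value calculation.
# `simulate_rash_times(phi)` is a hypothetical helper returning the 32 ordered rash times of
# one simulated outbreak with exactly 32 cases under parameters phi.
import numpy as np

def chi2_discrepancy(r, mean_r, var_r):
    """Chi-squared discrepancy between rash times r and their (estimated) means and variances."""
    r = np.asarray(r, dtype=float)
    return np.sum((r - mean_r) ** 2 / var_r)

def ppp_value(r_obs, posterior_phi_samples, simulate_rash_times, M1=100):
    exceed = 0
    for phi in posterior_phi_samples:                        # M posterior draws of the parameters
        sims = np.array([simulate_rash_times(phi) for _ in range(M1)])
        mean_r, var_r = sims.mean(axis=0), sims.var(axis=0)  # estimated mean/variance of each rash time
        r_rep = simulate_rash_times(phi)                     # one replicate data set for this draw
        if chi2_discrepancy(r_rep, mean_r, var_r) >= chi2_discrepancy(r_obs, mean_r, var_r):
            exceed += 1
    return exceed / len(posterior_phi_samples)
```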
As a final assessment of model fit, from 5000 simulations we found that on average 91% of those infected were FTC members, compared with 94% in the data, although we also found that around 20% of those infected were from outside the compounds, compared to no such individuals in the data.
Discussion
Transmission

The posterior estimates of the basic model parameters indicate clearly that the dominant mode of transmission was within-compound between individuals of the same confession. Our estimates of who-infected-whom align with this; for instance, from Figure 2, the vast majority of transmission events occurred within a compound in the most-likely infection pathway. The epidemiological investigation reported in Thompson and Foege (1968) found that spread within compounds, and within families in particular, appeared to drive the epidemic and that membership of FTC itself was not the primary transmission route. This is in agreement with our findings. As in Eichner and Dietz (2003), we found that infectiousness is markedly less during the fever period than the rash period, although our mean estimate of the reduction parameter b is larger than their maximum likelihood value, which is most likely due to the skewed shape of the marginal posterior density.
Reproduction numbers

Our posterior mean estimate of R_0 is close to 8. This is slightly larger than the Eichner and Dietz estimate (6.87) but underlines the potentially devastating nature of smallpox. Such values are radically different from those obtained using simpler models: for instance, assuming an SIR model in a homogeneously-mixing population of 120 FTC individuals typically results in R_0 estimates slightly larger than one, even allowing for the infection rate to vary with time (see e.g. O'Neill and Roberts (1999), Xu et al. (2016)). This highlights the importance of models which properly take population structure into account. As previously stated, R_0 can be interpreted as the average number of secondary cases produced by a single infective individual in a large susceptible population. For the Abakaliki data, such an interpretation is hard to apply directly since the compounds, wherein most transmission occurs, are small enough to provide a rapid saturation effect via the depletion of available susceptible individuals.
Model adequacy

Our model appears to fit the data reasonably well, with the possible caveat that the model invariably predicts cases occurring outside of the compounds. The fact that the entire population of Abakaliki is rather unrealistically modelled as a homogeneously mixing population goes some way to explaining this; in particular, the potential for contacts between those inside and outside the compounds, and especially between FTC members and those outside the compounds, could well have been rather less than that assumed in the model. According to Thompson and Foege (1968), the FTC community was largely isolated from the community at large, although several of its adult members were involved in trading activities in and around Abakaliki. Consequently, a model in which some fraction of FTC members had contact with the outside community might be more realistic, although there are no data to directly inform this. Another aspect that is missing from our model is that of age categories; Thompson and Foege (1968) state that the highest attack rates were among children. However, there do not appear to be sufficient data on compound composition to accurately incorporate age categories, and it seems likely that a model with age-specific transmission rates may be over-parameterised.
Control measures and the end of the outbreak

It seems likely that the advent of control measures at time t_q played a crucial role in bringing the outbreak to its conclusion rapidly. Under the model assumptions, control measures reduce the rash period from an average of 16 days to just 2 days, which in turn reduces the number of new infections. Interestingly, the posterior mean of R_0 after t_q (i.e. R_0 with µ_R = µ_Q = 2.0) is around 1.5, but this in itself is insufficient to permit further large-scale spread due to the depletion of susceptibles within the compounds, and the fact that the epidemic in the population outside the compounds is sub-critical (i.e. the basic reproduction number is less than 1). Expanding the latter point, if we define pre- and post-control-measure reproduction numbers for spread within compounds, FTC and the wider population (e.g. R_a = (µ_R + bµ_F)λ_a, etc.), then posterior mean estimates show that (i) within compounds, the epidemic is super-critical before and after t_q; (ii) within the FTC community, the epidemic switches from super- to sub-critical; (iii) in the wider population, the epidemic is always sub-critical. Despite this, increasing the value of t_q in simulations was found to increase the outbreak size; for instance, setting t_q to be 50, 100 and 200 gave mean outbreak sizes of around 24, 44 and 64, respectively. However, with no restrictions, we found the average outbreak size to be around 86, which underlines the fact that the epidemic was sub-critical in the wider population.
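A small illustration of the per-route reproduction numbers just defined is given below; the contact rates and the infectivity factor b are placeholder values rather than the fitted posterior means.

```python
# Illustrative sketch of the per-route reproduction numbers R_x = (mu_R + b * mu_F) * lambda_x
# before control measures and (mu_Q + b * mu_F) * lambda_x afterwards. The contact rates and
# the factor b below are placeholders, not the fitted posterior means.
mu_F, mu_R, mu_Q = 2.49, 16.0, 2.0        # mean fever, rash and post-control rash-to-isolation periods (days)
b = 0.2                                    # hypothetical fever-period infectivity factor
lam_h, lam_f, lam_a = 0.1, 0.05, 0.01      # hypothetical contact rates (per day)

def route_R(lam, rash_mean):
    """Average number of secondary cases generated via one contact route."""
    return (rash_mean + b * mu_F) * lam

for name, lam in [("within-compound", lam_h), ("FTC", lam_f), ("general population", lam_a)]:
    print(f"{name}: pre-control R = {route_R(lam, mu_R):.2f}, post-control R = {route_R(lam, mu_Q):.2f}")
```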
Accuracy of the Eichner and Dietz likelihood approximation

It is of interest to see that our results are fairly similar to those obtained by Eichner and Dietz (2003). The most plausible explanation for this is the fact that the distributions used for the length of time in each disease stage do not have particularly large variances, which in turn means that the model is not all that different to one in which all event times are assumed known. For such a model, the approximation method used by Eichner and Dietz gives the true likelihood, essentially because the distributions used to approximate uncertain event times collapse to point masses around the true values. A further point of interest is that the Eichner and Dietz approximation produces a likelihood function which is numerically but not analytically tractable, specifically because it involves integrals that must be evaluated numerically. Although this is sufficient for optimization purposes such as maximum likelihood, in practice such likelihood functions can be computationally prohibitive for use within MCMC algorithms since they must be repeatedly evaluated. It would therefore be of interest to develop analytically tractable approximate likelihood functions.
Selenophene Bearing Low Band Gap Conjugated Polymers: Tuning Optoelectronic Properties via Fluorene and Carbazole as Donor Moieties
Mustafa Yasa 1, Seza Goker 2 and Levent Toppare 1,2,3,4*
1 Department of Polymer Science and Technology, Middle East Technical University, 06800 Ankara, Turkey; mustafa.yasa@metu.edu.tr
2 Department of Chemistry, Middle East Technical University, 06800 Ankara, Turkey; sezagoker@gmail.com
3 Department of Biotechnology, Middle East Technical University, 06800 Ankara, Turkey; toppare@metu.edu.tr
4 The Center for Solar Energy Research and Applications, Middle East Technical University, 06800 Ankara, Turkey; toppare@metu.edu.tr
* Correspondence: toppare@metu.edu.tr; Tel.: +90-312-210-3251

Abstract: In this study, two donor-acceptor (D-A) type conjugated polymers, namely PQSeCz and PQSeFl, were designed and synthesized. Selenophene was incorporated as a π-bridge and quinoxaline as an acceptor unit, while carbazole and fluorene were used as the donor units. The polymers were synthesized via a palladium-catalyzed Suzuki polymerization reaction. All molecules were characterized by 1H and 13C NMR spectroscopy. The weight and number average molecular weights of the two polymers were determined by gel permeation chromatography (GPC). Electrochemical and spectroelectrochemical characterizations of the polymers were performed to investigate their optoelectronic properties. Oxidation potentials were 1.15 V/0.82 V and 1.11 V/0.82 V for PQSeCz and PQSeFl, respectively, while reduction potentials were -1.26 V/-1.14 V and -1.48 V/-1.23 V, respectively. In the visible region, the maximum absorption wavelengths of the two polymers were 551 nm and 560 nm, respectively. Optical band gaps (Eg op) were found from the lowest energy π-π* transition onsets as 1.71 eV and 1.58 eV, respectively. Both polymers showed good solubility in common solvents.
Introduction
Conjugated polymers have been recognized as one of the most important classes of materials for plastic electronics. Their advantage over other semiconductors lies in solution processability, structural modification, low cost and light weight. Owing to their ability to combine tunable electronic and optical properties, conjugated polymers are ideal semiconductors for future low-cost plastic electronics [1]. For many years, polymers were used as insulators until the discovery that polymers can be made conductive and that their conductivity can be increased substantially upon doping. Applications of conjugated polymers remained limited until the 2000s; synthesizing new polymeric materials with better conductivity and processability gave a new impulse to the field. Interest in conjugated polymers increased significantly after the discovery that their electrical conductivity can be increased upon oxidation [2]. Conjugated polymers have a large range of applications such as light emitting diodes, biosensors, batteries, and solar cells due to their flexibility, electronic properties, low cost, structural tunability, and ease of processing [3][4][5][6][7][8][9][10][11][12][13][14]. For conjugated polymers, the HOMO and LUMO energy levels and the magnitude of the band gap are the most important parameters defining the optoelectronic properties [15]. Band gap engineering can yield conjugated polymers with the desired electrical and optical properties. The preparation of low band-gap polymers mainly relies on two approaches: stabilization of the quinoid resonance structure and the donor-acceptor approach. Conjugated polymers have two resonance structures, aromatic and quinoid. Although it has a smaller band gap, the quinoid structure is less stable than the aromatic structure; adopting the quinoid structure requires destruction of aromaticity and hence loss of stabilization energy. Donor-acceptor (D-A) interaction is another approach for preparing low band gap conjugated polymers. The concept is to employ alternating electron-donating and electron-withdrawing units in the backbone of the conjugated polymer to reduce the band gap. Such conjugated polymers are called "D-A" polymers [16].
Among the many acceptor moieties utilized in the design of new conjugated polymers, quinoxaline stands out. Quinoxaline-based polymers are widely used in organic electronics as the acceptor comonomer, offering desirable properties such as strong electron-withdrawing capability, simple preparation and easy side-chain modification. The ease with which the morphology can be controlled by introducing new substituents makes the quinoxaline unit highly attractive, and it performs well in the preparation of low band gap conjugated polymers. The energy levels and band gap of quinoxaline-bearing conjugated polymers can be tuned because quinoxaline contains an electron-deficient N-heterocycle. Furthermore, aromatic groups can be introduced onto this core unit, which both red-shifts the absorption spectra of the conjugated polymers and allows the solubility of the polymers to be tuned through chemical modification such as introducing alkyl chains. Conjugated polymers bearing quinoxaline units as the acceptor have low band gaps with a deep highest occupied molecular orbital (HOMO) and high solubility [17][18][19][20][21].
Carbazole is one of the most popular moieties used in organic semiconductor devices such as OLEDs, solar cells, and non-linear optical (NLO) materials, either as a main core or as a substituent, because of its excellent hole-transporting capacity, high charge carrier mobility and formation of stable radical cations, coupled with favorable thermal, morphological and photophysical properties [22]. Fluorene, on the other hand, has been used extensively in optoelectronic devices owing to its synthetic versatility at the aromatic 2,7 and C-9 positions, along with its thermal and chemical stability [23].
Materials
All chemicals were purchased from Aldrich except THF, which was purchased from Acros.
The solvent was removed under reduced pressure and the crude product was dissolved in dichloromethane and washed with water and brine several times. Na2SO4 was used to dry the organic layer. The solvent was removed under reduced pressure and the product was recrystallized from methanol. A milky brown solid was obtained. Yield: 9.9 g, 81%. Tributyl(selenophen-2-yl) stannane was synthesized according to the literature [37].
Selenophene (5.0 g, 38 mmol) was dissolved in anhydrous tetrahydrofuran (70 mL) in a two-neck flask under argon atmosphere. After the solution was cooled to -78 °C, n-butyl lithium (15.3 mL, 38.2 mmol, 2.5 M in hexane) was added dropwise. Then, tributyltin chloride (11 mL, 41 mmol) was added dropwise to the solution. The temperature was maintained at -78 °C for 4 h and then the reaction mixture was stirred overnight at room temperature. After evaporation of the solvent, the crude product was dissolved in dichloromethane and the organic phase was washed with NaHCO3, water and brine. The organic layer was dried over anhydrous Na2SO4 and the solvent was removed using a rotary evaporator to afford the product as a pale yellow oil. Yield: 14 g, 95%.

For the polymer work-up, the solvent was evaporated under reduced pressure. Cold methanol was added to the crude product. Sodium diethyldithiocarbamate trihydrate was added to the solution as a Pd scavenger and the solution was stirred for 1.5 h. Then, the polymer was filtered through a Soxhlet thimble and washed with acetone and hexane to remove oligomers. The polymer was recovered with chloroform. The corresponding polymer was obtained as a dark green solid. Yield: 145 mg, 52%.
Characterizations
In order to determine the structures and purity of the synthesized molecules and polymers, a Bruker Avance DPX 400 NMR spectrometer was used to obtain 1H and 13C NMR spectra in CDCl3. Chemical shifts were recorded in ppm with respect to the tetramethylsilane internal reference.
Cyclic voltammetry (CV) was used to investigate the redox properties of the synthesized polymers. A three-electrode system was employed, with silver and platinum wires as the reference and counter electrodes, respectively, and an indium tin oxide (ITO) coated glass substrate as the working electrode, in an electrolyte containing 0.1 M tetrabutylammonium hexafluorophosphate in acetonitrile. Highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) levels were determined with respect to the normal hydrogen electrode (NHE), taken as -4.75 eV in vacuum.
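For reference, the conversion implied by this choice of reference level is sketched below; the onset potentials shown are hypothetical placeholders, since only the peak doping/dedoping potentials are quoted in the text.

```python
# Illustrative sketch (assumed) of converting CV onsets (vs NHE) into frontier-orbital energies,
# using the NHE level of -4.75 eV below vacuum quoted above. The onset potentials here are
# hypothetical placeholders for illustration only.
def homo_from_oxidation_onset(e_ox_onset_v):
    return -(e_ox_onset_v + 4.75)          # eV

def lumo_from_reduction_onset(e_red_onset_v):
    return -(e_red_onset_v + 4.75)         # eV

e_ox_onset, e_red_onset = 0.80, -1.10      # hypothetical onset potentials (V vs NHE)
homo = homo_from_oxidation_onset(e_ox_onset)    # -5.55 eV
lumo = lumo_from_reduction_onset(e_red_onset)   # -3.65 eV
print(f"HOMO = {homo:.2f} eV, LUMO = {lumo:.2f} eV, electrochemical gap = {lumo - homo:.2f} eV")
```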
Electrochemical Properties
Cyclic voltammetry is a practical and useful technique that can be used to investigate the redox behaviors of polymers. For this purpose, the chemically synthesized polymers (PQSeCz and PQSeFl) were dissolved in chloroform (5 mg/mL) and coated onto ITO-coated glass slides via spray coating.
The prepared ITO films were used for both electrochemical and spectroelectrochemical characterizations. The cyclic voltammograms of the polymer films were recorded in 0.1 M tetrabutylammonium hexafluorophosphate (TBAPF6)/acetonitrile (ACN) solutions versus an Ag wire pseudo-reference electrode, in the potential range between -1.8 V and 1.5 V for PQSeCz and between -1.7 V and 1.4 V for PQSeFl at a scan rate of 100 mV/s. In addition to redox behaviors, CV can be used to explore the doping character and the HOMO and LUMO energy levels of the corresponding copolymers. As seen in Fig. 1, both polymers have ambipolar character. Polymers with ambipolar character show both p-type and n-type doping behavior, which makes them good candidates for applications such as batteries, supercapacitors, and light-emitting diodes. The potential at which the polymer is oxidized in the anodic region is the p-doping potential and the reverse peak corresponds to the dedoping process, whereas n-doping is the reduction peak in the cathodic region and the reverse peak again corresponds to dedoping. The oxidation (doping/dedoping) potentials were 1.15 V/0.82 V for PQSeCz and 1.11 V/0.82 V for PQSeFl in the positive potential region. Other important parameters for the characterization of conducting polymers are the HOMO/LUMO energy levels, which strongly affect their fields of application. HOMO energy levels were calculated as -5.54 eV for PQSeCz and -5.58 eV for PQSeFl. Organic anions are mostly unstable and reduction potentials are rarely obtained; for our polymers, however, n-doping was clearly observed in the CV studies. The reduction (doping/dedoping) potentials were -1.26 V/-1.14 V for PQSeCz and -1.48 V/-1.23 V for PQSeFl in the negative potential region. Moon and coworkers synthesized carbazole- and quinoxaline-based polymers with alkyl side chains and thiophene as the π-bridge in the polymer backbone; the polymers with different solubilizing side chains were not n-dopable in that study.
Replacing thiophene with selenophene enabled us to obtain a polymer (PQSeCz) with a lower band gap and n-dopable character [34]. In this study, two different donor units (carbazole and fluorene) were inserted into the polymer backbones in order to explore the effect of these units on the electrochemical and optical properties. The polymers have very similar oxidation potentials and electronic band gaps, which may be attributed to their molecular weights. Optoelectronic properties and device performance of polymers improve with increasing molecular weight. Fréchet and co-workers revealed that high molecular weight polymers gave enhanced interconnectivity with high charge carrier mobility, and studies have shown that the molecular weight of conjugated polymers can affect the effective conjugation length and thus alter their optical and electrical properties [35]. Polymers with different electron densities therefore did not give the expected results in terms of their redox behaviors and HOMO-LUMO energy levels. The carbazole-based polymer has a higher molecular weight than its fluorene derivative, which could be attributed to its fully aromatic structure with stronger chemical and air stability compared to fluorene [36]. Due to this air instability, we could not obtain a high molecular weight polymer with fluorene as the donor, and this may lead to slight changes in the electronic properties of the polymers.
Optical Properties
To explore the optical properties of the conducting polymers, namely λmax and the optical band gap, spectroelectrochemical studies were performed. In situ UV-Vis-NIR spectra were monitored in 0.1 M TBAPF6/ACN solution to investigate the spectral response of the polymers to the doping processes. The polymers were dissolved in chloroform and spray coated onto ITO substrates until homogeneous films were obtained. In situ spectroelectrochemical analyses were performed by incrementally increasing the applied potential between 0.0 and 1.15 V for PQSeCz and between 0.0 and 1.20 V for PQSeFl. These potentials were obtained from the cyclic voltammetry studies, and the electronic absorption spectra are reported in Fig. 2. Recording the neutral film absorptions is crucial for calculating λmax and Eg op, which are important parameters for several applications. While the neutral film absorptions in the visible region were depleted, new transitions arose in the NIR region during stepwise oxidation, proving the formation of charge carriers on the polymer backbone, namely polarons (radical cations) and bipolarons (dications).
As seen in Fig. 2, the maximum absorption wavelengths in the visible region were centered at 551 nm for PQSeCz and 560 nm for PQSeFl. Optical band gaps (Eg op) of the polymers were calculated from the lowest energy π-π* transition onsets and found to be 1.71 eV and 1.58 eV, respectively. The optical properties of the polymers are summarized in Table 2. The synthesized polymers are electrochromic and both reveal different colors in their neutral and doped states.
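The relation between the absorption onset and the optical band gap quoted above can be illustrated with a one-line conversion; the onset wavelengths used below are back-calculated from the reported band gaps and are shown for illustration only.

```python
# Illustrative conversion between the absorption onset and the optical band gap via
# E (eV) = 1240 / lambda_onset (nm); the onset wavelengths below are back-calculated
# from the reported gaps and are not measured values.
def optical_band_gap_ev(lambda_onset_nm):
    return 1240.0 / lambda_onset_nm        # hc/e is approximately 1240 eV nm

print(round(optical_band_gap_ev(725), 2))  # ~1.71 eV, consistent with PQSeCz
print(round(optical_band_gap_ev(785), 2))  # ~1.58 eV, consistent with PQSeFl
```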
Conclusions
Palladium-catalyzed Suzuki polymerization was performed to obtain two donor-acceptor type alternating copolymers. Fluorene and carbazole were selected as the electron donor moieties, whereas quinoxaline was used as the acceptor in the polymer backbone. Selenophene was introduced into the polymer to decrease the steric hindrance between donor and acceptor groups which may arise from the solubilizing alkyl chains. Moreover, selenophene can provide improved planarity, enhanced effective conjugation length, and lower band-gap energy due to its relatively lower aromaticity. Since selenophene-bearing monomers have lower solubility than their thiophene-based counterparts, the polymers were synthesized with lower Mn. Due to the air instability of fluorene, a high molecular weight polymer could not be obtained for the one with fluorene as the donor, which may lead to slight changes in the electronic properties of the polymers compared with the carbazole derivative.
PQSeFl possesses a grayish transparent color in its doped state; hence, it can be used as an active layer in an electrochromic device.
Figure 3. Colors of (a) PQSeCz and (b) PQSeFl in neutral and oxidized states.
Table 1. Comparison of the electronic properties of the two polymers.
Table 2. Comparison of the optical properties of the polymers.
Ipsen 5i is a Novel Potent Pharmacoperone for Intracellularly Retained Melanocortin-4 Receptor Mutants
Inactivating mutations of the melanocortin-4 receptor (MC4R) cause early-onset severe obesity in humans. Comprehensive functional studies show that most of the inactivating mutants of the MC4R are retained intracellularly. In the present study, we investigated whether a small molecule inverse agonist of the MC4R, Ipsen 5i, could act as a pharmacoperone and correct the cell surface expression and function of intracellularly retained mutant MC4Rs using multiple cell lines, including HEK293 and two neuronal cell lines. We showed that Ipsen 5i rescued the cell surface expression of all 11 intracellularly retained mutant MC4Rs studied herein in at least one cell line. Ipsen 5i functionally rescued seven mutants in all cell lines used. One mutant (Y157S) was functionally rescued in HEK293 cells but not in the two neuronal cell lines. Ipsen 5i increased cell surface expression of three mutants (S58C, G98R, and F261S) but did not affect signaling. Ipsen 5i had no effect on mutant MC4Rs with other defects (Δ88-92, D90N, I102S) or no defect (N274S). It also did not affect trafficking of a misrouted MC3R mutant (I335S). Cell impermeable peptide ligands of the MC4R or cell permeable small molecule ligand of δ opioid receptor could not rescue misrouted mutant MC4R. In summary, we demonstrated that Ipsen 5i was a novel potent pharmacoperone of the MC4R, correcting trafficking and signaling of a significant portion (73%) of intracellularly retained mutants. Additional studies are needed to demonstrate its in vivo efficacy.
INTRODUCTION
The melanocortin-4 receptor (MC4R) is a G protein-coupled receptor (GPCR) that is widely expressed in the central nervous system including the cortex, thalamus, hypothalamus, hippocampus, brainstem, and spinal cord (1,2). The MC4R plays a vital role in the leptin-melanocortin pathway in regulating energy homeostasis, affecting both energy intake and expenditure (3,4). Tissue-specific knockout studies revealed that the MC4R expressed in the paraventricular nucleus and/or amygdala neurons regulates food intake (5) whereas the MC4R expressed in the cholinergic neurons regulates energy expenditure and hepatic glucose production (6). Inactivating mutations of the MC4R cause early-onset severe obesity (7)(8)(9), which is the most common monogenic form of obesity in humans (10).
Most of the inactivating MC4R mutants are misfolded and trapped intracellularly by the stringent endoplasmic reticulum (ER) quality control system (11)(12)(13). These mutant MC4Rs may only have a minor folding defect but retain pharmacological function. If they are escorted onto the cell surface, they can potentially bind the ligand and initiate signaling. Several studies have attempted to promote the anterograde trafficking of these mutant MC4Rs.

Abbreviations: DMSO, dimethyl sulfoxide; ER, endoplasmic reticulum; GPCR, G protein-coupled receptor; HEK, human embryonic kidney; MC3R, melanocortin-3 receptor; MC4R, melanocortin-4 receptor; MSH, melanocyte stimulating hormone; WT, wild type.
The therapeutic potential of molecular and chemical chaperones is limited due to disruption of proteostasis or significant side effects, whereas the pharmacoperone approach is promising. It has been tested in numerous human diseases caused by misfolded proteins, including neurodegenerative diseases, cystic fibrosis, lysosomal storage diseases, and cancer, with several promising clinical trials underway [reviewed in Ref. (19)(20)(21)]. Misfolding is also the most common defect in diseases caused by mutations in GPCR genes (22). Pharmacoperones have also been identified for several GPCRs, including rhodopsin, the V2 arginine vasopressin receptor, the gonadotropin-releasing hormone receptor, the calcium-sensing receptor, and others [reviewed in Ref. (21)]. For the MC4R, we and others reported a few molecules that act as pharmacoperones (13,(16)(17)(18). These compounds have low affinities for the MC4R and therefore usually need high concentrations (10 −6 M and higher) to achieve any rescue.
Ipsen 5i was synthesized and identified as a high-affinity antagonist and partial inverse agonist of the MC4R competing with [Nle4,D-Phe7]-α-melanocyte stimulating hormone (NDP-MSH) for binding to the MC4R (23)(24)(25). We reported recently that although it decreases basal signaling at the classical Gs-cAMP pathway, it acts as an agonist in the mitogen-activated protein kinase pathway (26). In this study, we investigated whether Ipsen 5i could act as a pharmacoperone promoting the proper folding and trafficking of intracellularly retained mutant MC4Rs using multiple cell lines. A total of 15 mutants were studied, including 11 (S58C, N62S, I69R, P78L, C84R, G98R, Y157S, W174C, P260Q, F261S, and C271Y) that are retained intracellularly and four (∆88-92, D90N, I102S, and N274S) that are expressed relatively normally at the cell surface with other or no defects.
CELL CULTURE AND TRANSFECTION
Human embryonic kidney (HEK) 293, Neuro2a, and N1E-115 cells were purchased from American Type Culture Collection (Manassas, VA, USA) and cultured in Dulbecco's modified Eagle's medium supplemented with 10% newborn calf serum (HEK293 cells) or 10% fetal bovine serum (Neuro2a and N1E-115 cells) at 37°C. HEK293 cells were stably transfected using the calcium phosphate precipitation method for transfection and 0.2 mg/ml G418 for selection. Neuro2a and N1E-115 cells were transiently transfected using jetPRIME transfection reagent (Polyplus-transfection, New York, NY, USA) and approximately 24 h later were used for ligand treatment. Cells were treated with indicated concentrations of ligands or 0.1% dimethyl sulfoxide (DMSO) as control for 24 h at 37°C. All cell culture plates were pretreated with 0.1% gelatin before cell plating unless noted otherwise.
CONFOCAL MICROSCOPY
HEK293 stable cells seeded into poly-d-lysine-coated 8-well slides (Biocoat Cellware from Falcon, B&D Systems, Franklin Lakes, NJ, USA) were treated with 0.1% DMSO or 10 −6 M Ipsen 5i for 24 h. On the day of experiment, cells were washed with phosphate buffered saline for immunohistochemistry (PBS-IH, 137 mM NaCl, 2.7 mM KCl, 1.4 mM KH 2 PO 4 , 4.3 mM Na 2 HPO 4 , pH 7.4) and fixed with 4% paraformaldehyde for 15 min. After blocking with 5% bovine serum albumin (BSA) in PBS-IH for 1 h, cells were incubated with mouse anti-myc 9E10 monoclonal antibody (Developmental Studies Hybridoma Bank, The University of Iowa, Iowa City, IA, USA) 1:40 diluted in PBS-IH containing 0.5% BSA for 1 h. Cells were then washed and incubated with Alexa Fluor 488-labeled goat anti-mouse antibody (Invitrogen, Grand Island, NY, USA) 1:2000 diluted in PBS-IH containing 0.5% BSA for 1 h. Cells were washed, covered with Vectashield mounting media (Vector Laboratories, Burlingame, CA, USA) and a glass coverslip, and dried overnight at 4°C. Images were taken using a Nikon A1 confocal microscope. All the steps were performed at room temperature unless mentioned otherwise.
FLOW CYTOMETRY
HEK293 stable cells and transiently transfected Neuro2a cells were treated with either 0.1% DMSO or Ipsen 5i (10 −6 or 10 −5 M) for 24 h at 37°C. On the day of the experiment, cells were washed with ice-cold PBS-IH, detached, and precipitated by centrifugation at 500 × g for 5 min. Cells were then incubated with antibodies in the same way as described above for confocal microscopy. For immunostaining of the MC3R, cells were incubated with HA.11 antibody (Covance, Princeton, NJ, USA) at a 1:100 dilution and then stained with secondary antibody as described above. Cells were analyzed using a C6 Accuri Cytometer (Accuri Cytometers, Ann Arbor, MI, USA). The fluorescence of DMSO-treated cells transfected with the empty vector (pcDNA3.1) was used as the background staining. The expression of each mutant was calculated as a percentage of DMSO-treated WT receptor expression after subtracting this background fluorescence.
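A minimal sketch of this normalisation, given as an assumed reconstruction with arbitrary example fluorescence values, is shown below.

```python
# Assumed reconstruction of the normalisation described above: mutant cell surface expression
# as a percentage of DMSO-treated WT expression, after subtracting the background fluorescence
# of empty-vector (pcDNA3.1) transfected cells. Fluorescence values below are arbitrary examples.
def percent_wt_expression(f_sample, f_wt_dmso, f_vector):
    return 100.0 * (f_sample - f_vector) / (f_wt_dmso - f_vector)

print(percent_wt_expression(f_sample=850.0, f_wt_dmso=1200.0, f_vector=200.0))  # 65.0
```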
DATA ANALYSIS
Data were analyzed using GraphPad Prism 4.0 software (San Diego, CA, USA). The statistical significance of the differences between DMSO- and Ipsen 5i-treated cells was assessed by Student's t-test.
To quantitate the rescuing effect of Ipsen 5i on the cell surface expression of mutant MC4Rs, flow cytometry studies were performed using HEK293 and Neuro2a cells. HEK293 cells stably expressing WT or mutant MC4Rs were treated with 10 −5 M (Figure 3A) or 10 −6 M ( Figure 3B) Ipsen 5i for 24 h. Consistent with confocal microscopy results, the cell surface expression of 10 mutants (S58C, N62S, P78L, C84R, G98R, Y157S, W174C, P260Q, F261S, and C271Y) was significantly increased with Ipsen 5i treatment to a level similar to or even higher than that of the DMSO-treated WT receptor. I69R was not rescued by Ipsen 5i in HEK293 cells. Neuro2a cells transiently expressing WT or mutant MC4Rs were treated with 10 −6 M Ipsen 5i for 24 h and then were used for flow cytometry studies. S58C was not further studied in neuronal cells because as described later its function was not rescued by Ipsen 5i. As shown in Figure 4, the cell surface expression of eight mutants (N62S, I69R, P78L, C84R, G98R, W174C, P260Q, and C271Y) was increased with Ipsen 5i treatment compared with the DMSO-treated control group. The cell surface expression of I69R was slightly (although significantly) increased in Neuro2a cells whereas it was not increased in HEK293 cells. The increase of cell surface expression of F261S was not statistically significant in Neuro2a cells. One mutant (Y157S) that was rescued by Ipsen 5i in HEK293 cells was not rescued in Neuro2a cells.
THE MAJORITY OF THE MUTANT MC4Rs RESCUED WITH Ipsen 5i COULD RESPOND TO AGONIST STIMULATION WITH INCREASED cAMP GENERATION
We next investigated whether Ipsen 5i-rescued mutant MC4Rs were functional in generating cAMP at the cell surface. HEK293 cells stably expressing WT or mutant MC4Rs were incubated with different concentrations of Ipsen 5i for 24 h, and then stimulated with 10 −6 M NDP-MSH. The intracellular cAMP accumulation was measured. As shown in Figure 5, the cAMP accumulation of WT MC4R was decreased by approximately 30% with 10 −6 M Ipsen 5i treatment and by 80% with 10 −5 M Ipsen 5i treatment. We observed an increase in cAMP accumulation at 10 −9 M Ipsen 5i for C84R and W174C and a maximal increase at a concentration between 10 −8 and 10 −6 M for N62S, P78L, C84R, Y157S, W174C, P260Q, and C271Y. Unlike most of the mutants that decreased cAMP accumulation at 10 −5 M Ipsen 5i, I69R had a maximal cAMP accumulation at that concentration. The signaling of S58C and G98R was not increased by Ipsen 5i.
In Neuro2a cells transiently expressing MC4Rs, we also observed an increase in cAMP accumulation at 10 −9 M Ipsen 5i and a maximal increase at 10 −6 M Ipsen 5i (or at 10 −5 M Ipsen 5i for I69R) in Neuro2a cells ( Figure 6A). As shown in Figure 6B, with 10 −6 M Ipsen 5i treatment in Neuro2a cells, seven mutants (N62S, I69R, P78L, C84R, W174C, P260Q, and C271Y) had significantly increased cAMP accumulation whereas signaling of three mutants (G98R, Y157S, and F261S) was not increased by Ipsen 5i. High concentration of Ipsen 5i did not decrease the cAMP accumulation of WT or mutant MC4Rs as dramatically as seen in HEK293 cells ( Figure 6A). Our results obtained from N1E-115 cells were similar with those obtained from Neuro2a cells (Figure 7).
Ipsen 5i DID NOT AFFECT MUTANT MC4Rs THAT ARE EXPRESSED AT THE CELL SURFACE
To investigate whether Ipsen 5i rescued the function of mutant MC4Rs that are expressed at the cell surface but have other defects, we studied four mutants that are defective in ligand binding [∆88-92 (32), Class III according to the classification proposed by Tao (33)], defective in signaling [D90N and I102S (29, 34), Class IV], or with no obvious defect [N274S (28), Class V]. As shown in Figures 6B and 7, 10 −6 M Ipsen 5i had no effect on the signaling of these four mutants.
SPECIFICITY OF THE MC4R MUTANT RESCUE
To investigate whether cell impermeable peptide ligands of the MC4R or cell permeable ligands of other receptors could rescue mutant MC4Rs, we studied the effect of two MC4R peptide agonists (NDP-MSH and α-MSH), one MC4R peptide antagonist (SHU9119), and one pharmacoperone of δ opioid receptor (naltrexone) (35) (Figure 8A) on C84R MC4R. As shown in Figure 8B, NDP-MSH and α-MSH decreased the signaling of WT MC4R by approximately 50%. However, none of the four ligands rescued the signaling of C84R MC4R.
To investigate whether Ipsen 5i specifically rescues mutant MC4Rs, we studied the effect of Ipsen 5i on one of intracellularly retained mutant MC3Rs (I335S) (27,36). As shown in Figure 8C, Ipsen 5i had no effect on the cell surface expression or signaling of I335S MC3R.
DISCUSSION
Most of the inactivating mutations in GPCRs causing human diseases result from protein misfolding and subsequent retention and degradation by the ER quality control system (22). Misrouted receptors may retain intrinsic function and become functional when correctly located (37). Pharmacoperones that can permeate the plasma membrane specifically stabilize the conformation and correct the trafficking of misfolded receptors, thus rescuing the receptor and curing human diseases (38)(39)(40)(41). In the current study, we identified Ipsen 5i, an antagonist of the MC4R, as a potent pharmacoperone specifically rescuing the cell surface expression and function of intracellularly retained mutant MC4Rs.
Our results showed that all 11 intracellularly retained mutants studied herein could be rescued to the cell surface by Ipsen 5i in at least one cell line (Table 1). Y157S could be significantly rescued in HEK293 cells but not in neuronal cells whereas I69R was partially rescued in neuronal cells but not in HEK293 cells. The effects of Ipsen 5i on eight mutants were similar between HEK293 and neuronal cell lines. The cell surface expression of most mutants treated with Ipsen 5i was increased to at least 50% of, or even to a level similar to, that of the DMSO-treated WT receptor. The rescuing efficacies of Ipsen 5i were different for different mutations. I69R was the most difficult to rescue because it could only be maximally rescued with the highest concentration of Ipsen 5i (10 −5 M) and could not be rescued with another pharmacoperone of the MC4R, ML00253764 (unpublished observations). This suggests that I69R induces a large change in the receptor conformation that is difficult to stabilize.
Eight of the 11 mutants rescued to the cell surface were functional in cAMP production (N62S, I69R, P78L, C84R, Y157S, W174C, P260Q, and C271Y) in HEK293 cells, and seven mutants, with the exception of Y157S, in neuronal cells (Table 1). These results suggest that these mutants, although misfolded and retained intracellularly, retain the ability to bind agonist and initiate downstream signaling. Ipsen 5i treatment did not significantly increase F261S signaling. G98R, although rescued to the cell surface, did not respond to NDP-MSH stimulation with increased cAMP generation, suggesting that this mutant was also defective in ligand binding and/or signaling. We were also unable to rescue G98R functionally using a small molecule agonist as the pharmacoperone (42). Despite low cell surface expression, S58C has significant signaling (28), likely due to the presence of spare receptors (33). Ipsen 5i treatment significantly increased the cell surface expression of S58C (Figures 2 and 3). However, its signaling was not increased and tended to decrease.
The signaling of WT MC4R was dramatically decreased when treated with 10 −5 M Ipsen 5i in HEK293 cells (Figure 5). The residual Ipsen 5i that had not been washed away presumably still occupied the binding site of the MC4R and therefore antagonized the stimulation of NDP-MSH. Although Ipsen 5i has a high affinity for the MC4R (Ki, 2 nM), it has relatively low functional antagonist potency (77 nM) (24), minimizing its antagonizing effect on NDP-MSH. Indeed, in our study, 10 −6 and 10 −7 M Ipsen 5i, which already had significant pharmacoperone rescuing ability, only decreased the signaling of WT MC4R by approximately 30 or 20% in HEK293 cells, respectively (Figure 5). Interestingly, we did not observe such a dramatic decrease in WT MC4R signaling in neuronal cells (Figures 6 and 7), suggesting that it might be easier for Ipsen 5i to dissociate from the MC4R expressed in neuronal cells.
Ipsen 5i has low affinity for the MC3R (Ki, 400 nM) and therefore we investigated whether Ipsen 5i could rescue the misrouted MC3R mutant I335S. We found that Ipsen 5i did not increase the cell surface expression or function of I335S MC3R, suggesting that Ipsen 5i was a pharmacoperone specific for the MC4R. Although Ipsen 5i was a potent pharmacoperone of the MC4R, it had no effect on mutant MC4Rs defective in ligand binding or signaling, suggesting that Ipsen 5i could only rescue the function of the intracellularly retained mutant MC4Rs. As expected, cell impermeable peptide ligands of the MC4R did not rescue the function of the misrouted mutant MC4R C84R whereas cell permeable Ipsen 5i did. Consistent with previous reports on several other GPCRs (35,(43)(44)(45), this observation suggests that only a cell permeable small compound could act as a pharmacoperone and that the rescuing action occurred intracellularly. Peptide ligands decreased the signaling of WT MC4R probably by inducing internalization and down-regulation (NDP-MSH and α-MSH) or by antagonizing NDP-MSH (SHU9119). Naltrexone, a pharmacoperone of the δ opioid receptor (35), also did not correct the function of C84R MC4R, suggesting that only ligands for the MC4R could act as MC4R pharmacoperones. In summary, Ipsen 5i increased the cell surface expression of all 11 intracellularly retained mutant MC4Rs (100%) studied herein and eight of the 11 mutants (73%) were functional at the cell surface in at least one cell line. Ipsen 5i could rescue mutant MC4Rs at a concentration as low as 10 −9 M. To our knowledge, it was the most potent pharmacoperone of the MC4R identified so far. Future experiments aimed at demonstrating the in vivo efficacy of this ligand in transgenic animals will represent another important step toward personalized medicine for treating patients harboring these MC4R mutations.
Effectiveness and safety of azithromycin 1.5% eye drops for mass treatment of active trachoma in a highly endemic district in Cameroon
Objective To evaluate the effectiveness and safety of azithromycin 1.5% eye drops under field conditions to reduce active trachoma in a highly endemic district in Cameroon. This is a follow-up of an initial report published in 2010. Methods and analysis Three annual campaigns were performed in 2008, 2009 and 2010 to treat the population (~120 000 individuals) of the Kolofata Health District with topical azithromycin 1.5% (one drop in each eye, morning and evening for three consecutive days). The effectiveness of this intervention against active trachoma was assessed in children aged 1–9 years in cross-sectional studies prior to each mass treatment using a systematic sampling procedure (in 2008, 2009 and 2010) and then 1 year (2011) and 3 years (2013) after the last intervention among the villages with previously high active trachoma prevalence or never tested. Results The prevalence of trachomatous inflammation—follicular (TF) dropped from 24.0% (95% CI 20.7 to 27.5) before treatment to 2.8% (95% CI 2.2 to 3.7) 1 year after completion of the 3-year campaign. Trachomatous inflammation—intense was present in only 4 (0.2%) children 1 year after the third round of treatment. Three years after the last campaign, the surveillance survey among the most prevalent villages and villages never tested before showed a prevalence of 5.2% (95% CI 3.6 to 7.2) of active trachoma. Tolerance was excellent, with no report of treatment interruption, serious ocular or systemic adverse events. Conclusion Annual mass treatment with azithromycin eye drops was shown to be effective in reducing TF to a level ≤5% one year after a 3-round annual mass treatment in an endemic region at the district level.
INTRODUCTION
Trachoma is a chronic infective condition of the eye caused by the microorganism Chlamydia trachomatis. It is the leading infectious cause of blindness worldwide and was previously estimated to be responsible for visual impairment in 1.6 million individuals, of which 0.4 million were irreversibly blind. 1 The disease begins in early childhood. It is seen mostly in children in association with red, sticky eyes, with symptoms of itchy, painful eyes. In the WHO simplified system, the two defined signs of active trachoma are trachomatous inflammation-follicular (TF, defined as the presence of five or more follicles measuring at least 0.5 mm in diameter in the upper tarsal conjunctiva) and trachomatous inflammation-intense (TI, defined as pronounced inflammatory thickening of the tarsal conjunctiva that obscures more than half of the normal tarsal vessels). 2 Repeated inflammation from cycles of infection and reinfection causes entropion, trichiasis, corneal abrasion and corneal opacity which may lead to blindness. 3 4 The disease is associated with poor sanitation and inadequate water access. 5
Key messages
What is already known about this subject?
► Elimination of trachoma is a public health issue in endemic regions.
► Annual mass treatment should be repeated until a prevalence of trachomatous inflammation-follicular (TF) <5% is reached in the impact survey.
► A surveillance survey at least 2 years after mass treatment cessation is necessary to validate the elimination of trachoma.
What are the new findings?
► Azithromycin 1.5% eye drops were effective in reducing the TF prevalence in a highly endemic district in Cameroon.
► The effectiveness was sustained 1 year and 3 years after the last treatment round.
How might these results change the focus of research or clinical practice?
► Azithromycin 1.5% eye drops could be proposed as an alternative to oral azithromycin or tetracycline ointment in endemic regions, especially for treating young children (particularly those under 6 months of age, who are not eligible for oral azithromycin) or people unable or reluctant to take oral azithromycin.
Infection spreads through close contact at home, both directly (via contaminated hands) and indirectly (via clothing, other contaminated materials or the bodies of eye-seeking flies). 3 4 In 1993, the WHO endorsed a multifaceted strategy (SAFE) for the elimination of trachoma as a public health problem. 6 In 1996, WHO launched the WHO Alliance for the Global Elimination of Trachoma by the year 2020 (GET2020), a partnership which supports country implementation of the SAFE strategy and the strengthening of national capacity through epidemiological assessment, monitoring, surveillance, project evaluation and resource mobilisation. 7 The 'SAFE strategy' comprises Surgery for trachomatous trichiasis; Antibiotics to clear ocular C. trachomatis infection; Facial cleanliness and Environmental improvement to reduce transmission (particularly access to water and sanitation). The A, F and E should be delivered to entire endemic districts (usually populations of 100 000-250 000 inhabitants). The threshold for annual mass treatment with antibiotics was set at 10% prevalence. If TF prevalence was 10% or more, the whole district should be mass treated with antibiotics. If it was between 5% and 10%, then treatment should only be implemented at the community level. 8 The global recommendation was to conduct annual mass treatments for a minimum of 3 years. These treatments must not be stopped until the TF level among children aged 1-9 years had fallen below 5%. In the report of the 3rd Global Scientific Meeting on Trachoma, WHO recommended trachoma assessment at the subdistrict or village level when the TF prevalence fell below 10% in 1-9 years old children. 9 In more recent years, the recommendation has been taken as permission to treat all residents of districts in which the TF prevalence is 5%-9.9%. 10

Mass treatment campaigns for the prevention of blindness due to trachoma have been run using oral azithromycin, and numerous community-based trials have provided evidence that such treatment reduces the prevalence of active trachoma and ocular chlamydia infection. 11 Using this strategy, several endemic countries have reported the elimination of trachoma as a public health problem during the last decade. 12 Africa remains the most endemic region in the world, and it was recently estimated that more than 117 million people in the WHO African region (87% of all cases in the world) warranted treatment with antibiotics, facial cleanliness and environmental improvement.

In the Republic of Cameroon, the prevention of blindness and visual impairment represents one of the public health priorities of the Ministry of Public Health. 13 In December 2006, a study assessing the prevalence of active and scarring trachoma in the Kolofata Health District (in Far North Cameroon) signalled the presence of endemic trachoma with significant blinding potential. 13 14 Subsequently, the National Blindness Control Programme decided to plan an elimination programme by implementing the WHO-SAFE strategy and performing mass treatment of the entire district population.
A randomised, controlled, double-masked, double-dummy study in children aged 1-10 years with active trachoma previously showed that azithromycin 1.5% eye drops (one drop in each eye, morning and evening for three consecutive days) were as efficient as oral azithromycin in resolving active trachoma at 60 days. 15 It was thus decided to use topical azithromycin 1.5% to reduce the prevalence of active trachoma in the Kolofata Health District. First results of the impact surveys showed reduction of active trachoma below 5% 1 year after the second round of mass treatment with azithromycin eye drops. 16 17 Here, we additionally report the effectiveness and safety results 1 year and 3 years after completion of the 3-year mass campaign.

METHODS

An exhaustive door-to-door census of all residents of the District (about 112 000 people) was conducted by 250 local, community-trained health workers, each assigned to a village or neighbourhood of 400-500 residents. The objective was to treat the entire population, including children less than 1 year old, with azithromycin 1.5% eye drops donated by Laboratoires Théa, as one drop in each eye, morning and evening for 3 consecutive days. The treatment administration was performed by the same community health workers under the supervision of an ophthalmic nurse during a 2-week period. A briefing was organised in Kolofata Hospital each evening where ophthalmic nurses reported the assessment of the mass treatment, including questionnaires intended to document any side effects or symptoms of the eye drops.
Prevalence surveys
The objective of this study was to assess the effectiveness of community-based treatment with 3 days of azithromycin eye drops in reducing the prevalence of active trachoma in children aged 1-9 years (≥1 year and up to their 10th birthday). Five cross-sectional prevalence studies at 1-year intervals were planned to assess active trachoma in children aged 1-9 years in 2008 (prior to the first mass treatment), 2009 (prior the second mass treatment), 2010 (prior to the third and last mass treatment), 2011 (1 year after the last mass treatment) and 2013 (3 years after the last mass treatment).
Prevalence surveys before mass treatment (2008, 2009 and 2010)
From the 2006 survey data, 13 the internal cluster (ie, within village) correlation was estimated at 0.034. Assuming a prevalence of about 5% at the end of the third year, it was necessary to include 40 villages with at least 60 evaluable children per village, that is, 2400 children, to obtain a 95% CI with a half-width of 1.5%. Thus, 40 villages (or neighbourhoods) were selected systematically using probability proportional to size. Within each selected cluster (village or neighbourhood), all households were identified and numbered, and all their members were registered based on the local census listing. Then, an initial house was randomly selected. A sampling interval (randomly chosen before the beginning of the study) identified the following houses to be included in the sample. The sampling continued until 60 children aged between 1 and 9 years who had lived in the village for at least 6 months on the day of the survey had been selected. If a family had left the community for over 6 months and the house was empty on the day of the survey, the house was replaced by the nearest one. In the case of an empty household or of a missing child among those randomly selected, the survey team had to repeat their visit three times to check for the presence of the selected child. If after the third visit the selected child could not be met, he or she was considered absent and not replaced. 9 18

For the follow-up surveys (2011 and 2013), we used two-stage sampling within each of two parallel sampling approaches in the same district: villages with high active trachoma prevalence in the previous surveys, and villages never previously sampled. The first sample was based on lot quality assurance sampling, a method previously proposed by Myatt et al for rapid assessment of the prevalence of active trachoma. 19 20 The other sample was composed of children from households never sampled before, thus taking into account possible residual infection among villages never tested before.
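As a rough check on these figures, the standard cluster-survey calculation (a simple-random-sampling size inflated by a design effect of 1 + (m − 1) × ICC) lands close to the quoted target of about 2400 children in 40 villages. The Python sketch below is illustrative only and simply re-derives that arithmetic from the numbers given in the text.

# Cluster-survey sample size with a design effect, using the figures quoted above
# (ICC = 0.034, ~60 children per village, expected prevalence ~5%, 95% CI half-width 1.5%).
from math import ceil
from scipy.stats import norm

def cluster_sample_size(p, half_width, cluster_size, icc, alpha=0.05):
    z = norm.ppf(1 - alpha / 2)                          # 1.96 for a 95% CI
    n_srs = (z ** 2) * p * (1 - p) / half_width ** 2     # simple random sampling size
    deff = 1 + (cluster_size - 1) * icc                  # design effect for cluster sampling
    return ceil(n_srs * deff)

n = cluster_sample_size(p=0.05, half_width=0.015, cluster_size=60, icc=0.034)
print(n)                   # ~2438 children
print(ceil(n / 60))        # ~41 clusters of 60 children, close to the 40 villages used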
Clinical assessments of active trachoma
TF and TI were separately assessed using the WHO simplified grading system. 21 Active trachoma was assessed based on TF prevalence alone in line with the WHO recommendations. 8 The prevalence of TI was considered as a secondary criterion.
All children enrolled in the study were examined by a senior ophthalmic nurse and/or an ophthalmologist. The examiner everted the upper eyelid and inspected the conjunctiva by means of a 2.5× magnifying glass and a torch held by an assistant in charge of recording the data. Before examining the next child, the examiner verified that the assistant had filled out the study sheet in accordance with study protocol guidelines. After a 4-day training session, prior to the study, each examiner was tested on 50 children with and without trachoma. Following evaluation, inter-grader agreement for TF and TI was almost perfect when comparing each examiner to the reference grader (kappa scores between 0.81 and 1.00).
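For readers unfamiliar with the agreement statistic used here, the short Python sketch below shows how such a kappa score can be computed with scikit-learn; the two grading vectors are invented placeholders, not data from the training exercise.

# Cohen's kappa between a trainee examiner and the reference grader (hypothetical gradings).
from sklearn.metrics import cohen_kappa_score

reference_grader = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]   # 1 = TF present, 0 = TF absent
trainee_grader   = [1, 0, 0, 1, 1, 0, 1, 0, 1, 0]

kappa = cohen_kappa_score(reference_grader, trainee_grader)
print(f"kappa = {kappa:.2f}")   # kappa values of 0.81-1.00 are conventionally read as almost perfect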
Assessment of safety
Drug-related serious adverse events were assessed and recorded each day during the treatment administration and 7 days after the last administration for all treated subjects (children and adults).
Statistical methods
The prevalence of active trachoma was estimated based on the presence of TF alone. Data were compiled and analysed using EPIINFO V.6 software (Centers for Disease Control and Prevention, Atlanta, Georgia, USA). 95% CIs were estimated taking into account the composition of the sample clusters. Prevalences at different time points were not compared with a statistical test because the methodology for village selection changed over time and was deliberately biased in 2011 and 2013.
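One common way to take the cluster composition into account, in the spirit of what is described here, is a design-based variance for the ratio estimator with villages as the clusters. The sketch below is a generic Python illustration with made-up cluster counts, not a reproduction of the EPIINFO analysis.

# Design-based 95% CI for prevalence with villages as clusters (hypothetical counts).
import numpy as np

cases = np.array([2, 0, 5, 1, 3, 0, 4, 2])          # TF cases per sampled village
examined = np.array([60, 58, 61, 60, 59, 60, 62, 60])

p_hat = cases.sum() / examined.sum()                 # overall prevalence (ratio estimator)
k = len(cases)
residuals = cases - p_hat * examined                 # cluster-level residuals
var_p = k / (k - 1) * np.sum(residuals ** 2) / examined.sum() ** 2
se = np.sqrt(var_p)
print(f"prevalence = {p_hat:.3f}, 95% CI = ({p_hat - 1.96*se:.3f}, {p_hat + 1.96*se:.3f})")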
RESULTS
Mass treatment campaign using azithromycin 1.5% eye drops
Mass treatment was performed for more than 100 000 inhabitants of the Kolofata Health District in 40 communities, representing a coverage of about 90% each year (table 1).
Impact survey after three rounds of treatment
The impact survey showed a TF prevalence of 2.8% (95% CI 2.2 to 3.7) and a TI prevalence of 0.2% (95% CI 0.0 to 1.3) 1 year after the third round of treatment (table 4).
Surveillance survey after treatment cessation
The surveillance survey showed a TF prevalence of 5.2% (95% CI 3.6 to 7.2) and a TI prevalence of 1.0% (95% CI 0.6 to 1.4), 3 years after the last administration of treatment (table 4).
Safety of azithromycin 1.5% mass treatment
In this mass population study, more than 100 000 subjects, including the youngest children (<1 year old), were treated each year for 3 years. No ocular or systemic serious adverse event related to the study drug was reported in adults or children during the 3 days of treatment administration and the seven following days. No treatment interruption was required in adults or children. Local and transitory symptoms (including blurred vision and a burning sensation following eye drop instillation) occurred in some subjects but were not systematically recorded.
DISCUSSION
In 2008, the District of Kolofata was highly endemic for trachoma, with 24.0% of 1-9 years old children with TF and 7.5% with TI. The prevalence of active trachoma in this district justified the mass treatment of the entire population with 3 yearly rounds of azithromycin as part of the SAFE strategy. 8 This study is the first to use topical azithromycin in mass treatment to reduce the prevalence of active forms of trachoma in an endemic population and included a surveillance survey 3 years after mass treatment cessation. After two annual rounds of mass treatment with topical azithromycin covering more than 90% of the entire population, an estimated TF prevalence of 3.1% was reached, and the WHO objective for elimination of active trachoma (prevalence <5%) was met. This was maintained 1 year after a third annual round, in a survey in which children were chosen from the most prevalent villages and from villages never tested. In parallel, the presence of TI was detected in less than 1% of subjects after the second and third annual mass treatments, which is encouraging since TI subjects are those most likely to develop cicatricial complications as the disease progresses. 22 23 This study confirms previous results of a randomised clinical trial demonstrating that topical azithromycin 1.5% eye drops were at least as effective as the standard treatment in reducing the prevalence of active trachoma below 5%. 15

A major concern when implementing a programme for eliminating trachoma is still to determine when to stop antibiotic treatment and preventive interventions 24 25 and to determine the potential rebound in prevalence of active trachoma after interventions are stopped. As recommended in the WHO guidelines, a surveillance survey should be conducted at least 2 years after the impact survey to show that elimination targets are maintained in 1-9 years old children, as an indicator of trachoma elimination. 26 Three to five years, or five to seven years, of implementation of SAFE may be insufficient to achieve trachoma elimination as a public health problem in some endemic regions. Some severely affected districts in Ethiopia have been treated for a decade and have still not achieved the prevalence threshold of 5% for halting treatment. 27 In our study, although active trachoma seemed to be eliminated after three annual rounds of treatment, it persisted in a few communities after treatment cessation. Three years after the last round of treatment, the surveillance survey using a two-stage sample procedure including the most prevalent villages and villages never tested before showed a TF prevalence of 5.2%, just above the 5% threshold set by WHO for the elimination of trachoma as a public health problem. Although the effectiveness of the SAFE strategy using oral azithromycin distribution has been demonstrated in numerous endemic populations worldwide, 12 the effect of mass treatment at the village level is known to be heterogeneous. 28 29 In low-endemic countries such as Gambia, a single oral dose of mass antibiotic treatment was sufficient to control C. trachomatis infection when combined with environmental conditions such as good water supply and sanitation, with no re-emergence 5 years after treatment cessation. 30 However, in more endemic regions, complete trachoma elimination in all communities may be difficult to achieve.
Lakew et al showed that although trachoma prevalence was lowered to an average of 2.6% after four biannual treatments, prevalence had returned to 25.2% 2 years after the last treatment, indicating that if infection is not eliminated at the community level, it may return. 29 In Mali, a 3-round mass treatment with oral azithromycin reduced the prevalence of active trachoma from 17% to less than 5%, but 3 years later trachoma started to re-emerge. 31 Thus, the risk of re-emergence of trachoma once antibiotic pressure is removed is currently a major concern. 25 Factors affecting the success of a Mass Drug Administration programme have been recently identified using a mathematical model of disease transmission. 32 These included antibiotic treatment-related factors, such as coverage, dosing and frequency of distribution, and resistance. Antibiotic mass treatment coverage is an important issue since untreated individuals may serve as a source of community reinfection. WHO considered that coverage of 80% is acceptable, and increasing coverage above 90% in children does not appear to confer additional benefit. 33 34 Nevertheless, low coverage rates (<60%) of oral azithromycin mass treatment were reported in some highly endemic districts in Ethiopia. 35 This can be due to low acceptability of oral azithromycin in some regions, in particular because of the fear of adverse events, as suggested previously. 36 Oral azithromycin is generally well tolerated during mass treatment distribution, 25 and has been associated with reduced all-cause and infectious childhood mortality. 37 Nevertheless, up to 10% of people may experience side effects, primarily gastrointestinal disorders (abdominal pain, nausea and diarrhoea). 38 Adverse effects in the first annual mass treatment round have been considered a 'great public health concern' in some endemic regions, which may compromise acceptability and treatment coverage. 35 36 Moreover, in endemic communities, some individuals may be suspicious of taking an oral medicine for an eye disease. By contrast, we confirmed that topical azithromycin 1.5% was safe even in the youngest children of less than 6 months, as previously reported in different studies. 15 39 Thus, topical azithromycin may be more easily accepted and could be proposed as an alternative when oral azithromycin is refused or contra-indicated. We assume that this could improve the coverage in some districts or communities where trachoma elimination or control is difficult.
A recent meta-analysis showed that absence of latrines, dirty faces of children, and no reported use of soap for washing may be other important factors associated with active trachoma among children. 40 In the Kolofata District, re-emergence was shown in several villages with impaired access to water due to borehole pump dysfunction or dry wells during a part of the year. Re-emergence of active trachoma above 10% in 1-9 years old children was also reported in several Kolofata District villages in which face washing among children was notably deficient. 41 As in other endemic regions, while the S and A components have been widely implemented, evidence and specific targets are lacking for the F and E components, of which water, sanitation and hygiene are critical elements. 5

The current recommendation for antibiotic mass treatment is to treat the entire district community, including infants of less than 6 months. It is known that the probability of being infected by C. trachomatis is strongly influenced by age. Children aged less than 1 year have the highest bacterial load, and thus should be treated. 42 Oral azithromycin is not recommended in children under 6 months of age, and tetracycline ointment is typically used. In contrast, treatment with 1.5% azithromycin eye drops is possible since azithromycin 1.5% is well tolerated in infants from 1 day of age. 43 Twice-daily administration for 3 days with topical azithromycin 1.5% is also more convenient than tetracycline ointment, which requires twice-daily instillations for 6 weeks. 44 Thus, topical azithromycin may be proposed in place of tetracycline ointment to treat infants of less than 6 months.

Azithromycin 1.5% eye drops have other advantages compared with oral azithromycin. In children, the oral dose is estimated with a height stick, and an inadequate dose may result in ineffective treatment, which may be an issue especially when the bacterial load is high. 25 In addition, eye drops avoid the potential issue of reconstituting an oral solution in a remote area. The use of topical azithromycin should also substantially reduce the risk of bacterial resistance. By contrast, repeated oral azithromycin mass distribution may be detrimental if it results in the selection of macrolide-resistant pathogens, and there is epidemiological evidence suggesting that pharyngeal carriage of macrolide-resistant Streptococcus pneumoniae increases following repeated annual mass treatments with oral azithromycin for trachoma control, as recently reviewed by O'Brien et al. 45

Oral azithromycin is generously donated through the International Trachoma Initiative for trachoma control programmes, and topical azithromycin may be a promising alternative to oral azithromycin if treatment units are donated similarly, which was the case in the Kolofata Health District. In addition, besides the costs of the treatment units, a campaign to eliminate trachoma as a public health problem is probably more expensive initially when topical rather than oral azithromycin is used, since the former requires health resources over 3 days and the latter only 1 day. Further studies that take into account both short-term and long-term costs and benefits are necessary to determine the overall cost-effectiveness of topical versus systemic azithromycin mass treatment.
To reduce costs in a campaign using topical medication, more community members could be trained to administer the final 2 days of drops; expanding community involvement might have the added benefit of increasing community commitment. Basic training of such community or family eye drop administrators could be done on site on the first day of administration: the health worker would train a member of each family or group of families and that person would administer drops on days 2 and 3. The feasibility and reliability of such a strategy remain to be investigated. In our situation, the administration of topical azithromycin to more than 100 000 inhabitants was successful and logistically and financially similar to other subsidised mass drug and vaccine administration activities held previously in the district.
The study has some limitations, including the lack of a control group, meaning a comparison between topical and oral azithromycin mass treatment cannot be made directly. Other components of the SAFE strategy were also applied during the mass treatment campaign, including surgery of entropion-trichiasis, educational activities to promote individual (facial cleanliness) and collective hygiene, and environmental changes. During the mass treatment campaign, the Cameroon government installed borehole water pumps, while the OSF built some wells. However, it is not known to what extent these interventions helped in reducing trachoma prevalence. Persistence and transmission of trachoma are favoured where people live in poverty without safe water, sanitation and proper waste disposal, and the disease may return after antibiotic treatment if these conditions are not changed. 5 41 Moreover, the clinical grading was based on clinical observations in accordance with the WHO simplified grading system. 21 Specific biological tests using serological and PCR markers may be more reliable for testing ocular TF infection. 46 Furthermore, by selecting one part of the sample from the most prevalent villages of the previous years and the other part from villages never surveyed before, the prevalence of active trachoma in the follow-up surveys (2011 and 2013) was no longer representative of the district level. Finally, standardisation for age as recommended by the recent WHO guidelines in 2018 18 was not possible. Such analysis was not planned at the time of data recording, and data for the available census performed in the Kolofata District at that time could not be retrieved.
In conclusion, mass treatment with azithromycin eye drops was shown to be effective in reducing TF to a level ≤5% one year after three annual rounds of mass treatment in an endemic region at the district level. Longitudinal studies in multiple environments using epidemiologically rigorous sampling techniques are needed to ensure that the risk of re-emergence of disease and infection is no higher than with oral azithromycin. Annual mass treatment of active trachoma with azithromycin 1.5% eye drops is feasible under field conditions, although the cost-effectiveness of topical azithromycin needs to be determined. In the meantime, such topical azithromycin treatment could be proposed as an alternative treatment to tetracycline ointment or oral azithromycin (1) for treating young children of less than 6 months, (2) for treating others unable to take oral azithromycin, (3) for mass drug administration where the oral azithromycin donation programme is unavailable and (4) where the population mistrusts oral azithromycin given for an eye condition.
Ethics approval The study was conducted in compliance with the ethical principles of the Declaration of Helsinki regarding biomedical research on human subjects. The study treatment plan received authorisation from the Cameroon Ministry of Public Health in February 2008. The National Ethics Committee of Yaoundé approved the study (Approval 098/CNE/DNM/07 and 086/CNE/DNM/08) and the method of obtaining informed consent. Informed consent was systematically requested from parents of minors. For people who were illiterate, the information sheet and informed consent were read to them. If they agreed to participate, the participant or a legally acceptable representative signed by fingerprint and a literate witness signed on behalf of the participant. Participants or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
Provenance and peer review Not commissioned; externally peer reviewed.
Elevated MFG-E8 in CSF in the Early Stage Indicates Rapid Recovery of Mild Aneurysmal SAH Patients
Background Aneurysmal subarachnoid hemorrhage (aSAH) can impair blood perfusion in brain tissue and cause adverse effects. Microglia, which are the inherent immune cells of the brain, significantly activate and play a role in phagocytosis, anti-inflammatory, proinflammatory, and damage repair in this process. Milk fat globule epidermal growth factor 8 (MFG-E8) is the bridging molecule of this process and mediates the activation and biological effects of microglia. Methods We obtained cerebrospinal fluid (CSF) from patients with aSAH at various times (the third day, seventh day, and ninth day) as well as from patients in the control cohort. MFG-E8 protein levels in CSF were measured by enzyme-linked immunosorbent assay (ELISA). Meanwhile, we evaluated the GCS and GOS of aSAH patients on admission and on the third day, seventh day, ninth day, and at discharge. Then, we analyzed the association between the levels of MFG-E8 and the changes in GCS and GOS. Results MFG-E8 expression rose in the early stage on the third day and reached equilibrium around day 7 and day 9. The levels of MFG-E8 on the third day were associated with the change in GOS on the seventh day (r = 0.644, p = 0.018) and ninth day (r = 0.572, p = 0.041) compared with admission but were not correlated with the change on day 3 or at discharge. The levels of MFG-E8 were not correlated with any change in GCS. Conclusions We found that aSAH resulted in an upregulation of MFG-E8 in CSF. Moreover, high MFG-E8 levels in the early stage indicated a rapid recovery of mild aSAH patients.
Introduction
Subarachnoid hemorrhage (SAH) is bleeding from damaged vessels into the subarachnoid space caused by different kinds of brain damage. It is a destructive cerebrovascular disease that affects cerebral blood perfusion [1] and contributes to cerebral vasospasm [2], early brain injury [3], chronic cerebral ischemia [4], and various systemic complications [5]. These injuries often lead to a poor prognosis [6]. Aneurysmal subarachnoid hemorrhage (aSAH) accounts for 85% of spontaneous SAH, with a mortality rate reaching 50% [7][8][9]. Despite advances in current treatment, the mortality and disability rates of aSAH remain high [10]. Therefore, it is increasingly important to explore the pathophysiology of aSAH, which will inform its clinical treatment.
According to relevant reports, as inherent immune cells in brain tissue, microglia are activated when aSAH occurs.
In this process, microglia release a series of cytokines, mediate the repair of damaged neurons, maintain the balance between proinflammatory and anti-inflammatory activities, and activate phagocytosis [11,12]. They perform an integral function in repair after SAH [13].
Milk fat globule epidermal growth factor 8 (MFG-E8) is a key cytokine that is secreted by phagocytes and mediates phagocytosis. As a bridging molecule among cells [14], it can promote macrophages to exert a series of biological effects, such as anti-inflammation and phagocytosis, and is also expressed in breast cells and endothelial cells [15]. Several research reports have illustrated that MFG-E8 is involved in a number of important physiological and pathological processes. For example, MFG-E8 expression is related to the downregulation of the inflammatory response in diabetes mellitus [16]. In the injury response, MFG-E8 expression can promote angiogenesis and the healing of skin wounds [17], and it also performs a crucial function in atherosclerosis [18] and autoimmune diseases [19]. Microglia are macrophages in the central nervous system (CNS). Some acute and chronic nerve injuries can trigger inflammatory responses and activate microglia that secrete MFG-E8 [20]. Activated microglia might play a protective role toward nerve cells by regulating cell apoptosis, oxidative stress, and the inflammatory response [15].
When aSAH occurs, microglia can be activated. As a bridging molecule, MFG-E8 is secreted to mediate a series of biological functions of microglia [21]. This process has been confirmed in animal models [15]. However, there has been little concern about the alterations in the levels of MFG-E8 in aSAH patients until now. Hence, in the current research, we aimed to observe the dynamic alterations of MFG-E8 in patients' cerebrospinal fluid (CSF) at different times after aSAH and tried to explore the correlation between MFG-E8 levels and the outcomes of aSAH patients.
Materials and Methods
All of the procedures conducted in human studies were designed in strict accordance with the Declaration of Helsinki and authorized by the medical institutional review board (No. 2020-041-01) at Affiliated Drum Tower Hospital, Medical School of Nanjing University. All clinical samples were acquired with the consent of the patients included in the present study.
Selection of Patients.
The following guidelines were used to select the experimental cohort: (1) Hunt-Hess grade I or II, (2) no other CNS disorders, (3) within 2 days following aSAH, individuals in the experimental cohort were hospitalized and underwent interventional treatment, and (4) no connective tissue illnesses, malignant tumors, diabetes, or other systemic diseases. Table 1 shows the health status of the patients in the experimental cohort. The control cohort comprised individuals who required lumbar subdural anesthesia for surgical procedures and did not have SAH or any other CNS condition.
Identification and Collection of Samples.
The CSF samples used in the present study were obtained from both the experimental cohort (n = 14) and the control cohort (n = 11). The CSF samples of the experimental cohort were collected from patients on the third, seventh, and ninth days following aSAH. Additionally, the CSF samples obtained from patients in the control cohort were utilized as normal controls. A sterile tube was used to centrifuge all clinical samples (3000 g, 5 min), and the supernatant was then collected and kept at -80 degrees Celsius.
After we acquired enough samples, we performed an enzyme-linked immunosorbent assay (ELISA) to evaluate the MFG-E8 levels in the CSF, using a commercial human ELISA kit (ab235638, Abcam, Shanghai, China) according to the manufacturer's instructions. Specifically, we started by equilibrating the samples and reagents to ambient temperature and setting up the blank, control, and sample wells. The control well received 50 μl of the standard sample. We introduced 40 μl of sample diluent and 10 μl of sample into the sample well and mixed them. The plate was sealed and incubated for 30 minutes at 37 degrees Celsius. After incubation, we discarded all of the liquid, added the washing liquid after drying, and rinsed for 30 seconds five times. Each well, with the exception of the blank well, received 50 μl of enzyme-labeled reagent. Following incubation and rinsing (using the same procedures as before), we introduced the developer, mixed the wells, and allowed the color to develop for 15 minutes at 37°C in the dark. The reaction was stopped by adding 50 μl of stop buffer when the microplate turned blue, after which the wells changed to a yellow color. The optical density (OD) of each well was determined at a wavelength of 450 nm. We then generated a standard curve with concentration on the abscissa and OD on the ordinate and obtained its regression equation. Finally, we computed the MFG-E8 levels from the measured OD values and the dilution factor of the samples.
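The back-calculation described in the last two sentences can be illustrated with a short Python sketch: fit a regression line to the standards, invert it for the sample ODs, and multiply by the dilution factor (10 μl of sample in 40 μl of diluent gives a factor of 5). All readings below are hypothetical, and the kit's actual standards and curve shape may differ.

# Hypothetical ELISA standard-curve fit and back-calculation of sample concentrations.
import numpy as np

std_conc = np.array([0, 25, 50, 100, 200, 400])      # pg/mL, known standards
std_od   = np.array([0.05, 0.12, 0.21, 0.39, 0.74, 1.41])

slope, intercept = np.polyfit(std_conc, std_od, 1)   # linear regression: OD = slope*conc + intercept

def od_to_conc(od, dilution_factor=5):
    # Invert the standard curve, then correct for the 1:5 dilution of the CSF sample.
    return (od - intercept) / slope * dilution_factor

sample_od = np.array([0.33, 0.58, 0.90])
print(od_to_conc(sample_od))                          # estimated pg/mL in undiluted CSF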
Evaluation and Categorization of Patients Based on Their Recovery Status.
We examined the patients' GCS and GOS on the day of admission, the third day, the seventh day, the ninth day, and at discharge. In this study, we conducted bedside scoring for all patients. All clinical scores were assigned by a single person throughout the process to ensure consistency. Within two days following aSAH, all patients were hospitalized and underwent intervention treatment. As a result, the GOS and GCS scores obtained upon admission were employed as the baseline status. The GOS of the patients at the time of discharge reached the full score (as shown in Table 2). For the GOS, we excluded patients with a score of 5. We grouped the patients according to whether the GOS had increased compared with admission and compared the expression levels of MFG-E8 between the cohorts. Only one patient had an elevated GOS score on the third day, and all patients' GOS was elevated upon discharge. As a result, we could not group the patients on day 3 or at discharge because of the sample size. We evaluated whether there was an increase between day 7 and admission (cohort 1 with elevation and cohort 2 with no elevation), as well as between day 9 and admission (cohort 3 with elevation and cohort 4 with no elevation). Then, we analyzed the difference in MFG-E8 levels between cohorts. As shown in Figures 1(a) and 1(b), the levels of MFG-E8 on the third day varied substantially between the cohorts. We defined the rising period of MFG-E8 as the early stage. Then, we analyzed the association between the levels of MFG-E8 on day 3 and the changes in GOS after aSAH.
For the GCS scores, we excluded patients with a score of 15 each time. As illustrated in Table 3, only one patient's score did not increase on day 3 compared to admission, and all of them reached the full score on day 7. The patients' GCS scores were unchanged after day 7 following aSAH. We could not divide patients according to whether there was an increase in GCS compared with admission. Considering that MFG-E8 on day 3 might play a biological role, we analyzed the correlation between the levels of MFG-E8 on the third day and the changes in GCS on the third day and seventh day.
Data Analysis.
Analysis of the data was conducted utilizing GraphPad Prism 7.0 and SPSS 24.0. An analysis of differences in continuous variables across cohorts was performed utilizing the unpaired Student's t test. In this case, p < 0.05 indicated a significant difference. Data were presented as the mean ± SD. When evaluating the simple correlation among continuous variables, the Pearson correlation coefficient was employed. Therefore, the Pearson correlation coefficient was utilized to assess the relationship between the MFG-E8 levels on the third day and the alterations in GOS and GCS at various time points following aSAH.
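For concreteness, the two analyses described above amount to the following, sketched in Python with SciPy rather than SPSS; the values are placeholders, not the study data.

# Unpaired t test between cohorts and Pearson correlation with the GOS change (hypothetical values).
from scipy.stats import ttest_ind, pearsonr

mfge8_day3_gos_up   = [310.0, 275.5, 290.2, 330.8, 305.1]   # pg/mL, GOS elevated on day 7
mfge8_day3_gos_flat = [210.4, 190.7, 225.9, 205.3]          # pg/mL, GOS unchanged

t_stat, p_val = ttest_ind(mfge8_day3_gos_up, mfge8_day3_gos_flat)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")

mfge8_day3 = mfge8_day3_gos_up + mfge8_day3_gos_flat
delta_gos  = [1, 1, 1, 1, 1, 0, 0, 0, 0]                    # change in GOS versus admission
r, p = pearsonr(mfge8_day3, delta_gos)
print(f"r = {r:.3f}, p = {p:.3f}")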
Changes of MFG-E8 Levels after aSAH.
We believed that the control cohort could reflect the normal level of cytokines in CSF. As illustrated in Figure 2, the patients' MFG-E8 levels in CSF were relatively low in the control cohort, and a considerable elevation in the MFG-E8 levels was observed on day 3 after aSAH (p < 0.05). The levels of MFG-E8 reached equilibrium around day 7 and day 9 and remained elevated compared with the control cohort (p < 0.05). The difference between day 7 and day 9 was not significant.
Relationship between MFG-E8 and the Changes of GOS.
The GOS scores of all patients at different times are recorded in Table 2. The GOS of each patient at discharge was full marks, indicating that the patients recovered well. Excluding patients who scored full marks each time, we grouped the patients according to the alterations in GOS on the seventh day and ninth day. On the seventh and ninth days, the MFG-E8 level on the third day was considerably enhanced in the cohort with an elevated GOS score in contrast with the cohort without elevation (p < 0.05). This validated our conjecture that the level of MFG-E8 could perform a crucial biological function in the early stage. The correlation was evaluated between the MFG-E8 level on the third day and the changes in GOS on the third day (r = 0.396, p = 0.180), seventh day (r = 0.644, p = 0.018), ninth day (r = 0.572, p = 0.041), and at discharge (r = 0.366, p = 0.219). The MFG-E8 level on the third day was strongly associated with the changes in GOS on day 7 and generally correlated with the changes on day 9 but not correlated with the changes on day 3 or at discharge.
Relationship between MFG-E8 and the Changes of GCS.
The GCS scores of all patients at different times are recorded in Table 3. On admission, 8 out of 15 patients exhibited a GCS score of 15, with a minimum score of 10. All patients reached a full score of 15 on day 7, indicating that the patients' condition was mild and recovered well when discharged. Excluding patients whose scores were 15 each time, we evaluated the correlation between MFG-E8 levels on the third day and the alterations in GCS on the third day and seventh day and found that there was no statistical correlation (as shown in Figure 3).
Discussion
When cells are apoptotic, phosphatidylserine (PS), located in the inner layer of the cell membrane, will turn outward and appear in the outer layer of the cell membrane. This is a common biological process [22]. MFG-E8 is a bridging molecule that facilitates the biological effects of microglia. When different kinds of brain damage cause apoptosis of neuronal cells, MFG-E8 binds to PS, integrin αvβ3/αvβ5 [23], and the vitronectin receptor [20]. Then, microglia can phagocytose apoptotic cells through the mediating effects of MFG-E8. Moreover, MFG-E8 can also inhibit apoptosis of neurons through the downstream FAK/PI3K/AKT pathway [15,24]. The levels of MFG-E8 in CSF increased significantly on day 3 and reached equilibrium on day 7 after aSAH, which indicated that MFG-E8 did gradually increase and mediated certain biological effects after the occurrence of aSAH in patients. MFG-E8 is mainly secreted by microglia in the brain and can mediate related activities, which indicates that microglia can be constantly activated and play important biological roles in the process of aSAH.

In our current research, we found that MFG-E8 increased on day 3, indicating that it might begin to play a biological role in the early stage after aSAH. However, the effects required time to accumulate and were not immediately reflected on day 3. Therefore, MFG-E8 was not correlated with the changes in GOS on day 3 after aSAH. The levels of MFG-E8 on day 3 had a strong correlation with the changes in the GOS scores on the seventh day and were generally associated with the changes in GOS on the ninth day. This result indicated that the protective impacts of MFG-E8 in the early stage might be reflected on day 7, when the protective effect was strongest. Although the positive effect of MFG-E8 could also be observed on day 9, this effect was weakened, and the correlation with the change in GOS decreased. On discharge, the positive effect weakened further, and the MFG-E8 levels on the third day were not associated with the changes in GOS. Therefore, the increase in MFG-E8 in the early stage brought time-dependent effects, which might promote the patients' rapid recovery following aSAH.
Although the levels of MFG-E8 can affect the changes in GOS, we did not find a similar effect on the GCS scores. Most of the GCS scores were 15 or reached 15 quickly, resulting in a small sample size that could be selected for statistics. In addition, we selected low-grade aSAH patients; had we included high-grade aSAH patients, the GCS might have shown larger differences after aSAH. In the current experiment, the GCS of low-grade aSAH patients increased rapidly. As a result, we speculated that positive effects of MFG-E8 might be observed if we increased the observation frequency of GCS in the early stage, which will be a direction of our future research. Moreover, the GCS evaluates the patient's eye opening, verbal response, and motor response, which has certain limitations. Compared with the GOS, the GCS focuses more on evaluating the patient's condition than prognosis. In short, the level of MFG-E8 in the early stage may strongly suggest the rapid recovery of patients with aSAH.
In the present research, we discovered that when aSAH occurred, the MFG-E8 levels in the CSF increased and then reached a plateau. MFG-E8 may perform a protective function in the early stage. Although it did not change the clinical outcomes of aSAH patients, it might promote rapid recovery and bring positive effects to mild aSAH patients.
To our knowledge, this is the first study regarding the changes and possible effects of MFG-E8 in CSF in aSAH patients. We consider that MFG-E8 is mainly secreted by microglia, which are mainly found in the nervous system; compared with examining changes in MFG-E8 in blood, detecting changes in CSF is therefore more direct [25]. However, the limitations of our study are also obvious. The sample size was small. We characterized the expression pattern of MFG-E8 up to day 9 after aSAH, but we lacked follow-up to determine how MFG-E8 changed subsequently. We could also conduct post-discharge follow-up to determine how MFG-E8 affects the long-term prognosis of patients [26]. We will address these shortcomings in future research. In this research, we did find that high levels of MFG-E8 contribute to rapid recovery in mild patients. The reason for not including patients with Hunt-Hess grades 3-5 is that these patients have severe clinical symptoms and may undergo craniotomy, which may affect the microenvironment of the central nervous system and the MFG-E8 levels, resulting in inconsistent baseline levels. In addition, for severe patients, we wanted to minimize lumbar puncture procedures to reduce the impact on their status. However, for mild patients, lumbar puncture is not only an examination but also a treatment: hemoglobin dissolved in the CSF can be removed as the CSF is drained.
Conclusions
As a bridging molecule, MFG-E8 can mediate the interaction among microglia and other cells so that microglia can perform a crucial function in damage repair and neuroprotection [11]. In this experiment, we observed changes in MFG-E8 levels in the CSF of 14 patients with aSAH and 11 control patients. We found that MFG-E8 in CSF gradually increased and then reached a plateau, indicating that microglia could be activated and play a biological role in patients after aSAH. High levels of MFG-E8 in the early stage highly suggested a rapid recovery of mild aSAH patients, which might bring new approaches for clinical treatments of aSAH.
Data Availability
The data used to support the findings of this study are available from the manuscript.
Conflicts of Interest
None of the authors have personal or financial relationships that might have influenced the outcomes of this research.
Authors' Contributions
Cong Pang, Zheng Peng, and Xiaojian Li contributed equally to this study.
Parkinson’s Disease Detection Using Filter Feature Selection and a Genetic Algorithm with Ensemble Learning
Parkinson’s disease (PD) is a neurodegenerative disorder marked by motor and non-motor symptoms that have a severe impact on the quality of life of the affected individuals. This study explores the effect of filter feature selection, followed by ensemble learning methods and genetic selection, on the detection of PD patients from attributes extracted from voice clips from both PD patients and healthy patients. Two distinct datasets were employed in this study. Filter feature selection was carried out by eliminating quasi-constant features. Several classification models were then tested on the filtered data. Decision tree, random forest, and XGBoost classifiers produced remarkable results, especially on Dataset 1, where 100% accuracy was achieved by decision tree and random forest. Ensemble learning methods (voting, stacking, and bagging) were then applied to the best-performing models to see whether the results could be enhanced further. Additionally, genetic selection was applied to the filtered data and evaluated using several classification models for their accuracy and precision. It was found that in most cases, the predictions for PD patients showed more precision than those for healthy individuals. The overall performance was also better on Dataset 1 than on Dataset 2, which had a greater number of features.
Introduction
Parkinson's disease (PD) is a neurodegenerative disorder that affects millions of individuals worldwide. It is characterized by motor symptoms such as tremors, rigidity, bradykinesia (slowness of movement), and postural instability. PD not only impairs the quality of life for patients but also poses significant challenges for accurate and timely diagnosis. The presence of voice deficits, which are frequently defined by alterations in speech patterns, cadence, and tone, emerges as an important element of Parkinson's disease symptomatology. The study by Tjaden, Lam, and Wilding [1] revealed that speakers with PD displayed expanded peripheral and non-peripheral vowel space areas during articulate speech, accompanied by a reduction in speech rate and an increased vocal intensity. Furthermore, the study by Tsanas et al. [2] highlighted the feasibility of utilizing straightforward, self-administered, and non-intrusive speech tests as a potential strategy for regular, remote, and precise monitoring of PD symptom progression with the employment of the Unified Parkinson's Disease Rating Scale (UPDRS). These studies showcase the potential of voice-related changes to act as valuable indicators for the early detection of Parkinson's disease, despite receiving less recognition than motor symptoms.
Recent advances in machine learning techniques, as well as the availability of large-scale datasets, have opened new avenues for the automated identification of PD utilizing various forms of data, including voice recordings. Furthermore, machine learning-based PD detection systems have the potential to be non-invasive, low-cost, and easily scalable. A voice recording can be collected easily through commonly available devices such as smartphones, making it a convenient and accessible tool for screening and monitoring PD.
A series of studies have delved into the domain of Parkinson's disease (PD) classification, harnessing voice data as a diagnostic indicator. However, one notable gap is the limited size and diversity of the datasets employed in many prior studies. This limitation raises concerns about the generalizability and reliability of the resulting classification models. This study makes a significant contribution to the field by decisively addressing this issue through the utilization of two distinct datasets. Another gap has been the lack of comprehensive feature selection methods employed in PD classification studies. While some efforts have been made to apply feature selection techniques, this study takes a step forward by introducing a novel combination of filter feature selection methods with ensemble learning and genetic selection. This fusion holds the promise of uncovering more relevant and discriminative features inherent in the voice data, potentially leading to a substantial enhancement in the accuracy of PD classification. Furthermore, the limited exploration of model ensemble techniques in prior studies has presented a significant gap, which this research effectively addresses. While several investigations have focused primarily on individual classification algorithms, the untapped potential of leveraging the strengths of various algorithms through ensemble methods has been underutilized. Ensemble learning methods have the inherent advantage of integrating the diverse strengths of different algorithms, thereby enhancing the overall predictive power and accuracy of the classification process. By exploring this avenue, this research provides a vital contribution to the field by demonstrating the potential of ensemble techniques to significantly elevate the performance and efficacy of PD classification models.
In this study, a combination of filter feature selection methods with ensemble learning and genetic selection was used to detect PD from voice clips. The filtered data was fed into different classification models, which were then evaluated based on their accuracy and precision. By evaluating the models on these diverse datasets with varying characteristics and complexities, the generalizability and scalability of the approach may be assessed. The outcomes of this study may enhance our understanding and augment the efficacy of early PD detection, ultimately leading to improved patient care and prognosis.
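As a rough illustration of the pipeline just described, the Python sketch below removes quasi-constant voice features with a variance filter and compares a soft-voting ensemble of two of the classifiers mentioned. The CSV path, column names, and variance threshold are assumptions, not the study's actual data or settings, and the genetic-selection step is not shown.

# Sketch: variance-based filter feature selection followed by a voting ensemble.
import pandas as pd
from sklearn.feature_selection import VarianceThreshold
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.metrics import accuracy_score, precision_score

df = pd.read_csv("parkinsons_voice_features.csv")            # hypothetical file
X, y = df.drop(columns=["status"]), df["status"]             # status: 1 = PD, 0 = healthy

X_filtered = VarianceThreshold(threshold=0.01).fit_transform(X)   # drop quasi-constant features

X_train, X_test, y_train, y_test = train_test_split(
    X_filtered, y, test_size=0.2, stratify=y, random_state=42)

ensemble = VotingClassifier(
    estimators=[("dt", DecisionTreeClassifier(random_state=42)),
                ("rf", RandomForestClassifier(random_state=42))],
    voting="soft")
ensemble.fit(X_train, y_train)

pred = ensemble.predict(X_test)
print("accuracy :", accuracy_score(y_test, pred))
print("precision:", precision_score(y_test, pred))            # precision for the PD class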
Related Work
Several studies have investigated the use of machine learning and statistical modelling techniques to extract discriminative features from voice recordings and to develop classification models for PD detection. These have been summarized in Table 1. In a recent study by Sheikhi and Kheirabadi [3], a voice dataset from the UCI Repository was utilized for the classification of PD. The dataset comprised voice recordings from 42 patients, totaling 5875 instances. They proposed a model that combined the Random Forest (RF) and Rotation Forest algorithms to classify the predictions into two categories: severe or non-severe. The accuracy results for the total Unified PD Rating Scale (UPDRS) and motor UPDRS using this model were found to be 76.09% and 79.49%, respectively. In another study conducted by Mohammed et al. [4], a multi-agent approach was employed to filter and identify the most relevant features that could enhance PD classification accuracy while reducing training time. They utilized a dataset consisting of voice recordings from 31 individuals, 23 of whom were diagnosed with PD. Initially, the dataset contained 22 features, which were reduced to 14 after the filtering process. Eleven different classification algorithms were applied to the selected features, and the results were evaluated. This approach achieved an accuracy of 96.6%.
Recently, Velmurugan and Dhinakaran [5] proposed an approach known as the Ensemble Stacking Learning Algorithm (ESLA) for PD classification. The ESLA method integrated the linear regression and Adaboost ensemble techniques with the RF and Extreme Gradient Boosting (XGBoost) algorithms to effectively identify individuals with PD. The dataset employed was collected from 188 PD patients. Initially, basic models were developed using the RF and XGBoost algorithms for prediction. Subsequently, the outputs of these prediction models were utilized as inputs in the next step to fine-tune parameters and create models with enhanced accuracy. The top models for the RF and XGBoost were then chosen. The RF model's accuracy increased from 84.21% to 84.86%, while the XGBoost model's accuracy increased from 88.15% to 88.85%. The proposed ESLA method leveraged the stacking technique to create four stacked models, combining RF, XGBoost, logistic regression, Adaboost, and multilayer perceptron (MLP), to further enhance the classification performance. This method outperformed the individual classifiers, yielding an accuracy of 90.13%.
The study by Sharma et al. [6] proposed a binary version of the Rao algorithm to overcome the problem of feature selection. The Rao algorithm was applied to four public PD datasets using the kNN classifier for PD classification. The highest accuracy of the classifications obtained from the four datasets was 99.25%.
In their study, Sabeena et al. [7] proposed a novel framework for feature selection and classification to identify individuals with PD. The dataset used consisted of speech samples from 188 PD patients and 64 healthy individuals. An optimization-based ensemble feature selection method was employed, involving three different approaches for selecting optimized subsets of features. The results from these approaches were combined using an ensemble technique. The selected features were then utilized in various classifiers, which yielded accuracies ranging from 83.66% to 98.77%. In another study, by Ul Haq et al. [8], a dataset of 196 voice samples with 23 attributes was utilized. Among the 31 individuals in the dataset, 23 were diagnosed with PD and eight were considered healthy. Relief, ant colony optimization (ACO), and Relief-ACO methods were employed to select subsets of features, and the selected feature subsets were then used with the SVM classifier. The results showed that when the Relief-ACO feature selection method was combined with SVM using the radial basis function (RBF) kernel, an accuracy of 98.20% was achieved, outperforming the other feature selection methods. Similarly, when used with SVM with the linear kernel, the Relief-ACO feature selection method achieved a high accuracy of 99.50% compared with the other feature selection methods.
In the study conducted by Sarankumar et al. [9], a dataset of voice data collected from 42 patients was analyzed.The dataset contained a total of 5875 audio files.After preprocessing the dataset, a clustering process was performed using wavelet cleft fuzzy.Next, feature selection was carried out from the clustering step using the firming bacteria foraging algorithm.The selected features were then employed to predict PD patients using the Deep Brooke inception net classification algorithm, resulting in an accuracy of 99.88%.In another study by Pahuja and Nagabhushan [10], a free voice dataset of PD patients from the UCI repository was used.This dataset had six recordings for each patient.Classification algorithms ANN, SVM, and kNN, were employed and achieved accuracies of 95.89%, 88.21%, and 72.31%, respectively.
The research conducted by Yücelbaş [11] used a dataset comprising voice recordings of 252 individuals.The dataset employed 188 patients with PD and 64 healthy individuals, with three recordings for each person, resulting in a total of 756 recordings.The study proposed an information gain algorithm-based KNN hybrid model (IGKNN) for feature selection analysis.The proposed IGKNN method, using 22 selected features, achieved an accuracy of 98%.Pramanik et al. [12] used a publicly available dataset from the UCI machine learning repository in a different study.This dataset included 752 acoustic features for 252 people, including 188 PD patients and 64 healthy people.A total of 21 baseline features (BF), 22 vocal fold features (VFF), and 11-time frequency features (TFF) were extracted from this dataset.A collaborative feature bank was built to evaluate the performance of PD detection using three feature selection techniques: Correlated Feature Selection (CFS), Fisher Score Feature Selection (FSFS), and Mutual Information-based Feature Selection (MIFS).The Naïve Bayes classifier was used in the evaluation.The best accuracy obtained from utilizing the three feature selection strategies was 78.97%.
The study conducted by Salmanpour, Shamsaei, Saberi, et al. [13] aimed to categorize PD into its distinct subtypes.To achieve this, the researchers compiled 30 datasets over a period of four years from 885 individuals diagnosed with Parkinson's Progressive Marker and 163 healthy individuals.These datasets encompassed a combination of non-imaging, imaging, and radiomic features extracted from DAT-SPECT images.The study used 16 algorithms for feature reduction, eight algorithms for clustering, and 16 classifiers.The radiomics features aided in generating a consistent cluster structure, enabling the subdivision of PD into three distinct subtypes: mild, intermediate, and severe.
The study by Nahar et al. [14] was based on 44 acoustic features extracted from a dataset of 80 people, 40 of whom were PD patients and 40 who were healthy.The feature selection was performed using three different methods: Boruta, Recursive Feature Elimination (RFE), and RF.Gradient Boosting, Extreme Gradient Boosting, Bagging, and an Extra Tree Classifier were employed.The classifier results were examined using the original 44 features, and the Extreme Gradient Boosting classifier achieved a good accuracy of 78.08%.Furthermore, the classification results were analyzed after using the three feature selection methods, and an accuracy of 82.35% was achieved using the RFE feature selection method and the Bagging classifier.
While previous studies have explored PD diagnosis using voice analysis, significant gaps remain. Concerns related to generalizability and accuracy have been highlighted due to inadequate dataset diversity and feature selection methodologies. This study tackles these limitations by combining two independent datasets and offering a fusion of filter feature selection with ensemble learning and genetic selection.
Datasets
Two distinct biomedical voice datasets were employed in this study for the assessment of PD.
The first dataset comprises biomedical voice measurements obtained from 31 individuals, 23 of whom were diagnosed with PD. Each row in the dataset corresponds to a voice recording from one of these individuals, while each column represents a specific voice measure. The dataset was created by Max Little of the University of Oxford in collaboration with the National Center for Voice and Speech in Denver, Colorado, where the speech signals were recorded [15].
The dataset contains 195 sustained vowel phonations, with time since diagnosis ranging from 0 to 28 years. The subjects' ages vary from 46 to 85 years, with a mean age of 65.8 years (standard deviation 9.8). For each subject, an average of six phonations were captured, varying in duration from one to 36 s. The phonations were recorded within an IAC sound-treated booth, using a head-mounted microphone (AKG C420) positioned 8 cm from the lips. The microphone was calibrated with a Class 1 sound level meter (B&K 2238) placed 30 cm from the speaker. The voice signals were recorded directly onto a computer through CSL 4300B hardware (Kay Elemetrics), sampled at 44.1 kHz with 16-bit resolution. To ensure the robustness of the algorithms, all samples underwent digital amplitude normalization prior to the computation of the metrics. The details of the subjects are given in Table 2.

The second dataset utilized in this study was built by Sakar et al. [16] for their comparative analysis of speech signal processing algorithms for PD classification, including the tunable Q-factor wavelet transform. The data were collected at the Department of Neurology in the Cerrahpaşa Faculty of Medicine, Istanbul University. They comprise 188 PD patients (107 men and 81 women) spanning an age range of 33 to 87 years (mean age: 65.1 ± 10.9). Additionally, a control group consisting of 64 healthy individuals (23 men and 41 women) with ages ranging from 41 to 82 years (mean age: 61.1 ± 8.9) was included. During the data collection process, voice recordings were captured using a microphone set to a sampling frequency of 44.1 kHz; sustained phonation of the vowel /a/ was collected from each subject, with three repetitions. Subsequently, a comprehensive set of speech signal processing algorithms, including time-frequency features, Mel-frequency cepstral coefficients (MFCCs), wavelet transform-based features, vocal fold features, and TQWT features, was applied to the speech recordings of the PD patients.
Filter Feature Selection
The goal of feature selection in machine learning and data mining is to identify and maintain a subset of important features from the original dataset. The motivation for feature selection stems from its ability to increase model performance, reduce computational complexity, and improve model interpretability. Filter methods have received substantial attention among the various approaches to feature selection due to their simplicity, efficiency, and capacity to evaluate feature significance independently of any specific learning algorithm.
Filter feature selection methods attempt to prioritize and choose features based on their unique properties and association with the target variable, without considering the learning process of the specific model. In this study, we focus on the importance of filtering quasi-constant features as a crucial step in the filter feature selection process. Quasi-constant features refer to those with minimal variance or almost constant values across the dataset, providing limited or negligible discriminatory information.
Identifying and removing quasi-constant features reduces dimensionality and improves model generalization. By eliminating these features, we may reduce noise, improve computational performance, and promote more meaningful dataset exploration. However, to effectively filter out quasi-constant features, it is essential to set an appropriate threshold that determines the acceptable level of variance below which a feature is considered quasi-constant and subsequently removed.
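For illustration only, the sketch below shows how such a quasi-constant filter could be implemented with scikit-learn's VarianceThreshold. The 0.0001 threshold matches the value reported later in the Methods; the synthetic data and variable names are placeholders rather than the study's actual pipeline.

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold

def drop_quasi_constant(X, threshold=1e-4):
    """Remove features whose variance falls below `threshold`.

    Returns the reduced matrix and the indices of the kept columns.
    """
    selector = VarianceThreshold(threshold=threshold)
    X_reduced = selector.fit_transform(X)
    kept = selector.get_support(indices=True)
    return X_reduced, kept

# Synthetic example: 10 samples, 4 features, one of which is quasi-constant.
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 4))
X[:, 2] = 1.0        # nearly constant column ...
X[0, 2] = 1.0001     # ... with a tiny perturbation

X_reduced, kept = drop_quasi_constant(X, threshold=1e-4)
print(X.shape, "->", X_reduced.shape, "kept columns:", kept)
```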
Genetic Algorithm
Genetic Algorithms (GAs), members of the evolutionary algorithm family, have emerged as a popular and robust solution to the limitations encountered by conventional optimization techniques in terms of efficiency and effectiveness. They are inspired by concepts of natural selection and genetics, imitating the process of evolution to find optimal solutions within a specific search space.

The concept of a population-based search is at the heart of GAs, in which a set of potential solutions, referred to as individuals or chromosomes, undergoes iterative refinement to explore the solution space. GAs enable the propagation of desirable features and the examination of new solution regions by utilizing genetic operators such as selection, crossover, and mutation. This population-centric method enables GAs to tackle complicated optimization problems with high dimensionality, non-linearity, and multimodality effectively.

GAs work by iteratively generating new populations, with each population being evaluated based on a fitness function that assesses the quality of individual solutions. They promote convergence towards optimal or near-optimal solutions across generations by repeatedly applying selection, crossover, and mutation operators. This repeated exploration and exploitation approach enables them to navigate the solution space effectively, escaping local optima and delivering strong solutions.
Methods
Two distinct methods were employed in the experiments to evaluate the effectiveness of the filtering approach.
The selection of the quasi-constant threshold for filtering features was performed using a trial-and-error method. After careful evaluation of different threshold values, it was found that the best results were achieved when the threshold was set to 0.0001; both lower and higher threshold values yielded decreased accuracy in our experiments.
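One way such a trial-and-error sweep might be reproduced is sketched below. The candidate threshold values other than 0.0001, the random-forest scorer, and the synthetic data are assumptions made purely for illustration; they are not taken from the study.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import VarianceThreshold
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# Placeholder data standing in for the voice-feature matrix.
X, y = make_classification(n_samples=200, n_features=50, n_informative=10, random_state=0)

# Candidate thresholds are illustrative; the study settled on 0.0001.
for threshold in (0.0, 1e-5, 1e-4, 1e-3, 1e-2):
    pipe = Pipeline([
        ("filter", VarianceThreshold(threshold=threshold)),
        ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ])
    acc = cross_val_score(pipe, X, y, cv=5, scoring="accuracy").mean()
    print(f"threshold={threshold:g}  mean CV accuracy={acc:.3f}")
```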
A combination of filter feature selection and ensemble learning methods was employed in the first method. First, quasi-constant features with a threshold value of 0.0001 were identified and subsequently removed from the dataset, resulting in a refined dataset. This refined dataset was then subjected to five different classification algorithms: Gaussian Naïve Bayes classifier, Support Vector Machine (SVM), Decision Tree, Random Forest, and XGBoost.
The performance evaluation revealed that among the tested classification algorithms, Decision Tree, Random Forest, and XGBoost exhibited the highest classification accuracy and predictive power. Building upon this finding, further analysis was conducted by employing ensemble learning methods: stacking and voting, using the three best-performing algorithms. Additionally, bagging was also applied to the three selected algorithms to explore potential performance enhancements and model robustness. The first method is summarized in Figure 1.
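As a hedged sketch, the voting, stacking, and bagging ensembles described above could be assembled with scikit-learn and the xgboost package as follows. Only the choice of the three base models, the 500 estimators for bagging, and the 5-fold cross-validation come from the text; the logistic-regression meta-learner for stacking and the remaining hyperparameters are assumptions.

```python
from sklearn.ensemble import (BaggingClassifier, RandomForestClassifier,
                              StackingClassifier, VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier  # assumes the xgboost package is installed

# The three best-performing base models identified above.
base_models = [
    ("dt", DecisionTreeClassifier(random_state=0)),
    ("rf", RandomForestClassifier(n_estimators=500, random_state=0)),
    ("xgb", XGBClassifier(eval_metric="logloss", random_state=0)),
]

# Hard and soft voting over the three base models.
hard_vote = VotingClassifier(estimators=base_models, voting="hard")
soft_vote = VotingClassifier(estimators=base_models, voting="soft")

# Stacking; the logistic-regression meta-learner is an assumption.
stacked = StackingClassifier(estimators=base_models,
                             final_estimator=LogisticRegression(max_iter=1000))

# Bagging applied independently to each base model (500 estimators, as in the Results).
# Note: the keyword is `estimator` in scikit-learn >= 1.2 (`base_estimator` before that).
baggers = {name: BaggingClassifier(estimator=clf, n_estimators=500, random_state=0)
           for name, clf in base_models}

# Evaluation on the filtered feature matrix X_filtered and labels y (not shown here):
# for name, model in [("hard_vote", hard_vote), ("soft_vote", soft_vote),
#                     ("stacking", stacked), *baggers.items()]:
#     print(name, cross_val_score(model, X_filtered, y, cv=5, scoring="accuracy").mean())
```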
In the second method, after filtering out the quasi-constant features, a genetic algorithm was utilized to further optimize the feature selection process for the same set of classification algorithms: GaussianNB, SVM, Decision Tree (with entropy and Gini index), XGBoost, Random Forest, and additionally, logistic regression. A pictorial representation of the second method is shown in Figure 2.
The genetic selection was run for 40 generations with a population of 50 individuals. The crossover probability was 0.5, and the mutation probability was 0.2. The crossover independent probability was set to 0.5 and the mutation independent probability to 0.05. The tournament size was set to three, and the number of generations after which the optimization is terminated when the best individual has not changed (n_gen_no_change) was set to 10.
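The following from-scratch sketch illustrates a genetic search over feature subsets using the settings listed above (population of 50, 40 generations, crossover probability 0.5, mutation probability 0.2, independent probabilities 0.5/0.05, tournament size 3, early stopping after 10 unchanged generations). The fitness function (mean 5-fold cross-validated accuracy of a supplied classifier) is an assumption, and the study may have relied on a packaged implementation rather than a hand-rolled loop of this kind.

```python
import numpy as np
from sklearn.model_selection import cross_val_score

def ga_feature_selection(model, X, y, n_pop=50, n_gen=40,
                         crossover_proba=0.5, mutation_proba=0.2,
                         crossover_indep_proba=0.5, mutation_indep_proba=0.05,
                         tournament_size=3, n_gen_no_change=10, seed=0):
    """Evolve boolean feature masks; fitness = mean 5-fold CV accuracy of `model`.

    X is expected to be a NumPy array of shape (n_samples, n_features); y the labels.
    """
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]

    def fitness(mask):
        # Empty masks are invalid, so give them the worst possible score.
        if not mask.any():
            return 0.0
        return cross_val_score(model, X[:, mask], y, cv=5, scoring="accuracy").mean()

    pop = rng.integers(0, 2, size=(n_pop, n_feat)).astype(bool)
    scores = np.array([fitness(ind) for ind in pop])
    best_i = scores.argmax()
    best_mask, best_score, stagnant = pop[best_i].copy(), scores[best_i], 0

    for _ in range(n_gen):
        children = []
        while len(children) < n_pop:
            # Tournament selection of two parents.
            pair = []
            for _ in range(2):
                idx = rng.choice(n_pop, size=tournament_size, replace=False)
                pair.append(pop[idx[scores[idx].argmax()]].copy())
            a, b = pair
            # Uniform crossover on a random subset of positions.
            if rng.random() < crossover_proba:
                swap = rng.random(n_feat) < crossover_indep_proba
                a_old = a[swap].copy()
                a[swap] = b[swap]
                b[swap] = a_old
            # Independent bit-flip mutation.
            for child in (a, b):
                if rng.random() < mutation_proba:
                    flip = rng.random(n_feat) < mutation_indep_proba
                    child[flip] = ~child[flip]
                children.append(child)
        pop = np.array(children[:n_pop])
        scores = np.array([fitness(ind) for ind in pop])
        gen_best = scores.argmax()
        if scores[gen_best] > best_score:
            best_mask, best_score, stagnant = pop[gen_best].copy(), scores[gen_best], 0
        else:
            stagnant += 1
            if stagnant >= n_gen_no_change:  # early stopping, as described above
                break
    return best_mask, best_score
```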
Results and Discussion
With the quasi-constant threshold set to 0.0001, five of the twenty-four features in Dataset 1 and 188 of the 754 features in Dataset 2 were identified as quasi-constant and subsequently eliminated. This was a significant number of features in both datasets and enabled streamlining the feature space to enhance the efficiency and effectiveness of subsequent modeling tasks.
Different classification models were then tested on the filtered datasets. The accuracies of these models are given in Table 3. Among the various models tested, Decision Tree, Random Forest, and XGBoost demonstrated notably higher accuracies compared to the others on both datasets. Therefore, ensemble methods, namely voting, stacking, and bagging, were applied to these three models. Both hard voting and soft voting were employed. The models were stacked to leverage the strengths of multiple classifiers. Bagging was applied independently to each of the three models, utilizing 5-fold cross-validation and a total of 500 trees. The resulting accuracies obtained from these ensemble approaches on Dataset 1 and Dataset 2 are presented in Tables 4 and 5, respectively.
Voting with both hard and soft voting classifiers attained perfect accuracy (100%) on Dataset 1. Stacking also displayed good results, with a 96.2% accuracy on Dataset 1 and a 90.06% accuracy on Dataset 2. Bagging had lower accuracy than voting and stacking.
Ensemble approaches take advantage of the diversity and complementary features of individual models, resulting in higher accuracy. The perfect accuracy achieved by voting on Dataset 1 suggests strong agreement among the models, contributing to accurate classification. The relatively high accuracy of the stacked models verifies the efficiency of combining the predictions of the base models to produce greater performance. Bagging decreases the variance and instability of classification models by training individual models on diverse subsets of the dataset and aggregating their predictions. The slight decrease in accuracy from bagging could have resulted from the intrinsic randomness introduced during the resampling process, which may result in a minor trade-off between accuracy and model stability.

In the second method, the filtered dataset was subjected to further feature refinement using genetic selection. A genetic algorithm investigates several feature combinations to determine an optimal subset that achieves the maximum classification accuracy. The genetic selection process begins with the generation of an initial population of potential feature subsets, each of which represents a unique combination of features. These subsets were then analyzed using the same classification models, in addition to logistic regression. The results thus obtained are summarized in Table 6. It can be observed that the accuracy of the Gaussian Naïve Bayes classifier improved to 91.83% for Dataset 1 and 77.63% for Dataset 2 after genetic selection. This indicates that the genetic algorithm was effective in choosing relevant features that boosted the performance of this classifier. However, with the SVM classifier, the accuracy declined to 81.63% for Dataset 1 and improved only slightly for Dataset 2, with 77.63% accuracy. The accuracy of the decision tree model, measured using both entropy and the Gini index, also failed to improve significantly with genetic selection, and the same was true for the random forest and XGBoost classifiers. Logistic regression produced similar results to the rest, with an accuracy of 89.79% on Dataset 1 and 76.97% on Dataset 2. In summary, genetic selection had varying effects on the accuracy of the different classification models, which implies that the effectiveness of genetic algorithms may also depend on the properties of the classification model.
Precision may also be an important evaluation metric for the detection of PD. Precision is a performance metric that quantifies the accuracy of a classification model's positive predictions. It determines the proportion of true positive predictions (positive instances correctly identified) out of all predicted positive instances (true positives + false positives). By focusing on precision, we can ensure that the models accurately identify actual PD patients while also reducing the risks of misclassifying individuals in good health as having the disease, as that can lead to unnecessary fear, stress, and even medical interventions. A high precision score gives reliability and greater confidence to employ the models in PD diagnosis. The precision of the predictions made by the models with filter feature selection and genetic selection on both datasets is given in Tables 7 and 8, respectively. It is notable that the decision tree and XGBoost classifiers achieved perfect precision in identifying both PD patients and healthy individuals. The Gaussian Naïve Bayes and random forest classifiers attained perfect precision in identifying PD patients in Dataset 1, whereas the SVM classifier showed perfect precision in detecting healthy individuals in both datasets. It is also noteworthy that SVM was the only classifier that achieved perfect precision in identifying at least one category (PD patients or healthy individuals) in Dataset 2.
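For concreteness, class-wise precision of the kind reported in Tables 7 and 8 can be computed as in the short sketch below; the label vectors here are made-up placeholders, not values from the study.

```python
from sklearn.metrics import confusion_matrix, precision_score

# Placeholder labels: 1 = PD patient, 0 = healthy control.
y_true = [1, 1, 1, 0, 0, 1, 0, 1]
y_pred = [1, 1, 0, 0, 1, 1, 0, 1]

# Precision for each class separately, mirroring the class-wise values in Tables 7 and 8.
prec_pd      = precision_score(y_true, y_pred, pos_label=1)  # TP / (TP + FP) for PD
prec_healthy = precision_score(y_true, y_pred, pos_label=0)  # same, treating "healthy" as positive

print(confusion_matrix(y_true, y_pred))
print(f"precision (PD) = {prec_pd:.2f}, precision (healthy) = {prec_healthy:.2f}")
```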
After genetic selection on Dataset 1, all the models achieved relatively high precision in identifying PD patients, ranging from 84% to 94%.The Gaussian Naïve Bayes classifier was 100% precise in identifying healthy individuals.However, all the other models showed less precision in identifying healthy individuals (ranging from 62% to 80%) than PD patients.The same can also be observed in the case of Dataset 2 after genetic selection.All models showed higher precision in identifying PD patients (ranging from 74% to 79%) than in identifying healthy individuals (ranging from 0% to 78%).This suggests that identifying PD patients may be easier than identifying healthy people from the selected datasets.One possible reason for this could be the unequal distribution of PD patients and healthy individuals in both datasets.Both datasets contained information from a higher number of PD patients than healthy people, which made the models more proficient in learning the patterns and characteristics associated with PD.This imbalance in class distribution may have led to a bias towards PD patients during the process, potentially resulting in higher precision in identifying PD cases.
The overall results for Dataset 1 were better than those for Dataset 2. This disparity may be due to the difference in the number of features between the two datasets.Initially, Dataset 1 had only 24 features, which is substantially fewer than Dataset 2, which had 754 features.Even after applying the filter feature selection technique, a relatively large number of features (566 features) were preserved in Dataset 2 compared to Dataset 1.The presence of a larger feature space in Dataset 2 might have introduced additional complexity and made it more challenging for the models to discern the meaningful patterns associated with PD.This demonstrates that having a greater number of features may not necessarily translate to better results and may even generate noise or redundancy, resulting in poor model performance.
While previous research has established the efficacy of ensemble techniques [5,7], a comparative analysis with the current literature demonstrates a remarkable outperformance of ensemble learning methods, as exemplified by the perfect accuracy (100%) achieved by both hard and soft voting on Dataset 1.Moreover, the hard voting classifier achieved an accuracy of 91.53% on Dataset 2, surpassing the performance reported in the related literature [5].The introduction of genetic selection is a novel approach.While certain models responded differently to genetic selection, this nuanced approach illustrates the complicated interplay between feature selection strategies and classification outcomes.Following genetic selection, the GaussianNB classifier achieved the best accuracy for both datasets, with an accuracy of 91.83% for Dataset 1 and 77.63% for Dataset 2. The emphasis on precision ensures that PD patients are accurately identified while minimizing the risk of misclassifying healthy individuals, a crucial aspect for real-world clinical applications.Filter feature selection led to perfect (100%) precision in the predictions of decision trees and XGBoost classifiers.With genetic selection, there was an average precision of 88.42% in identifying PD patients and 72% in identifying healthy individuals in Dataset 1.In Dataset 2, these values were 77.14% for PD patients and 55.43% for healthy individuals.This holistic viewpoint illustrates the depth and breadth of this research, effectively establishing its relevance and impact on improving patient care and prognosis.Overall, this research not only benchmarks favorably against the prior literature but also offers a novel strategy for enhancing the accuracy and reliability of PD detection through voice data analysis.
Conclusions and Future Scope
This study aimed to develop an efficient method for the detection of PD from voice clips.A combination of filter feature selection, ensemble learning, and genetic selection was employed.The results of the study demonstrated the effectiveness of filter feature selection in streamlining the feature space and enhancing the efficiency of subsequent modeling tasks.By eliminating quasi-constant features, a significant number of irrelevant features were successfully removed, leading to high model accuracy.The application of ensemble learning techniques, such as voting, stacking, and bagging, further explored the classification performance of these models.Additionally, the genetic selection approach analyzed the precision of the classification models in identifying PD patients and healthy individuals.The models exhibited relatively high precision in identifying PD patients, while the precision in identifying healthy individuals was comparatively lower.Moreover, the comparison between Dataset 1 and Dataset 2 demonstrated the effect of feature space on model performance.Dataset 1, with a smaller number of features, yielded better results compared to Dataset 2, which had a larger feature space even after filter feature selection.
While this study contributes significantly to the field of PD detection, a few limitations warrant careful consideration. The precision analysis performed in this study reveals a potential bias toward recognizing PD patients more accurately than healthy people. This bias stems from the inherent class imbalance within the datasets, where PD patients are overrepresented compared to healthy individuals. This discrepancy could lead to a skewed learning process, affecting the models' generalizability when applied to larger, more balanced populations. Furthermore, the variation in performance between Dataset 1 and Dataset 2 underscores the sensitivity of model outputs to the dimensionality of the feature space. The larger feature set of Dataset 2, even after filter feature selection, suggests the possibility of increased noise or redundancy, thereby affecting model robustness and performance.
Future studies could explore the use of sampling techniques, such as oversampling or undersampling, to balance the datasets. This would help in achieving better performance and addressing the bias towards the majority class. The current study utilized specific datasets for model development and evaluation. Future research could involve testing the developed models on external datasets or real-world data to assess their generalizability and robustness. This would provide insights into the practical applicability of the proposed methods and their performance across different populations. By addressing these future research areas, we can further advance the field of PD detection from voice data and contribute to the development of accurate, reliable, and clinically applicable diagnostic tools.
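As a hedged illustration of the resampling idea, and assuming the imbalanced-learn package, the minority (healthy) class could be randomly oversampled as shown below; the synthetic data stand in for the real voice features and roughly mimic the 3:1 class ratio of Dataset 2.

```python
from collections import Counter

from imblearn.over_sampling import RandomOverSampler  # imbalanced-learn package
from sklearn.datasets import make_classification

# Synthetic stand-in for the imbalanced PD/healthy data (roughly a 3:1 ratio).
X, y = make_classification(n_samples=252, weights=[0.25, 0.75], random_state=0)
print("class counts before:", Counter(y))

X_bal, y_bal = RandomOverSampler(random_state=0).fit_resample(X, y)
print("class counts after: ", Counter(y_bal))
```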
Figure 1 .
Figure 1. Method 1, where filter feature selection was applied to five different classification algorithms, followed by ensemble learning methods.
Figure 2 .
Figure 2. Method 2, where filter feature selection is followed by a genetic algorithm before the data are passed to the classification models.
Table 1 .
Summary of studies that utilized machine learning for PD detection.
Table 2 .
List of subjects with sex, age, Parkinson's stage, and number of years since diagnosis. Entries are labeled "n/a" for healthy subjects, for whom Parkinson's stage and years since diagnosis are not applicable. "H&Y" refers to the Hoehn and Yahr PD stage, where higher values indicate a greater level of disability.
Table 3 .
Results on applying filter feature selection.
Table 4 .
Results on applying ensemble learning methods to Dataset 1.
Table 5 .
Results on applying ensemble learning methods to Dataset 2.
Table 6 .
Results from applying genetic selection.
Table 7 .
Precision in applying filter feature selection.
Table 8 .
Precision in applying genetic algorithms.
Lithium ameliorates rat spinal cord injury by suppressing glycogen synthase kinase-3β and activating heme oxygenase-1
Glycogen synthase kinase (GSK)-3β and related enzymes are associated with various forms of neuroinflammation, including spinal cord injury (SCI). Our aim was to evaluate whether lithium, a non-selective inhibitor of GSK-3β, ameliorated SCI progression, and also to analyze whether lithium affected the expression levels of two representative GSK-3β–associated molecules, nuclear factor erythroid 2-related factor-2 (Nrf-2) and heme oxygenase-1 (HO-1) (a target gene of Nrf-2). Intraperitoneal lithium chloride (80 mg/kg/day for 3 days) significantly improved locomotor function at 8 days post-injury (DPI); this was maintained until 14 DPI (P<0.05). Western blotting showed significantly increased phosphorylation of GSK-3β (Ser9), Nrf-2, and the Nrf-2 target HO-1 in the spinal cords of lithium-treated animals. Fewer neuropathological changes (e.g., hemorrhage, inflammatory cell infiltration, and tissue loss) were observed in the spinal cords of the lithium-treated group compared with the vehicle-treated group. Microglial activation (evaluated by measuring the immunoreactivity of ionized calcium-binding protein-1) was also significantly reduced in the lithium-treated group. These findings suggest that GSK-3β becomes activated after SCI, and that a non-specific enzyme inhibitor, lithium, ameliorates rat SCI by increasing phosphorylation of GSK-3β and the associated molecules Nrf-2 and HO-1.
Introduction
Spinal cord injury (SCI) is characterized by both mechanical and inflammatory response-induced damage [1,2]. The mechanical forces that impact the spinal cord at the time of antioxidant capacity or purging of reactive oxygen species (ROS) can protect against SCI.
Lithium, a non-selective inhibitor of glycogen synthase kinase (GSK)-3β, has been used as a mood stabilizer in patients with bipolar disorder, and possibly acts as a neuroprotectant [6]. Lithium has also been used to ameliorate chronic experimental autoimmune encephalomyelitis (EAE) in mice expressing anti-myelin oligodendrocyte glycoprotein antibodies [7], and acute monophasic EAE in the rat [8]. In addition, lithium is used to protect neurons after SCI, reducing post-injury inflammation [9]. Lithium promotes the production and release of neurotrophins, stimulates neurogenesis, enhances autophagy, and inhibits apoptosis [10]. However, any neuroprotective effect of lithium in terms of activating antioxidative systems, such as the nuclear factor erythroid 2-related factor 2 (Nrf-2)/heme oxygenase-1 (HO-1) mechanism requires further study in models of SCI.
Oxidative stress-induced cell damage is attributable to an imbalance between reactive oxygen free radical production and the efficacy of the anti-oxidant system [11]. An increase in cellular antioxidant capacity or ROS removal can ameliorate various diseases and injuries, including SCI [12]. Nrf-2 and its downstream target (HO-1), along with other antioxidant enzymes including superoxide dismutase and glutathione peroxidase play important roles in protecting various tissues and cells against oxidative stress both by regulating the expression levels of cytoprotective and antioxidant genes [12] and reducing inflammation [13]. Normally, Nrf-2 is associated with the Kelch like-ECH-associated protein 1 (Keap1) in the cytoplasm; upon stimulation, Nrf-2 is translocated to the nucleus where it plays essential roles in the transcription of various phase II and/or antioxidant enzyme genes [14]. Targeting of Nrf-2/HO-1 after SCI suppresses oxidative stress and exerts a neuroprotective effect [15]. However, any relationship between GSK-3β and Nrf-2 expression in the SCI rat model remains unclear.
In the present study, we investigated the neuroprotective effects of lithium, a non-selective inhibitor of GSK-3β, in rats where SCI was induced by clip compression. We explored the underlying protective molecular mechanisms via semiquantitative analysis of Nrf-2 and HO-1 levels.
Animals
We used female Sprague-Dawley rats (200-250 g, 7-8 weeks of age) (OrientBio Inc., Seongnam, Korea). All experimental procedures were conducted in accordance with the Guidelines for the Care and Use of Laboratory Animals of Jeju National University. The animal protocols also conformed to current international laws and policies (National Institutes of Health [NIH] Guide for the Care and Use of Laboratory Animals, NIH Publication No. 85-23, 1985, revised 1996). Every effort was made to minimize the number of animals used and their suffering.
Surgical procedures
Clip compression injury was inflicted using a modification of previously published methods [1,16,17]. Animals were anesthetized via intramuscular injection of Zoletil 50 (Virbac, Carros, France) and subjected to laminectomy at T9/ T10. Immediately thereafter, the spinal cord was compressed with a vascular clip (Stoelting, Wood Dale, IL, USA) applied vertically to the exposed spinal cord at an occlusion pressure of 15-20 g for 1 minute. After compression, the muscles and skin layers were closed. Sham-operated control rats underwent laminectomy only. Spinal cord tissues from the surgical sites were harvested and either fixed in 4% (v/v) paraformaldehyde in phosphate-buffered saline (PBS; pH 7.2) for histological examination or stored at −80°C prior to Western blot analysis.
Lithium treatment
To assess the effects of lithium on SCI, rats were divided into the following three treatment groups (10 animals/group): sham control, vehicle, and lithium. To rapidly elevate the lithium level, the first dose of lithium chloride (80 mg/kg/day, Sigma-Aldrich) was intraperitoneally injected into the lithium-treated group 30 minutes after surgery; identical doses were given on each of the next 3 days. Lithium is nontoxic to rats at this level; the serum levels are equivalent to those in human patients [7]. As in our previous study, serum lithium concentrations were measured using a lithium assay kit (catalog number LI01ME, MG Metallogenics, Chiba, Japan) [8]. The body weights and behavioral features of all rats were checked daily.
Behavioral tests and histological examination
Locomotor function after SCI was examined using the Basso, Beattie, and Bresnahan (BBB) rating scale. All evaluations were performed in a double-blinded manner; average scores were calculated for each group and used to compare the severity of hind-limb paralysis.
Immunohistochemistry
To assess early responses to treatment, we compared the microglial features of the vehicle-and lithium-treated groups at 4 DPI, as microglial reactions are prominent within the first week after SCI [18][19][20]. We immunostained the spinal cord for Iba-1 (a marker of activated cord microglia and macrophages) as described previously [13]. Briefly, after incubation with matched blocking serum (10% [v/v] normal goat serum in PBS, Vectastain Elite ABC kit, Vector Laboratories, Burlingame, CA, USA), the samples were incubated with rabbit anti-Iba-1 (Iba-1, 1:800, Wako Pure Chemical Industries Ltd.) for 1 hour at room temperature. After three washes in PBS, we proceeded as recommended by the the manufacturer; the peroxidase reaction was developed using a diaminobenzidine substrate kit (Vector Laboratories).
Western blot analysis
We performed Western blotting as described previously [21]. Briefly, spinal cord tissue was homogenized in TNN lysis buffer containing protease and phosphatase inhibitors (1 mM Na 3 VO 4 , 1 mM phenylmethanesulphonylfluoride, 10 μg/ml aprotinin, 10 μg/ml leupeptin), centrifuged at 10,900 ×g for 20 minutes at 4°C, and the supernatant was harvested. The cytosolic and nuclear fractions were separated using NE-PER Nuclear and cytoplasmic extraction reagents as recommend-ed by the manufacturer (Thermo Scientific, Rockford, IL, USA). Proteins (40 μg) were subjected to 10% (w/v) sodium dodecyl (or lauryl) sulfate polyacrylamide gel electrophoresis and transferred to nitrocellulose membranes (Schleicher and Schuell, Keene, NH, USA). The membranes were blocked by incubation with 5% (v/v) skim milk in Tris-buffered saline for 1 hour and then incubated with primary antibodies (antip-GSK-3β, 1:1,000 dilution; anti-GSK-3β, 1:1,000 dilution; anti-Nrf-2, 1:1,000 dilution; and anti-HO-1, 1:1,000 dilution) for 2 hours. After washing, the membranes were incubated with the appropriate secondary antibodies for 1 hour. Bound antibodies were detected using a chemiluminescent substrate (in the WEST-one kit, iNtRON Biotech, Seongnam, Korea) according to the manufacturer's instructions. After imaging, the membranes were stripped and reprobed using an antiβ-actin antibody (1:10,000 dilution). The optical density (OD)/mm 2 of each band was measured using ImageJ software (NIH, Bethesda, MD, USA). To detect Iba-1, we used the Wes system (ProteinSimple, San Jose, CA, USA) as instructed by the simple western user manual [22]. All electrophoresis and immunoblotting steps were performed using a fully automated capillary system.
Statistical analysis
All measurements are averages of three independent experiments. All values are presented as means±standard error of the mean. The results were analyzed using a one-way analysis of variance (ANOVA) followed by Student-Newman-Keuls post hoc testing for multiple comparisons. A P-value of < 0.05 was considered to reflect statistical significance.
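For illustration only, a minimal Python sketch of a one-way ANOVA of the kind described here is shown below. The group values are invented placeholders (the study's real data appear in the figures), and the Student-Newman-Keuls post hoc step is not shown because it is not part of SciPy; a dedicated post hoc package or pairwise comparisons with an appropriate correction would be needed for that step.

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical BBB scores at one time point for the three groups (values are invented).
sham    = np.array([21.0, 21.0, 21.0, 20.0, 21.0])
vehicle = np.array([6.5, 7.0, 6.0, 7.5, 7.0])
lithium = np.array([10.0, 11.0, 10.5, 9.5, 10.5])

f_stat, p_value = f_oneway(sham, vehicle, lithium)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")
```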
Lithium-mediated behavioral changes
Locomotor function began to recover from 3 DPI onward. By 8 DPI, the BBB score was significantly higher in the lithium-treated group (10.4±0.52, P<0.05) compared with that of the vehicle-treated group (6.78±0.55); the improvement was maintained until 14 DPI (Fig. 1).
Histological findings
The sham-operated group exhibited no mechanical change in the core region of the spinal cord (data not shown), as found previously [1,17]. The cords of vehicle-treated rats exhibited reduced cellularity and edema in longitudinal sections of core lesions (Fig. 1A) (Fig. 2C). In contrast, severe edema and hemorrhage were evident in the lithium-treated group (Fig. 2B). Additionally, accumulation of round-type inflammatory cells (Fig. 2D); activated microglia; and small, round vacuoles were evident in the core regions of spinal cords of the lithium-treated group by 4 DPI (Fig. 2D).
Microglial reactions and infiltration of inflammatory cells
To assess microglial reactions and inflammatory cell infiltration, we immunohistochemically stained for Iba-1 and used the automated capillary-based Wes system to quantify the protein levels. Iba-1-positive microglial cells and macrophages were evident in all cord regions, including the white and gray matter (Fig. 3). The Iba-1 immunoreactivity in the lithium-treated group (relative OD) was significantly less than that in the vehicle-treated group (50.93±6.39%, relative OD; P<0.05) (Fig. 3C).
Discussion
Many types of SCI models have been developed using rodents [4]. To obtain reliable data associated with impact power, computerized data analyses have been applied using the New York University impact device [23], Ohio State University impact device [18], and MASCIS impactor [19]. Alternatively, for the present study, a clip compression technique was employed to induce SCI because there is a progressive improvement in BBB scores in rats following the use of this technique [2,21,24]. Additionally, the clip compression technique is an alternative choice for the induction of SCI in the absence of digital analysis systems, such as MASCIS [23].
We first showed that lithium exerted significant antiinflammatory and anti-oxidative effects by inhibiting inflammatory cell infiltration, microglial activation, and Nrf-2 translocation (thus enhancing HO-1 synthesis), significantly improving functional recovery in rats. Earlier, we showed that lithium treatment reduced the extent of the Iba-1-positive area of the spinal cord, and reduced the serum tumor necrosis factor α level, in an experimental rat model of EAE [8]. Lithium suppresses both the level of circulating pro-inflammatory mediators and the number of central nervous system microglial cells, and enhances locomotor function in rats with SCI. A previous report indicated that GSK-3β, a negative regulator of Nrf-2, influenced the relative Nrf-2 proportions in the cytosol and nucleus [19]. Here, we show that lithium-mediated inhibition of GSK-3β induced nuclear Nrf-2 accumulation, thus activating HO-1, in rats with SCI.
Lithium chloride reduces the disruption in the bloodspinal cord barrier and promotes the recovery of neurological function after SCI. This occurs partly due to decreases in the activation of endoplasmic reticulum stress, which plays an important role in SCI by inhibiting GSK-3β activation [20]. In the present study, lithium inhibited GSK-3β phosphorylation and thus may have reduced inflammation in the spinal cord samples subjected to clip compression. Additionally, the regulation of GSK-3β activity is generally mediated by phosphorylation of the amino-terminal domain (at Ser9) by any of several kinases, including Akt, protein kinase A, and/or protein kinase C, which inactivate the enzyme [21]. Furthermore, toll-like receptors mediate GSK-3β phosphorylation at Ser9 via the regulation of pro-and anti-inflammatory cytokines [22]. Thus, it is possible that the inhibition of GSK-3β relieves the clip compression SCI in rats.
Various drugs exert antioxidative neuroprotective activities in the spinal cord by activating Nrf-2 such as asiatic acid, rosmarinic acid and resveratrol [12,[25][26][27]. In Wistar rats, carnosol protects against SCI-induced oxidative stress and inflammation by modulating nuclear factor-κB, cyclooxygenase-2, and Nrf-2 levels [26]. Several drugs associated with Nrf-2 activation have been used to treat rat SCI; however, this is the first report to show that lithium-mediated inhibition of GSK-3β protects against SCI by regulating Nrf-2 translocation and subsequent activation of the HO-1 target gene. Recent evidence indicates that GSK-3β plays a critical role in regulating and degrading Nrf-2 in a Keap1-independent manner [28]. Furthermore, it was postulated that lithium exerts neuroprotective effects via the activation of Nrf-2 in spinal cordinjured rats.
In conclusion, our findings suggest that lithium ameliorates rat paralysis caused by SCI, and that the molecular mechanism involves inhibition of GSK-3β, increased nuclear translocation of Nrf-2, and subsequent upregulation of HO-1.
Giant pilomatrixoma in the infraclavicular region following an insect bite
Abstract Pilomatrixoma is a benign skin tumor typically presenting as a hard, slow-growing mass arising from hair follicle matrix cells. While most encountered in children, giant pilomatrixoma seldomly presents in adults. In the present case, a large subcutaneous, nonpainful and slow-growing mass was discovered in the infraclavicular region of a 52-year-old male. Biopsy confirmed the diagnosis of giant pilomatrixoma. Despite its benign nature, tumor size and location can result in significant morbidity and cosmetic deformity. This case highlights the importance of considering pilomatrixomas in patients with a slow-growing mass, especially after an inciting event, such as an insect bite. Timely diagnosis and proper management can result in successful tumor removal with minimal cosmetic compromise.
INTRODUCTION
Pilomatrixomas are benign skin tumors originating from hair follicles, with minimal malignant potential in sporadic cases [1]. Some cases are linked to genetic syndromes like Gardner syndrome, myotonic dystrophy and Rubinstein-Taybi syndrome. Although the exact etiology is unknown, somatic mutations in the CTNNB1 gene have been reported in the most isolated pilomatrixomas [1]. In addition, factors such as trauma, surgery and vaccines have been associated in some cases [2].
Typically, pilomatrixomas present as an asymptomatic, firm, solitary, well-circumscribed tumor localized to the dermis or subcutaneous tissue, which usually arise in the head and neck regions. Tumors are described as giant when the tumor diameter exceeds 5 cm in its widest dimension [3][4][5]. Incisional biopsy remains the gold standard for diagnosis followed by histopathological analysis [4,6,7]. Once the diagnosis is established, surgical excision is most reliable with low recurrence rates [8]. Wide excision margins (i.e. 1-2 cm margins) are recommended in aggressive cases to reduce the risk of relapse [9].
The following case presents a giant pilomatrixoma arising in the infraclavicular region, after an insect bite, in an adult patient with successful surgical excision.
CASE PRESENTATION
A 52-year-old male presented for a mass in the left infraclavicular region which had been gradually increasing in size for the past two years. The patient reports that a few weeks prior to noticing the mass, a mosquito, described as having legs with white stripes, bit him in the same region. Upon physical exam, a large mass was palpated in the infraclavicular area. The mass was nontender, mobile and firm. The overlying skin was mildly erythematous with small, partially healed ulcers (Fig. 1). There were no bruits or pulsations. Treatment options were discussed with the patient who elected to proceed with surgery based on size and cosmetic deformity.
An en bloc resection of the mass was undertaken as follows. The patient was placed under general anesthesia, and the incision site was prepped with a local anesthetic. An elliptical incision was performed, followed by subcutaneous dissection. There was no infiltration past the subcutaneous layer as there was no need for fascial or muscular dissection. The mass was resected with 2 cm-wide margins and sent to pathology (Fig. 2). The incision edges were approximated using a two-layer closure. There were no complications in the early postoperative period.
Macroscopically, the mass measured 20.1 * 13 * 5.8 cm. When incised, it showed a combination of yellowish areas with a friable calcified appearance and solid uncalcified whitish areas (Fig. 3). Histologically, sections were characterized by abundant ghost cells mixed with basophilic cells that imitate the cells of the basal layer of the hair follicle in the dermis (Fig. 4). There were extensive areas of calcification, necrosis and foreign body reaction, but no evidence of malignancy. These findings were most consistent with a giant pilomatrixoma.
DISCUSSION
Our case matched the epidemiological profile of pilomatrixomas reported in the literature, which indicates a bimodal age distribution for this tumor, with the highest peak in the first and second decades and a second peak between 50 and 65 years of age [8]. According to Guinot-Moya et al., pilomatrixomas are slightly more common in males [8,10], contrary to the findings of other studies [11,12]. In addition, some studies have observed a familial pattern of presentation [13]. However, in this case, no familial antecedent was reported.
The largest reported giant pilomatrixoma measured 34 cm × 21 cm × 17 cm [14]; it was resected from the posterior thorax of a healthy pediatric patient. Another giant pilomatrixoma presented in a 52-year-old man on the posterior thorax and measured 24 cm × 21 cm × 9 cm [15]. In contrast, the mass in our case measured 20 cm × 13 cm and was located on the anterior thorax, precisely in the infraclavicular region. Although these cases report masses on the thorax, pilomatrixomas usually develop in the head and neck regions [1,14,15].
Our patient has a history of a mosquito bite a few weeks prior to noticing the mass. According to the patient's description, the mosquito could have been Aedes spp, the vector responsible for transmitting dengue, Chikungunya, Zika and other arboviruses. Like most cases described in the literature, the tumor in our case consisted of a solitary lesion. On the other hand, pilomatrixomas may be associated with Gardner syndrome, myotonic dystrophy, Steinert syndrome, xeroderma pigmentosum, Turner's syndrome or sarcoidosis [2,8,13].
Pilomatrixomas lack specific clinical symptoms, which delay the diagnosis, primarily when it arises in unusual areas, especially following an inciting event regarded as innocuous. The accuracy of the clinical diagnosis of pilomatrixoma was 28.9% and 46%, according to Ciucā et al. and Pirouzmanesh et al., respectively [2,12]. Imaging can help in the differential diagnosis by identifying calcifications, thus ruling out lymphatic and vascular tumors. However, biopsy remains the gold standard for diagnosis. The pilomatrixoma presents typically as a lobulated pattern, conformed of three cell populations arranged in a circular configuration with basaloid cells in the periphery, then transitional cells and enucleated shadow cells (ghost cells) in the center. Other characteristics include calcifications primarily found in the center of the mass and foreign body giant cells arising following a granulomatous response to ghost cells [3].
Giant pilomatrixoma in the infraclavicular region | 3
Surgical excision is the treatment of choice due to a low relapse rate of 0.3% and appreciable cosmetic results [12]. In addition, malignant transformation is extremely rare and has been reported in elderly subjects with a history of multiple excision attempts [6,8].
Pilomatrixomas are benign, slow-growing tumors; however, they can grow to giant sizes leading to skin lesions and cosmetic deformity, as in our case. Therefore, a high index of suspicion is required to make an accurate diagnosis, especially when encountered in unusual locations and with an associated indolent history.
New Delhi metallo-β-lactamase-1-producing Klebsiella pneumoniae isolates in hospitalized patients in Kashan, Iran.
Background and Objectives: New Delhi metallo-ß-lactamase (NDM) is a newly emerging metallo-ß-lactamases, which can destroy all β-lactams including carbapenems. Therefore, this study aimed at evaluating New Delhi metallo-ß-lactamase-1–production in clinical isolates of Klebsiella pneumoniae in Kashan, Iran. Materials and Methods: In a cross-sectional study, 181 K. pneumoniae isolates were collected from clinical samples of patients, who referred to Shahid Beheshi hospital in Kashan during November 2013 and October 2014. Antimicrobial susceptibility patterns were determined using disk diffusion method, according to CLSI guidelines. Metallo-ß-lactamase (MBL) production was identified among imipenem-resistant K. pneumoniae isolates using imipenem-EDTA double disk synergy test (EDTA-IMP DDST). PCR method and sequencing were used to detect integron Class 1 and blaNDM-1 gene. Statistical analyses were performed using SPSS software Version 16. Results: Of the 181 K. pneumoniae isolates, 36 (19.9 %) were imipenem-resistant strains. A total of 28 out of 36 (77.7%) imipenem-resistant K. pneumoniae isolates were identified as MBL producer strains. Also, 150 (82.9%) K. pneumoniae isolates carried intI1 gene, and 20 (11.1%) K. pneumoniae isolates harbored blaNDM-1 gene. Conclusion: Our study revealed a high frequency of MBL production and the presence of blaNDM-1 among K. pneumoniae strains, especially among hospitalized patients, which is alarming. Moreover, the presence of Class 1 integrons in all multi-drug resistant K. pneumoniae isolates highlights the risk of rapid spread of the resistance genes, especially in clinical settings.
INTRODUCTION
Carbapenems includes a Class of ß-lactams that can kill most bacteria and are recommended for treatment of infections caused by extended-spectrum β-lactamase (ESBL)-producing Enterobacteriaceae, mainly K. pneumoniae (1,2). Carbapenem resistance due to the carbapenemase enzymes is one of the complicated health issues worldwide because carbapenemase producing clinical isolates simultaneously show resistance to carbapenems and to all other ß-lactam antibiotics (3). Metallo-ß-lactamases (MBLs) belong to Class B ß-lactamase and need zinc for their activity (4). MBLs have a wide spectrum of β-lactamase activity and can affect a wide range of β-lactam antibiotics including carbapenems (4). Bulk of the resistance genes including carbapenemases in K. pneumoniae are carried on Class 1 integrons. It has been documented that metallo-β-lactamases are associated with gene cassettes carried on integrons (5). Integrons as transferable genetic elements facilitate the transfer of resistance genes among different bacteria (6).
Among the newly identified metallo-β-lactamases, New Delhi metallo-ß-lactamase (NDM) is a recently described enzyme conferring resistance to all ß-lactams, especially carbapenems except monobactams (7). Since its first identification in New Delhi, India, in 2008, NDM has been reported by different countries around the world as an important health concern (8).
The bla NDM-1 gene, responsible for producing NDM in clinical isolates, is carried on transferable genetic elements, leading to rapid dissemination of these genes (7). There is limited data on NDM production and carriage of Class 1 integrons in clinical isolates of K. pneumoniae in our regain. Thus, the present study was conducted to identify the bla NDM metallo-β-lactamase gene and the presence of Class 1 integrons in clinical isolates of K. pneumoniae in Kashan, Iran.
RESULTS
Of the 181 K. pneumoniae isolates of hospitalized patients, 78 (43.1%) were collected from male and 103 (56.9%) from female patients. The patients' age ranged from 1 to 97, and their mean age was 50.36 years.
The antibiotic resistance profiles of the isolates are summarized in the corresponding table. A total of 150 (82.9%) of the K. pneumoniae isolates carried the intI1 gene and were Class 1 integron-positive. Moreover, 20 (11.1%) K. pneumoniae isolates harbored the blaNDM-1 gene and were recognized as NDM-producing isolates. blaNDM-1 carriage was found in isolates from all hospital wards except the Critical Care Unit (CCU), and the ward that most frequently harbored blaNDM-1-positive isolates was the intensive care unit (ICU) (Table 2). The nucleotide sequences of the PCR products of the blaNDM-1 (GenBank accession number: KP340793.1) and intI1 genes were identical to those deposited in GenBank. The statistical analysis revealed a significant association (P < 0.05) between NDM production and the hospitalized patients' ward, sample type, and admission status (Table 2).
DISCUSSION
Carbapenem resistant K. pneumoniae strains are increasing, and infections due to these strains are accompanied with higher mortality, length of hospitalization, and cost of treatment (11).
Our results revealed that 77.7% of imipenem-resistant K. pneumoniae strains produced MBL. Different frequencies of MBL production have been reported among K. pneumoniae strains (8). In contrast with our results, in a study conducted by Fazeli et al. in Isfahan, 10.2% of carbapenem-resistant K. pneumoniae isolates have been reported to be MBL producers (12). In another study in Greece, the prevalence of metallo-beta-lactamases in K. pneumoniae isolates from blood was 50% (13). The reason for the diverse prevalence of MBL production among K. pneumonia strains in different studies may be due to the use of different methods or different clinical samples. In addition, the discrepancy of phenotypic and genotypic features of bacterial isolates and factors such as cultural-economic status in diverse geographical areas could also be the reason. The results of our PCR assays demonstrated that 11.1% of K. pneumoniae isolates carried bla NDM-1 gene and produced NDM. In accordance with our findings, in a study conducted in Isfahan, 12% of carbapenem-resistant K. pneumoniae isolates were expressed New Delhi metallo-beta-lactamase (12), whereas, the prevalence of bla NDM-1 among Enterobacteriaceae in countries such as India and Kuwait has been reported to be higher (7,14). Although the bla NDM-1 carrying K. pneumoniae strains are not very common in Iran, the results of this study is alarming. In most studies, NDM producer K. pneumoniae strains are resistant to most generally used antibiotics including β-lactams, β-lactamase inhibitors, fluoroquinolones, aminoglycosides and carbapenems (9,15). The analysis of antibiotic resistance profiles of NDM producer K. pneumoniae strains in this study revealed that all NDM-producing K. pneumoniae isolates were multi-drug resistant strains, with resistance to almost all tested antibiotics; and this is in agreement with the results of other studies (12,15). According to the literature, New Delhi metallo-beta-lactamase is a kind of beta-lactamase, which confers resistance to carbapenems and all β-lactam antibiotics except monobactams, such as aztreonam.
In this study, in agreement with other reports, NDM-producing K. pneumoniae strains showed resistance to aztreonam along with the other tested antibiotics (7, 12). The resistance to aztreonam in these bla NDM-1-positive K. pneumoniae strains is probably due to other mechanisms of resistance. The association between metallo-β-lactamase genes, including bla NDM-1, and mobile genetic elements such as plasmids and integrons has been documented (12, 13). We found that all multi-drug resistant K. pneumoniae isolates carried Class 1 integrons. The concomitance of NDM genes and mobile genetic elements, especially Class 1 integrons, facilitates their widespread propagation, which is a serious threat to the management of hospital-acquired infections. Furthermore, in this study, all NDM-positive K. pneumoniae strains were isolated from hospitalized patients, which, similar to the report by Jamal et al., indicates nosocomial acquisition (7). Also, half of our NDM-producing K. pneumoniae strains were identified among patients hospitalized in the ICU, where patients commonly have underlying diseases and experience long-term hospitalization and prolonged antibiotic treatment, which facilitate the selection and spread of these resistant strains.
CONCLUSION
This study revealed a high frequency of MBL production and the presence of bla NDM-1 among K. pneumoniae strains, especially among hospitalized patients, which is highly alarming. Also, the presence of Class 1 integrons in all multi-drug resistant K. pneumoniae isolates highlights the risk of rapid spread of the resistance genes, especially in clinical settings.
Resuspension, Redistribution, and Deposition of Oil-Residues to Offshore Depocenters After the Deepwater Horizon Oil Spill
The focus of this study was to determine the long-term fate of oil-residues from the 2010 Deepwater Horizon (DwH) oil spill due to remobilization, transport, and re-distribution of oil residue contaminated sediments to down-slope depocenters following initial deposition on the seafloor. We characterized hydrocarbon residues, bulk sediment organic matter, ease of resuspension, sedimentology, and accumulation rates to define distribution patterns in a 14,300 km2 area southeast of the DwH wellhead (1,500 to 2,600 m water depth). Oil-residues from the DwH were detected at low concentrations in 62% of the studied sites at specific sediment layers, denoting episodic deposition of oil-residues during 2010–2014 and 2015–2018 periods. DwH oil residues exhibited a spatial distribution pattern that did not correspond with the distribution of the surface oil slick, subsurface plume or original seafloor spatial expression. Three different regions were apparent in the overall study area and distinguished by the episodic nature of sediment accumulation, the ease of sediment resuspension, the timing of oil-residue deposition, carbon content and isotopic composition and foram fracturing extent. These data indicate that resuspension and down-slope redistribution of oil-residues occurred in the years following the DwH event and must be considered in determining the fate of the spilled oil deposited on the seafloor.
INTRODUCTION
A large percentage of the oil released during the 2010 Deepwater Horizon (DwH) spill in the northern Gulf of Mexico (GoM) was either chemically or naturally dispersed (Lehr et al., 2010; Lubchenco et al., 2012) and/or settled to the seafloor associated with the Marine Oil Snow Sedimentation and Flocculent Accumulation (MOSSFA) event (Daly et al., 2016). MOSSFA consisted of oil residues mixed with organic and inorganic particles, including bacteria, phytoplankton, microzooplankton, zooplankton fecal pellets, detritus, and terrestrially derived lithogenic particles (Daly et al., 2020). The observed MOSSFA event transported a significant portion of the released oil to the seafloor in a short time (Passow et al., 2012; Joye et al., 2013; Ziervogel et al., 2016; Romero et al., 2017). Increased short-term (months) sedimentation rates reflected the rapid transport of sediments to the seafloor associated with the MOSSFA event (Romero et al., 2015, 2017; Yan et al., 2016; Larson et al., 2018). Layers of detectable oil-contaminated sediment were deposited on the seafloor directly below the extent of the surface slick or under the southwest deep-sea plume (Brooks et al., 2015; Chanton et al., 2015; Romero et al., 2015, 2017; Larson et al., 2018). However, a sampling bias toward seafloor locations under the surface slicks and the deep plumes (Camilli et al., 2010; Diercks et al., 2010) exists, and sediments were sampled at only a few of the locations outside these areas. Lehr et al. (2010) estimated that 11-30% of the released oil was unaccounted for or listed as "other" (i.e., difficult to measure/quantify, including oil on beaches, in tar balls, in shallow subsurface mats, and in deep-sea sediments). While the sedimentation of oil is often discussed, for example, Jernelöv and Lindén (1981) speculated that 25% of the 475,000 metric tons of oil released from the 1979 Ixtoc spill went to the seafloor, its role has never been properly assessed. Valentine et al. (2014) estimated that 1.8 to 14.4% of the DwH oil remained in subsurface plumes and was deposited on the seafloor around the wellhead, while Chanton et al. (2015) indicated that between 0.5 and 9.1% of the total oil released by the DwH spill reached the seafloor in continental shelf to deep-sea areas. Covering a larger area from the coast to the deep sea, Romero et al. (2017), using 158 hydrocarbon compounds, calculated that 21 ± 10% of the total oil released and not recovered by the DwH spill was deposited on ∼110,000 km2 of the seafloor, of which 32,648 km2 corresponded to offshore deep-sea areas. Similarly, Schwing et al. (2017) indicated that increased sediment deposition occurred throughout an area of 12,805-35,425 km2 in deep-sea sediments. In contrast, the surface oil slick covered a more extensive area, from 141,581 km2 for 1 day, to 42,023 km2 for 10 days and 14,357 km2 for 30 days (Crowsey, 2013). Altogether, these studies define the spatial footprint of the MOSSFA event, including areas that lacked detectable oil. However, to date, no study has addressed the potential role of post-depositional processes in changing the initial distribution of MOSSFA in sediments.
Natural heterogeneity of bottom topography and circulation processes are key drivers in redistributing materials to deeper areas in the GoM by erosion, transport, and deposition of contaminated sediments beyond the surface extent of the once existing oil surface slick or the subsurface plume(s). The northern GoM deep seafloor depositional environment is highly diverse, resulting in non-homogeneous distribution and burial of material arriving from the overlying water column. Seafloor sedimentation is affected by currents, bottom morphology, and physical forcing events of different temporal and spatial scales that rework deposited material within the Bottom Nepheloid Layer (BNL) (Lampitt, 1985; Turnewitsch et al., 2004, 2013, 2017). Large-scale gravity flow events (e.g., turbidity currents) can move large quantities of sediment downslope, regardless of overlying currents, following gravitational pull and the path of least resistance on the seafloor. Mobilized sediments flow along pathways based on the highly variable seafloor morphology, with its hills, slopes and channels, allowing for erosion and deposition beyond the spatial extent of the once existing oil slick or subsurface plume(s). These processes can lead to a redistribution of sediments and potential sediment focusing in deep-sea depocenters that were targeted in this study. Previous studies in the deep eastern Gulf of Mexico show redistribution of sediments by gravity flow processes to be common, specifically on the Mississippi Fan, adjacent to our study area (Cremer and Stow, 1986; Normark et al., 1986; Stow et al., 1986; Thayer et al., 1986). Based primarily on sedimentary structures, the dominant processes are interpreted to be low-density, fine-grained turbidity currents deposited very rapidly in channel and overbank settings, as well as slides and slumps. Turbidity currents likely occur up to 5-6 times per year, possibly in pulses, and accumulation lasts hours to days before overlying sediments are deposited, all of which may explain the lack of extensive bioturbation, the exceptionally high rates of accumulation, and the excellent preservation.
We tested the hypothesis that DwH-impacted sediments, initially deposited on the seafloor beneath the surface oil slicks and plumes, were subsequently remobilized and transported down-slope in the years following the DwH spill. Sediment accumulation on the seafloor is not a simple one-time process during which material settles to the seafloor, and not all the material accumulates (i.e., is buried) at the location of the initial deposition. Sediment cores were collected to determine if the sediments at the coring sites were impacted by the DwH spill. The distribution of contaminated sediments mapped in the years following the spill is likely an underestimation (e.g., Valentine et al., 2014; Chanton et al., 2015; Romero et al., 2017) due to resuspension and redistribution processes (e.g., Diercks et al., 2018) following their initial deposition on the seafloor. Characterization of the spatial distribution of oil-contaminated sediments along the continental slope at depths >1,500 m is critical for understanding the long-term fate of the spilled oil from the DwH in deep-sea areas.
Watershed Modeling and Site Selection
Based on the recently published high-resolution bathymetry by Kramer and Shedd (2017), a high-resolution watershed drainage model was created, restricted to a vertical bin resolution of 5 m, to guide the site selection for in situ sediment coring. Stream lengths of <1 km were excluded in the graphic representation of the model data (Figure 1) to allow for better visibility in the larger scale maps of the study area. Sampling sites were determined based on the geomorphology of the seafloor, slope angle, and the flow direction from the watershed drainage model. Selection of sampling sites was based primarily on areas on the seafloor from which we expected sediments to be remobilized and where they would eventually be deposited (i.e., depocenters). We classified these areas into channels, lee depocenters, and bathymetric depressions (Table 1). Channels can be erosional and/or depositional in nature. Channels act as conduits for the downslope mass movement of sediments, and although commonly erosional when active, are most often depositional over the long term. Lee depocenters are located in the lee or "down-slope shadow" of morphologic highs. They become centers of deposition as the energy level of the transport mechanism decreases in the lee of the morphological high (Turnewitsch et al., 2013). Similar depositional conditions are expected over large flat areas, which are characterized as low-energy environments. Isolated valleys or bathymetric depressions, valleys surrounded by morphological highs isolating them from any horizontal gravitational outflow, are similar in that the energy level tends to decrease in these features, which promotes deposition.

FIGURE 1 | Location of coring sites with watershed model results (blue shaded lines). Included are geomorphological and sedimentological data from the Bureau of Ocean Energy and Management (BOEM). These data include locations of flows, fans, and terrigenous mass wasting areas, slumps, and channels.
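A minimal flow-routing sketch in Python may help illustrate the kind of calculation behind a watershed drainage model of this type; it uses a simple D8 steepest-descent scheme on a synthetic bathymetry grid. The actual model, its grid, and the 5 m vertical binning are not reproduced here, so everything below is an assumed, illustrative stand-in rather than the published workflow.

```python
import numpy as np

# Illustrative D8 flow routing on a gridded bathymetry (depth in m, positive down).
# This is a hypothetical stand-in for the watershed drainage model described above.

def d8_flow_accumulation(depth, cell_size=1.0):
    """Route flow toward the steepest down-slope neighbor and count contributing cells."""
    nrows, ncols = depth.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    dists = [np.hypot(dr, dc) * cell_size for dr, dc in offsets]

    # For each cell, pick the neighbor with the steepest descent (largest depth gain per distance).
    receiver = -np.ones((nrows, ncols, 2), dtype=int)
    for r in range(nrows):
        for c in range(ncols):
            best = 0.0
            for (dr, dc), d in zip(offsets, dists):
                rr, cc = r + dr, c + dc
                if 0 <= rr < nrows and 0 <= cc < ncols:
                    slope = (depth[rr, cc] - depth[r, c]) / d
                    if slope > best:
                        best = slope
                        receiver[r, c] = (rr, cc)

    # Accumulate flow from shallowest to deepest cells (receivers are always deeper).
    acc = np.ones((nrows, ncols))
    for idx in np.argsort(depth, axis=None):
        r, c = np.unravel_index(idx, depth.shape)
        rr, cc = receiver[r, c]
        if rr >= 0:
            acc[rr, cc] += acc[r, c]
    return acc

# Toy example: a tilted plane with a deeper diagonal trench that collects flow to the SE.
y, x = np.mgrid[0:50, 0:50]
depth = 1500 + 5 * x + 5 * y + 50 * np.exp(-((x - y) ** 2) / 50.0)
accumulation = d8_flow_accumulation(depth, cell_size=100.0)
print("max contributing cells:", int(accumulation.max()))
```

Cells with high accumulation values mark candidate down-slope conduits and depocenters, analogous to the channels and confluences used here to guide coring-site selection.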
Collection of Sediment Cores
Sediment samples were collected during the RV Point Sur cruise 18-25 in May of 2018, using a MC-800 multi corer (10 cm diameter × 70 cm long tubes) at the studied sites ( Figure 1 and Table 1). The Ocean Instruments MC-800 multi corer collected eight cores simultaneously and each core was used for a separate analysis (e.g., one core for bulk isotope analyses, one for short-lived radioisotope geochronology and sedimentology, one for XRF scanning, two for flume studies, one for hydrocarbon biomarkers, one for foraminiferal analyses, and one archived).
Sediment Resuspension Flume
To analyze the resuspension behavior of the core samples, a linear closed-loop flume was constructed. The resuspension flume (Supplementary Figure 1) was designed based on the Sedflume published by Borrowman et al. (2006). A 95 cm long, 15 cm wide, and 5 cm high closed channel was connected to a reservoir through 5 cm diameter PVC pipes. In the upstream section of the flume, a DIGITEN model FL-1608 Hall Sensor flowmeter was installed in line before the flow diverter that converted the 5 cm PVC pipe into the rectangular test section of the flume. This flow diverter, together with 147 cm of rectangular channel before the core insertion point, allowed for full development of turbulent flow within the rectangular channel. At 147 cm beyond the flow diverter, a 10 cm diameter core opening was located at the base of the flume, allowing for insertion of the core into the flume. At the end of the rectangular section, another diverter reduced the diameter back to a 5 cm pipe through which the water was directed into a filtration system, and the filtered, clean water was returned by the pump into the flume channel, thus creating the closed-loop system. A core extruding mechanism secured the core tube in the bottom of the flume during the test and permitted smooth, controlled, and undisturbed insertion of the core sediment into the flume. Cores had been collected and stored with a large amount of original seawater in the core tube overlying the sediment-water interface. The flume was filled with filtered artificial salt water (salinity ∼35) prior to each test, and drained and cleaned after each experiment. The closed-loop system provided an instantaneous movement of water within the entire flume when the pump was turned on.

TABLE 1 | (notes) ND indicates no data, where core tops were too disturbed or not enough multicores were collected at that site to allow flume analysis. Labels: A, sub-parallel laminae/wavy bedding; B, color banded units; C, inclined beds (Figure 2).
During each test, a Sony 4K camcorder, mounted 20 cm beyond the core with its focal point set at the center of the channel, recorded video data of the calibrated particle size distribution of the material being eroded from the core top and transported down-channel. Time-synchronized video footage was post-processed using the open-source software FFmpeg into individual tagged image format (TIF) files. For every 30 frames (video recorded at 30 frames per second), four evenly spaced images were extracted for every second of video time. Given the dimensions of the camera-to-flume setup, this allowed particles to be imaged in at least three consecutive images at the highest flow speed. Once all TIF images were extracted, they were analyzed using the Image-Pro Plus software. All images were normalized into 8-bit gray scale images. A standard area of interest (AOI) with calibrated, known dimensions was extracted from the image and saved as a separate file. The next image in time was loaded, normalized, had the AOI applied, and saved. In the following step, the first AOI was subtracted from the second AOI, leaving in the resulting image only particles that had shifted position (moved within the channel). Every stationary object was removed in the image subtraction process. All visible particles in the resulting image were counted and grouped into 10 size bins ranging from 0.2 mm2 to >1.8 mm2 in 0.2 mm2 steps.
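The frame-differencing step can be sketched in a few lines of Python. The sketch below assumes two consecutive 8-bit grayscale frames are already available as arrays (the FFmpeg extraction and the Image-Pro Plus workflow are not reproduced), and the pixel-to-mm² calibration is a made-up placeholder, not the calibration used in the study.

```python
import numpy as np
from scipy import ndimage

MM2_PER_PIXEL = 0.0025                 # hypothetical calibration: area of one pixel in mm^2
AREA_EDGES = np.arange(0.2, 2.0, 0.2)  # 0.2, 0.4, ..., 1.8 mm^2 class boundaries

def count_moving_particles(frame_prev, frame_next, threshold=30):
    """Subtract consecutive frames so stationary objects cancel, label the remaining
    (moving) blobs, and bin their areas into ten 0.2 mm^2 size classes."""
    diff = frame_next.astype(np.int16) - frame_prev.astype(np.int16)
    moving = np.abs(diff) > threshold
    labels, n = ndimage.label(moving)
    if n == 0:
        return np.zeros(len(AREA_EDGES) + 1, dtype=int)
    # Pixel count per blob, converted to mm^2.
    areas_mm2 = ndimage.sum(np.ones_like(labels), labels, index=np.arange(1, n + 1)) * MM2_PER_PIXEL
    bin_idx = np.digitize(areas_mm2, AREA_EDGES)        # 0 = smallest class, last = >1.8 mm^2
    return np.bincount(bin_idx, minlength=len(AREA_EDGES) + 1)

# Toy usage with synthetic frames: the background is identical, one bright blob has moved in.
rng = np.random.default_rng(0)
prev = rng.integers(0, 20, size=(480, 640), dtype=np.uint8)
nxt = prev.copy()
nxt[200:210, 300:312] = 255
print(count_moving_particles(prev, nxt))
```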
Core Splitting and Extrusion
With the exception of the flume cores, sediment cores were either split longitudinally or extruded upward from the core base for sampling. Sediment cores were extruded at 2 mm intervals for surficial sediment (2-10 mm) to ensure the highest possible resolution of recently deposited sediments, and subsequently at 5 mm intervals to the base of the core. Cores were volumetrically extruded according to the method described in Schwing et al. (2016). One core from each deployment was split longitudinally, photographed and described visually. This included assessment of stratigraphic integrity and a variety of sedimentary structures that are indicators of down-slope transport mechanisms.
Short-Lived Radioisotopes
Short-lived radioisotope analyses were performed to provide age control and accumulation rates over the past ∼100 years. Samples were analyzed for short-lived radioisotopes by gamma spectrometry on Series HPGe (High-Purity Germanium) Coaxial Planar Photon Detectors for activities of total Lead-210 ( 210 Pb Tot ) at 46.5 kiloelectron volts (keV), Lead-214 ( 214 Pb) at 295 keV and 351 keV, Bismuth-214 ( 214 Bi) at 609 keV, and total Thorium-234 ( 234 Th Tot ) at 63 keV. Samples were also analyzed for Cesium-137 ( 137 Cs) at 661 keV and Beryllium-7 ( 7 Be) at 477 keV, but these radioisotopes were below detection in all samples and are therefore not discussed. Data were corrected for emission probability at the measured energy, counting time, and sample mass, and converted to activity (disintegrations per minute per gram, dpm/g), using the International Atomic Energy Agency (IAEA) organic standard IAEA-447 for calibration (Kitto, 1991; Larson et al., 2018).
The activities of the 214 Pb (295 keV), 214 Pb (351 keV), and 214 Bi (609 keV) were averaged as a proxy for the Radium-226 ( 226 Ra) activity of the sample or the "supported" Lead-210 ( 210 Pb Sup ) that is produced in situ (Smith et al., 2002;Baskaran et al., 2014;Swarzenski, 2014). The 210 Pb Sup activity was subtracted from the 210 Pb Tot activity to calculate the "unsupported" or "excess" Lead-210 ( 210 Pb xs ), which is used for dating within the last ∼100 years.
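As a minimal illustration of this supported/excess bookkeeping, the sketch below averages the three photopeak activities as the supported 210Pb proxy and subtracts it from total 210Pb; the activity values are placeholders, not measurements from this study.

```python
import numpy as np

# Placeholder activities (dpm/g) for six depth intervals of one core.
pb210_total = np.array([6.1, 5.4, 4.8, 3.2, 1.9, 1.1])   # total 210Pb at 46.5 keV
pb214_295   = np.array([0.9, 0.9, 1.0, 0.9, 0.9, 1.0])
pb214_351   = np.array([1.0, 0.9, 0.9, 1.0, 0.9, 0.9])
bi214_609   = np.array([0.9, 1.0, 0.9, 0.9, 1.0, 0.9])

# Supported 210Pb = mean of the 214Pb and 214Bi photopeaks (proxy for in situ 226Ra).
pb210_supported = np.mean([pb214_295, pb214_351, bi214_609], axis=0)

# Excess (unsupported) 210Pb, the quantity used for dating the last ~100 years.
pb210_excess = pb210_total - pb210_supported
print(np.round(pb210_excess, 2))
```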
The Constant Rate of Supply (CRS) algorithm was employed to assign specific ages to sedimentary intervals within the 210 Pb xs profile. The CRS algorithm is appropriate under conditions of varying accumulation rates (Appleby and Oldfield, 1983; Binford, 1990). Mass accumulation rates (MAR) were calculated for each data point (i.e., "date"), based upon the CRS model results. The use of MAR corrects for differential sediment compaction down core, thereby enabling a direct comparison of 210 Pb xs accumulation rates throughout a core (i.e., over the last ∼120 years). MAR were calculated as follows:

MAR (g/cm2/year) = dry bulk density (g/cm3) × LAR (cm/year)

where dry bulk density (g/cm3) = sample dry mass (g) / sample volume (cm3), and LAR is the linear accumulation rate derived from the CRS age model. The 210 Pb xs profiles were evaluated for the number of plateaus, the accumulation (mm) that was associated with plateaus (i.e., accumulated as pulse events), and the % of the total accumulation (mm) that occurred as a pulse. To quantify the accumulation for each core from episodic sedimentation, a "Pulse Index" (P.I.) was calculated, with the number of plateaus contributing 15% toward the index, the % of total accumulation that occurred with pulses (plateaus) contributing 50% toward the index, and the average MAR contributing 35% toward the index (Supplementary Table 1).
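The MAR and Pulse Index bookkeeping can be illustrated with a short sketch. The exact normalization of each Pulse Index term is given in the paper's Supplementary Table 1 and is not reproduced here; the min-max scaling and the per-core values below are assumptions made only for illustration.

```python
def mass_accumulation_rate(dry_bulk_density, lar_cm_per_yr):
    """MAR (g/cm2/yr) = dry bulk density (g/cm3) x linear accumulation rate (cm/yr)."""
    return dry_bulk_density * lar_cm_per_yr

def pulse_index(n_plateaus, pct_accum_in_pulses, mean_mar, cores):
    """Weighted index: 15% number of plateaus, 50% % accumulation in pulses, 35% mean MAR.
    Each term is scaled 0-1 across all cores before weighting (assumed scaling)."""
    def scale(x, values):
        lo, hi = min(values), max(values)
        return 0.0 if hi == lo else (x - lo) / (hi - lo)
    return (0.15 * scale(n_plateaus, [c[0] for c in cores])
            + 0.50 * scale(pct_accum_in_pulses, [c[1] for c in cores])
            + 0.35 * scale(mean_mar, [c[2] for c in cores]))

# Hypothetical per-core summaries: (number of plateaus, % accumulation in pulses, mean MAR).
cores = [(1, 10.0, 0.05), (3, 55.0, 0.12), (5, 80.0, 0.30)]
print(round(mass_accumulation_rate(dry_bulk_density=0.4, lar_cm_per_yr=0.08), 3))  # g/cm2/yr
for c in cores:
    print(round(pulse_index(*c, cores=cores), 2))
```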
Sedimentology
Sediment texture and composition analyses were conducted on extruded samples and included bulk density, grain size, and composition. One core per site was used to calculate pore water content and bulk density. Sample volume was calculated using the inner diameter of the core barrel and sampling interval (i.e., height). Samples were weighed immediately after extrusion to provide the wet mass required for determining pore water content. Each sample was then freeze-dried and weighed for dry mass to calculate dry bulk density (Binford, 1990;Appleby, 2001).
Grain size was determined by wet sieving the sample through a 63 µm screen. The fine-size (<63 µm) fraction was analyzed by pipette (Folk, 1965) to measure the relative percentage of silt (%silt) and clay (%clay). The sand-size (>63 µm) fraction was volumetrically too small to analyze further and is reported here as %sand. %Gravel and %sand were determined by dry sieving the >63 µm fraction. Carbonate content (%carbonate) was determined by the acid leaching method according to Milliman (1974). Total organic matter (%TOM) was determined by loss on ignition (LOI) at 550°C for at least 2.5 h (Dean, 1974). The non-carbonate and non-organic fraction is reported here as %terrigenous. Although technically this fraction may include non-terrigenous components such as biogenic silica, glauconite, and volcanic ash, these are only found in trace amounts in this general area, as Mississippi River input is such a dominant sediment source.
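For illustration, the bulk-composition arithmetic described above reduces to a few lines; the masses and core geometry in the sketch are hypothetical placeholders, not values from this study.

```python
import math

# Hypothetical masses (g) for one extruded interval.
wet_mass, dry_mass = 18.50, 7.40     # before/after freeze-drying
post_loi_mass = 7.02                 # after loss on ignition at 550 C
post_acid_mass = 3.80                # residue after carbonate leach of the dry split
dry_split_mass = 7.40                # dry sediment mass that was leached

core_radius_cm, interval_cm = 5.0, 0.5          # 10 cm diameter barrel, 5 mm slice
volume_cm3 = math.pi * core_radius_cm**2 * interval_cm

pct_pore_water = 100.0 * (wet_mass - dry_mass) / wet_mass
dry_bulk_density = dry_mass / volume_cm3                      # g/cm3
pct_tom = 100.0 * (dry_mass - post_loi_mass) / dry_mass       # total organic matter (LOI)
pct_carbonate = 100.0 * (dry_split_mass - post_acid_mass) / dry_split_mass
pct_terrigenous = 100.0 - pct_carbonate - pct_tom             # non-carbonate, non-organic fraction

print(round(pct_pore_water, 1), round(dry_bulk_density, 3),
      round(pct_tom, 1), round(pct_carbonate, 1), round(pct_terrigenous, 1))
```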
Carbon Isotopes
Samples for bulk isotopic analyses were treated with 10% HCl to remove carbonates, rinsed with DI water, freeze-dried, and then ground with mortar and pestle prior to isotope analyses. Stable carbon (δ 13 C, %C) was measured using a Carlo-Erba elemental analyzer coupled to an isotope ratio mass spectrometer at the University of Maryland Center for Environmental Science Chesapeake Biological Laboratory. Samples were prepared for measurement of the natural abundance of radiocarbon at the National High Magnetic Field Laboratory. Acid-treated sediment was combusted in quartz tubes at 850°C for 4 h, and the pure CO2 was then collected on a vacuum line using a series of cold traps to remove water vapor and non-condensable gases, following the methods of Choi and Wang (2004). The purified CO2 was flame sealed in a 6 mm ampoule and sent to the Woods Hole National Ocean Sciences Accelerator Mass Spectrometry facility, where the samples were prepared as graphite targets and analyzed by accelerator mass spectrometry (Vogel et al., 1984). The radiocarbon signatures are reported in Δ 14 C notation as described by Stuiver and Polach (1977). The 14 C blanks were generally between 1.2 and 5 µg of carbon, producing a negligible effect on samples, which contained over 1200 µg of carbon. The analysis of 22 replicate sediment samples yielded an average analytical reproducibility of ±6.8‰ for Δ 14 C and 0.2‰ for δ 13 C. Forty coal samples, representing fossil 14 C-dead carbon, were analyzed to assess our procedural blank of combustion, graphitization, and target preparation over the course of this study. The average Δ 14 C value was −995 ± 7‰. We also ran 25 azalea leaf standards collected in Tallahassee, Florida in 2013; the average Δ 14 C value was 31 ± 8‰.
Benthic Foraminifera
Following extrusion (methods provided above), five sub-samples from the surface of the sediment cores (0-2, 2-4, 4-6, 6-8, and 8-10 mm) and one sub-sample from down-core (20-22 mm) were used for benthic foraminifera analysis following methods similar to Schwing et al. (2018). Briefly, sub-samples were weighed and washed with a sodium hexametaphosphate solution through a 63 µm sieve to disaggregate detrital particles from foraminiferal tests (Osterman, 2003). The fraction remaining on the sieve (>63 µm) was dried in an oven at 32°C for 12 h, weighed again, and stored at room temperature (Osterman, 2003). Between 200 and 400 individuals from each subsample were identified to the species level and counted. The fraction of the sample that was identified was then weighed. It was necessary to count between 200 and 400 individuals per sample to distinguish 2% significant variability in density and relative abundance between sample intervals (Patterson and Fishbein, 1989). Multiple taxonomic references were used (d'Orbigny, 1826, 1839; Williamson, 1858; Jones and Parker, 1860; Parker and Jones, 1865; Brady, 1878, 1879, 1884; Cushman, 1922, 1923, 1927; Stewart and Stewart, 1930; Phleger and Parker, 1951; Parker et al., 1953; Parker, 1954). The number of specimens with visibly fractured tests was counted and reported as the fracture percentage versus the total number of specimens identified, as a taphonomic indicator of turbulent flow redistribution (Ash-Mor et al., 2017).
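A minimal sketch of the density and fracture-percentage bookkeeping is given below; the counts and subsample volume are hypothetical placeholders, not data from this study.

```python
import math

# Hypothetical counts from one 2 mm subsample of a 10 cm diameter core.
n_identified = 312            # individuals identified in the >63 um fraction
n_fractured = 41              # individuals with visibly fractured tests
subsample_volume_cm3 = math.pi * 5.0**2 * 0.2   # 2 mm slice of a 10 cm diameter barrel

density = n_identified / subsample_volume_cm3         # individuals per cm3
fracture_pct = 100.0 * n_fractured / n_identified     # taphonomic indicator of turbulent redistribution
print(round(density, 1), round(fracture_pct, 1))
```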
Hydrocarbon Analyses
Samples were kept frozen until freeze-dried at the Marine Environmental Chemistry Laboratory (MECL, College of Marine Science, University of South Florida). Approximately 1.0 g of freeze-dried and homogenized sediment was extracted using an Accelerated Solvent Extraction system (ASE R 200, Dionex) under high temperature (100 • C) and pressure (1500 psi) with hexane:dichloromethane (9:1 v:v). Deuterated standards were added to samples prior to extraction to monitor matrix effects and correct for losses during extraction (d10-acenaphthene, d10-phenanthrene, d10-fluoranthene, d12-benz(a)anthracene, d12-benzo(a)pyrene, d14-dibenz(ah)anthracene, d 50 -tetracosane, d 15 -pentadecane, d 32 -dotriacontane, d 4 -cholastane). For extraction in the ASE, we applied a one-step extraction and clean up procedure using a predetermined packing of the extraction cells (Kim et al., 2003;Choi et al., 2014;Romero et al., 2018) using a 11 ml extraction cell with glass fiber filter (pre-combusted at 450 • C for 4 h), 5 g silica gel (high purity grade, 100-200 mesh, pore size 30A, Sigma Aldrich, United States; pre-combusted at 450 • C for 4 h, and deactivated 2%), and sand (pre-combusted at 450 • C for 4 h). Sediment extracts were concentrated to ∼2 ml in a RapidVap (LABONCO RapidVap R Vertex TM 73200 series) and further concentrated to about 100-300 µl by gently blowing with a nitrogen stream. An internal standard was added (d14-terphenyl; Ultra Scientific ATS-160-1) to all samples prior to GC/MS analysis. All solvents used were at the highest purity available. Two extraction control blanks were included with each set of samples (18 samples).
We followed modified EPA methods and QA/QC protocols (8270D, 8015C). Targeted compounds included aliphatics (C12-C37 n-alkanes, isoprenoids), PAHs (2-6 ring polycyclic aromatic hydrocarbons including alkylated homologs), and biomarkers [C27-C35 hopanoids, C27-C29 steranes, C20-C28 triaromatic steroids (TAS)]. Hydrocarbon compounds were quantified using GC/MS/MS (Agilent 7680B gas chromatograph coupled with an Agilent 7010 triple quadrupole mass spectrometer) in multiple reaction monitoring mode (MRM) to target multiple chemical fractions in one-run-step. Molecular ion masses for hydrocarbon compounds were selected from previous studies (Romero et al., , 2018Sørensen et al., 2016;Adhikari et al., 2017) (Supplementary Tables 2a,b). All samples were analyzed in splitless injections, inlet temperature of 295 • C, constant flow rate of 1 ml/min, and a MS detector temperature of 250 • C using a RXi-5sil chromatographic column. The GC oven temperature program was 60 • C for 2 min, 60 • C to 200 • C at a rate of 8 • C/min, 200 • C to 300 • C at a rate of 4 • C/min and held for 4 min, and 300 • C to 325 • C at a rate of 10 • C/min and held for 5 min. Source electron energy was operated at 70 eV, and argon was used as the collision gas at 1 mTorr pressure.
For accuracy and precision of analyses, we included laboratory blanks for every 12-14 samples, spiked controls for every 14-18 samples, tuned the MS/MS to PFTBA (perfluorotributylamine) daily, checked samples with a standard reference material (NIST 2779) daily, and reanalyzed sample batches when replicated standards exceeded ±20% relative standard deviation (RSD) and/or when recoveries were low. Recovery ranged within the QA/QC criteria of 50-120%. PAH concentrations are reported as recovery corrected. Each analyte was identified using certified standards (Chiron S-4083-K-T, Chiron S-4406-200-T, NIST 2779) and performance was checked using a 5-point calibration curve (0.04, 0.08, 0.31, 1.0 ppm). Quantitative determination of compounds was conducted using response factors (RFs) calculated from the certified standard NIST 2779.
Hydrocarbon compounds are expressed as sediment dry weight concentrations.
Hydrocarbon Source Identification
We used a tiered analytical approach for the identification of hydrocarbon sources in the sediment samples collected. First, we determined the concentration of hydrocarbon groups in the sediment samples (aliphatics, PAHs, and biomarkers (hopanoids, steranes, and TAS) to establish temporal (as profiles) and spatial (surface maps) changes of hydrocarbon concentrations and composition. Second, we determined and compared diagnostic oil ratios between potential sources and the samples (for details see below). Third, we compared the distribution pattern of n-alkanes and PAHs among samples.
Diagnostic oil ratios were calculated for all crude oil standards (NIST Macondo oil: MC252; Southern Louisiana Sweet crude oil: LC; southern GoM Akal Bravo oil: AB) and samples to discriminate hydrocarbon sources. Care should be taken when applying diagnostic ratios to deep-sea sediments because multiple weathering processes affect the composition of hydrocarbons during sinking through the water column and after reaching the seafloor. For example, low molecular weight PAHs (LMW, containing 2-3 ring PAHs) are abundant in petrogenic sources, while high molecular weight PAHs (HMW, containing 4-6 ring PAHs) are abundant in pyrogenic sources. However, HMW PAHs can become more abundant due to the loss of LMW PAHs during weathering processes (e.g., dissolution, biodegradation). Alkane ratios were used to identify natural vs. oil sources (carbon preference index: CPI (C25-C33) = odd Cn / even Cn) and to determine whether samples were weathered (low molecular weight alkanes: % n-alkanes = C14-C24 / sum(C12-C37) × 100). Samples with CPI < 2.0 and % n-alkanes < 25% indicate a weathered petrogenic source (Xing et al., 2011; Romero et al., 2015, 2021; Herrera-Herrera et al., 2020).
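The alkane screening described above can be expressed compactly; the n-alkane concentrations in the sketch below are placeholders, and only the CPI and % n-alkane criteria quoted in the text are implemented.

```python
import numpy as np

# Hypothetical n-alkane concentrations (ug/g) for C12-C37 in one sediment sample.
alkanes = {f"C{n}": c for n, c in zip(range(12, 38), np.linspace(0.002, 0.05, 26))}

def cpi_25_33(a):
    """Carbon preference index over C25-C33: sum of odd-carbon / sum of even-carbon n-alkanes."""
    odd = sum(a[f"C{n}"] for n in range(25, 34, 2))    # C25, C27, ..., C33
    even = sum(a[f"C{n}"] for n in range(26, 33, 2))   # C26, C28, ..., C32
    return odd / even

def pct_low_mw(a):
    """% n-alkanes = sum(C14-C24) / sum(C12-C37) * 100."""
    return 100.0 * sum(a[f"C{n}"] for n in range(14, 25)) / sum(a.values())

cpi, pct_lmw = cpi_25_33(alkanes), pct_low_mw(alkanes)
weathered_petrogenic = (cpi < 2.0) and (pct_lmw < 25.0)   # screening criterion used in the text
print(round(cpi, 2), round(pct_lmw, 1), weathered_petrogenic)
```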
Other compounds that are more resistant to weathering can be used to identify specific oil sources (hopanes, steranes, triaromatic steroids). However, diagnostic oil biomarker ratios (using hopane, sterane, and triaromatic steroid compounds) known to fingerprint DwH oil or other crude oils have mostly been tested in samples collected from coastal environments (Fingas, 2003; Wang et al., 2006; Mulabagal et al., 2013; Aeppli et al., 2014) and only a few in deep-sea sediments (Romero et al., 2017; Stout et al., 2016). In addition, some of these biomarker compound groups (e.g., steranes) have been shown to degrade in the marine environment years after an oil spill (Wang et al., 2001; Prince et al., 2002; Gros et al., 2014). Therefore, we tested whether biomarker ratios used in previous studies (Fingas, 2003; Wang et al., 2006; Mulabagal et al., 2013; Aeppli et al., 2014; Romero et al., 2015, 2017) can be used to fingerprint deep-sea sediments, where organic matter and oil residues are naturally exposed to long-term weathering processes. Specifically, we compared the MC252 oil standard (NIST 2779) with samples collected at the DwH site (DwH-01), an area known to contain weathered oil residues from the DwH spill (Chanton et al., 2015; Romero et al., 2015; Stout et al., 2016). Also, samples from the DwH-01 site, the closest site to the DwH wellhead, sampled from 2011 to 2013, showed the presence and preservation of oil residues from the DwH spill in the sediments (Romero et al., 2017).
Only diagnostic ratios with a difference within ±20% between samples from site DwH-01 and the average of the MC252 oil standard (N = 60) were used. In previous fingerprinting studies, this criterion for the difference between samples and an oil standard has been established based on the relative standard deviation value (RSD: 14-20%) of a standard analyzed over a period of time (analytical uncertainty) (Meyer et al., 2017). Our analysis of biomarkers using GC/MS/MS-MRM had an RSD of 4.8% over a time period of 6 months using the MC252 oil standard. The application of GC/MS/MS-MRM increases selectivity, improves baseline and signal-to-noise ratio, and successfully separates target compounds from interferences compared to the conventional GC/MS-SIM method. Altogether, the GC/MS/MS-MRM method improves the analytical uncertainty in the analysis of biomarker compounds. However, environmental sample replicates collected in deep-sea areas of the GoM have shown that RSD varies between 4% and 22% for biomarker compounds, indicating natural variability in the area. Therefore, diagnostic biomarker ratios of DwH oil residues at depth were selected using the RSD value of 20%, to account for the natural variability in deep-sea environments. Ratios with a difference within ±20% between samples from site DwH-01 and the average of the MC252 oil standard appear resistant to weathering and other natural processes at depth in the GoM, and are suitable for fingerprinting. The matching ratios were then calculated for all sites studied and analyzed using a Principal Component Analysis (PCA; JMP Pro 14.0). Also, cross plots of alkane diagnostic ratios were generated to identify oil vs. natural sources in the region. In addition, the relative abundance (%) of hydrocarbon groups was plotted for each site studied to identify areas with high content of LMW PAHs (2-3 rings) due to inputs from natural seeps.
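The ±20% ratio-matching screen can be sketched as follows; the ratio names and values are hypothetical, and the subsequent PCA step (run in JMP in this study) is not reproduced.

```python
# Hypothetical diagnostic ratios (names R1-R3 are placeholders, not the ratios used in the study).
mc252_mean = {"R1": 0.52, "R2": 1.10, "R3": 0.86}    # mean ratios of the MC252 oil standard
dwh01_site = {"R1": 0.55, "R2": 1.35, "R3": 0.83}    # ratios measured at site DwH-01

def matching_ratios(standard, sample, tolerance=0.20):
    """Keep only ratios whose relative difference from the standard is within the tolerance."""
    keep = {}
    for name, ref in standard.items():
        if abs(sample[name] - ref) / ref <= tolerance:
            keep[name] = ref
    return keep

diagnostic = matching_ratios(mc252_mean, dwh01_site)
print(sorted(diagnostic))   # ratios deemed resistant to weathering and usable for fingerprinting
```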
Only samples indicating oil content (using alkane ratios), no natural seep inputs (with low %LMW PAHs), a distinct distribution pattern of n-alkanes and PAHs indicating weathered oil residues, and a match with the MC252 oil standard (using biomarker ratios) were identified to contain DwH oil residues as the most probable oil source.
RESULTS
All sites yielded sediment cores with active deposition over the past ∼100 years. Core analyses revealed a range of sediment textures, composition and accumulation rates indicating that sedimentation processes varied throughout the study area. Downcore data were utilized for some analyses to provide a longer time-scale context (Figure 2) and to assess the historical prevalence of down-slope sediment transport as indicated by various sedimentary structures present throughout the study area. All other analyses focused on the surficial sediments (0-20 mm) to identify recent patterns that may be associated with deposition and redistribution of DwH contaminated sediments. We have provided maps in figures and supplementary figures as a mechanism to visualize the data and to communicate the observed variations in the data. Interpolations of data in the areas between the individual coring locations were dependent on parameters set in the contouring program (Golden Software Surfer 9.0) and were thus not discussed in this manuscript.
Flume
Flume experiments showed that sediment particle erosion behavior was not uniform across the various sedimentary environments on the seafloor (Table 1). The uppermost layer of sediment (top 2 mm) from each core eroded at different flow speeds, producing maximum peaks in the total number of particle counts at flow speeds from 5.72 cm s−1 to 16 cm s−1 (Table 1 and Figure 3). Several sediment cores exhibited a second peak at higher flow speeds (sites 10, 11, 14, 17, and 20). In all cores (sites 8, 9, 10, 17, 19, and 20) that had a first peak in particle resuspension at low flow speeds, a decrease in resuspended particle concentration was observed following the initial peak resuspension of particles. A second, larger peak, produced by a complete collapse of the surface sediment layer, occurred in all cores at flow speeds exceeding 13.7 cm s−1. Sediments from cores that did not have the initial peak in resuspended particles (sites 4, 5, 6, 12, 13, 15, 16, 21, 30) had low numbers of particles resuspended until a rapid disintegration of the surface sediment layer occurred at >13.7 cm s−1. Distinct peaks in total volume with increased flow speed were recorded for each core analyzed. The average area of individually eroded particles varied only slightly, between 0.39 and 0.63 mm2, in all cores; however, the total eroded volume of particles varied from 4.4 to 535 cm3 (Table 1).
Short-Lived Radioisotopes
Short-lived radioisotope analyses yielded 210 Pb xs activity profiles utilized to provide age control and sedimentation rates at multicore sites over the past ∼120 years. 210 Pb xs profile shape and MAR, averaged over various time periods, were used to characterize the spatial patterns of sediment distribution and accumulation. 210 Pb xs activity profile shapes varied with some sites exhibiting exponential profiles and others non-exponential profiles (Figure 2 and Supplementary Figures 3a-c). Nonexponential profiles ranged from small mm-scale plateaus in activity to profiles with large cm-scale plateaus. Some profiles contained few plateaus, and some contained multiple plateaus. These profiles were utilized to assess the relative magnitude and frequency of episodic or pulsed accumulation (plateaus in activity in profiles) vs. areas with stable consistent accumulation (exponential profiles).
The spatial pattern of the pulse index reveals variability in the prevalence of sediment accumulation from episodic events over the past ∼120 years (Figure 4). In the northwestern portion of the study area (sites 1, 2, 3, 4, and DwH-01), a high index, reflecting frequent episodic sedimentation events, was observed. The central portion of the study area consists of low index values (sites 5, 6, 12, 13, 14, and 30), reflecting a more stable, consistent sediment accumulation history. The southeastern portion of the study area shows high index values (sites 7, 8, 9, 10, 11, 15, 16, 17, 18, 19, 20, and 21), with the highest episodic sedimentation events observed in the study area in terms of magnitude and frequency. Sediment MARs were determined using 210 Pb xs age dating. Differing time periods were evaluated to characterize accumulation rates, from a long period (1950-2018) to short periods characterizing events pre-spill (2006-2009; Figure 5A), during and after the spill related to the MOSSFA event (2010-2013; Figure 5B), and post-spill associated with redistribution of sediments (2014-2018; Figure 5C). Spatial patterns of MARs from 1950 to 2018 are consistent with the areas of episodic sedimentation identified by the 210 Pb xs pulse index, further corroborating the spatial variability in the influence of episodic sedimentation events (Figure 4). For the three recent time periods compared in this study, there are potential shifts in "hot spots" of higher accumulation, with site 18 consistently being a "hot spot" for accumulation in all three periods.

FIGURE 4 | Heat map of weighted pulsed input in the study area. The three regions identified in this study are marked by black lines and are labeled Region 1, 2, and 3 from the NW to the SE. The scale describes the percentage of pulsed inputs in the study area.
Sedimentology
All cores exhibited intact detailed stratigraphy, contained numerous, well-preserved primary sedimentary structures, and very few secondary sedimentary structures (e.g., bioturbation). The most common structures were thin, mm-scale, sub-parallel laminae and wavy bedded units with no clearly defined lower boundaries (Figure 2). These structures were found throughout the entire study area in all depocenter types ( Table 1). Inclined and color banded beds (Figure 2) were less common, and sparsely dispersed throughout all but the northeastern-most portion of the study area (Table 1). Generally, sedimentary structures were better defined in the upper few 10's of cm of cores (Table 1).
Sediment grain size and composition was averaged over the surficial 0-4 mm and 10 mm intervals and varies throughout the study area. Grain size is reported as %gravel, %sand, %silt, %clay, and %mud (%silt + %clay), and composition is reported as %carbonate, %TOM (total organic matter by LOI), and %terrigenous ( Table 2 and Supplementary Figures 4-8). %Carbonate ranged from 45% to 60% with lowest concentrations in the NW portion of the study area and increasing to the SE. %TOM ranged from 4.4% to 7.1% with highest values in the NW and decreasing to the SE. Sediment grain size showed more spatial variability than composition. %Gravel was zero in all sites except site 03, which contained large pteropod fragments. %Sand ranged from 0.3% to 12.9% with highest values in the SE, lowest values in the N, and moderate values in the W portion of the study area. %Silt ranged from 35.3% to 58.5% with highest values in the NW, lowest values in the NE and SE, and modest in the SW portion of the study area. %Clay ranged from 39.9% to 63.3% with highest values in the NE, modest through the central to S, and lowest in the NW and SE areas. Highest %carbonate values were consistently associated with the highest %sand, which consisted of sand size biogenic carbonate particles (Supplementary Figures 4-8).
Carbon Isotopes
The sedimentary organic carbon content in percent carbon by weight (%C), the stable carbon isotopic composition (δ 13 C), and the radiocarbon content (Δ 14 C) of the surface 0-2 mm layer and the 0-10 mm layer are reported in Supplementary Table 3. Values for the 0-10 mm depth interval were calculated as the average of the five 2 mm slices subsampled within that interval. The %organic carbon of the uppermost interval (0-2 mm) varied from 1.3 to 2.9% and generally decreased from the northwest to the southeast as the water deepened (Figure 6). The δ 13 C of surface (0-2 mm) organic matter varied from −20.7‰ to −22.5‰, was more depleted to the northwest, and exhibited a minimum in the center of the study area (Figure 7). There was a significant correlation of increasing δ 13 C with decreasing %organic carbon (p = 0.01, r = 0.512, n = 24). Radiocarbon content in the surficial layer (0-2 mm) varied from −167‰ to −319‰ across the study area (Figure 8). The most depleted values were observed at about 2,300 m depth on the northeast and western sides of the study area. The central portion of the study area exhibited 14 C-enriched surface sediments downslope of the DwH site, in water depths ranging from 1,700 to 2,100 m, grading to more depleted values to the southeast. None of these parameters correlated with depth.
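A correlation test of this kind can be reproduced with a short sketch; the arrays below are synthetic illustrations of the reported trend (heavier δ13C at lower %organic carbon), not the measured data.

```python
import numpy as np
from scipy import stats

# Synthetic surface-sediment values illustrating the reported relationship.
pct_org_carbon = np.array([2.9, 2.6, 2.4, 2.1, 1.9, 1.7, 1.5, 1.3])
d13C = np.array([-22.4, -22.1, -21.9, -21.6, -21.4, -21.1, -20.9, -20.7])

r, p = stats.pearsonr(pct_org_carbon, d13C)
print(round(r, 3), round(p, 4))   # negative r here: d13C becomes heavier as %OC decreases
```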
Benthic Foraminifera
Benthic foraminifera density and fracture percentage are presented in Supplementary Table 4. Benthic foraminifera density ranged from 15 to 74 individuals/cm3 and generally increased from the northwestern (e.g., site 1 mean density: 19 individuals/cm3) to the southeastern (e.g., site 24 mean density: 38 individuals/cm3) portion of the study area. Benthic foraminifera fracture percentage ranged from 7.5 to 25.1%, with the highest mean fracture percentage in the north-central portion of the study area (e.g., site 12 mean: 20.6%; site 14 mean: 16.0%). Maxima in fracture percentage were typically found at 6-10 mm depth (sites 1, 9, 12, and 16). These maxima were often coincident with variability in other parameters consistent with resuspension (Figure 9).

(Table note: Core intervals 0-4 mm and 0-10 mm. *Interval is 0-2 mm, not 0-4 mm.)
Hydrocarbon Analyses
We identified 10 biomarker ratios that can be used as diagnostic biomarker ratios of DwH oil residues at depth (Figure 10). These biomarker ratios, determined for samples from the DwH-01 site, lie within ±20% of the MC252 standard ratios. The fewer biomarker ratios that matched the MC252 standard compared to coastal studies may indicate that more compounds are susceptible to multiple weathering processes at depth (e.g., dissolution, degradation, dispersion). Source apportionment of hydrocarbons in the study area after the spill was determined using principal component analysis (PCA) of the biomarker ratios identified in Figure 10. The PCA results in Figure 11 show that 46% of the sites in 2010-2013 and 58% of the sites in 2014-2018 contain oil-residues similar to the MC252 oil standard (all located in the same PCA space) and distinct from other reference oil samples from the GoM. The PCA results are supported by cross plots of alkane diagnostic ratios (Figure 12) and %abundance plots of hydrocarbon compound groups (Supplementary Figures 9a,b). Caution should be taken when interpreting deep-sea data, because oil-residues deposited at depth, regardless of the source, are highly weathered. For example, Figure 12 shows that several sites in both time periods (2010-2013 and 2014-2018) contain oil-residues (CPI < 2.0 indicates oil-residues; %C14-C24 < 25% indicates heavily weathered samples) but with a dominant hydrocarbon source different from DwH oil, as shown by the PCA results (Figure 11). Also, some sites that were designated by the PCA to potentially contain mostly DwH oil-residues (shown in orange color: sites 7 and 18 for 2010-2013, and sites 9 and 10 for 2014-2018) are shown in Figure 12 to be mixed with other sources different from oil-residues (e.g., terrestrial); therefore, we have classified these sites as containing mixed sources of hydrocarbons (shown in orange color in Figures 11-13 and Supplementary Figures 10a,b). In addition, the results from the PCA and cross plots were supported by distinct distribution patterns of n-alkanes and PAHs (Supplementary Figures 11, 12). The samples identified to contain DwH oil-residues as the potential major source for hydrocarbons in Figures 11, 12 have a distinct n-alkane composition, with short-chain compounds (C12-C23) at less than 2% abundance, while long-chain compounds (>C30) are more than 10% abundance (Supplementary Figure 11; in agreement with Stout et al., 2016). This pattern is expected from severely weathered oil residues, in which long-chain n-alkanes are preserved due to their lower susceptibility to dissolution and biodegradation. In contrast, other sources (labeled as mixed or unknown in Supplementary Figure 11) show a strong odd-to-even carbon preference for long-chain n-alkanes, with C27, C29, and C31 the most abundant. The observed odd-to-even carbon preference for long-chain n-alkanes is typical of terrestrial plants. For PAHs, a difference in distribution pattern among sources is also observed (Supplementary Figure 12). The samples identified to contain DwH oil-residues as the potential major source for hydrocarbons in Figures 11, 12 have a lower %abundance of molecular markers of incomplete combustion such as Re and BeP (Ramdahl, 1983; Wang et al., 1999; Tobiszewski and Namieśnik, 2012), while 4-6 ring PAHs were more abundant due to their higher resistance to weathering processes such as dissolution and biodegradation.
This pattern is more evident when comparing the abundance of PAH compounds grouped by ring number (Supplementary Figure 13), which shows a difference in distribution among sources within each year. Also, larger changes between time periods are observed, indicating potential additional weathering processes affecting mostly 5-6 ring PAHs (e.g., transformation processes; White et al., 2016). Overall, we found that 15 of the 24 studied sites contain DwH oil-residues as the potential major source for hydrocarbons (Figure 13). The results generated using multiple oil diagnostic ratios indicate the significance of using ratios from multiple compound groups (e.g., hopanes, steranes, alkanes, PAHs) for source apportionment of hydrocarbons in deep-sea areas, like the northern GoM. Hydrocarbon concentrations, the sum of all compounds analyzed [n-alkanes, isoprenoids, polycyclic aromatic hydrocarbons (PAHs), hopanes, steranes, and triaromatic steroids (TAS)], ranged from 0.2 µg/g to 11.4 µg/g, with decreasing concentrations toward the southeast of the study area and some exceptions observed at specific depth layers in the sediments (e.g., site 21) (Figure 13). Specifically, hydrocarbon averages >2.0 µg/g were observed in the northeast of the study area (sites DwH, 1-6, 12, 13, and 30), concentrations in the range of 0.9-2.0 µg/g were detected at the center of the study area (sites 7, 8, 11, 14, 15, 16, and 17), and concentrations <0.9 µg/g were observed in the southeast area (sites 9, 10, 19, 20, 21). This general trend in hydrocarbon concentration followed sediment grain size distribution, with higher concentrations where sediments have low carbonate content. In addition, downcore profiles show a large variability in hydrocarbon concentrations in the study area, with enhanced concentrations at specific sediment intervals (Figure 12). Also, specific hydrocarbon compound groups did not show a clear trend with depth (e.g., a decrease with sediment depth); therefore, compound relative abundances mostly represent changes in hydrocarbon sources rather than solely biodegradation of hydrocarbons after burial (Supplementary Figures 9a,b). The most abundant compound groups were n-alkanes and high molecular weight PAHs (HMW, 4-6 rings). Profiles of HMW PAHs and LMW PAHs (low molecular weight, 2-3 rings) showed large variability at some sites in all sediment depth layers (e.g., site 10) or at specific sediment layers (e.g., sites 1, 11, 17, and 19), but in all cases HMW PAHs were more abundant.
Spatial Distribution
All sites were depositional over the past ∼100 years and to various degrees contained sedimentary structures, but there were distinctive variations in accumulation patterns and sediment characteristics throughout the study area. These variations define three distinctive geographic regions with implications for the degree of influence of down-slope sediment transport on sediment accumulation patterns (Figure 4). Bathymetric characteristics, geographic location, and water depth also play a role in the characteristics of each region. Defining characteristics of the regions include episodic vs. stable sediment accumulation patterns as well as surficial sediment characteristics (upper 0-10 mm of cores). Using 210 Pb xs age dating, cores were assessed for accumulation patterns from 1950 to 2018 for longer-term reference and in greater detail for three time periods, 2006-2009 (pre-spill), 2010-2013 (DwH spill/post-spill), and 2014-2018 (post-spill), to investigate the DwH spill and potential down-slope redistribution in subsequent years.

(Figure caption fragment: numbers indicate sites; red circles denote sites with DwH oil-residues as the potential major source; green circles indicate sites with mixed hydrocarbon sources including DwH oil-residues; blue circles indicate other unknown sources. The diagnostic ratios used are described in the method section.)
(a) Region 1 is in the NW portion of the study area, directly surrounding the DwH spill site, and includes our station DwH-01 and sites 1, 2, 3, and 4 (Figure 4). (b) Region 2 includes sites 5, 6, 12, 13, 14, and 30, to the SE of Region 1, consisting of a "belt" running from the NE to SW (Figure 4).
Region 1, in the NW portion of the study area, received the highest inputs of contaminated sediments during and shortly after the spill, as previously documented (Chanton et al., 2012, 2015; Passow et al., 2012; Joye et al., 2013; Brooks et al., 2015; Yan et al., 2016; Ziervogel et al., 2016; Romero et al., 2017) (Figure 4). Varying seafloor morphology with steep slopes and valleys was prevalent in this area (Figure 1), providing grounds for seafloor instability and a higher potential for down-slope sediment transport. 210 Pb xs data and sedimentary structures indicate pulsed downslope transport, accompanied by episodic accumulation of sediments, as the main mechanism of sediment accumulation in this Region (Figure 2). Sedimentology reflects a dominance of fine-grained terrigenous source sediments with the highest %silt in the study area. The source of pulsed sediments accumulating in Region 1 likely lay upslope to the N and NW. The first peak in particle resuspension with increasing flow speed occurred above 10 cm s−1 in this Region, and a total collapse of the sediment structure and complete erosion occurred above 13 cm s−1. These high initial flow speeds needed to erode the surface indicate that this material was relatively new material that had arrived from the sea surface.

FIGURE 12 | Cross plots of diagnostic ratios for the deep-sea sites studied in the northern GoM. Numbers indicate sites, red circles denote sites with DwH oil-residues as the potential major source, green circles indicate sites with mixed hydrocarbon sources including DwH oil-residues, and blue circles indicate other unknown sources.
Region 2 is a transition zone from steep slopes (>10°) on the salt domes, with deeply incised smooth valleys between these domes (Figure 1), spreading out into an open plain on the seafloor with a decrease in slopes to near 0° in the more distal, SE portions. Region 1 and Region 2 had similar surface sediment characteristics. The water depth in this Region ranges from 1,250 to 1,900 m. 210 Pb xs data indicate more consistent, stable accumulation with lower MARs, indicating that down-slope sediment transport is not a dominant contributor to accumulation and is likely bypassing this Region (Figures 5A-C). Evidence for sediment resuspension in this area by near-inertial currents as well as tropical-storm-induced events has been reported in independent studies at these water depths (Gardner and Sullivan, 1981; Isley et al., 1990; Diercks et al., 2018), which may also limit deposition of sediments associated with down-slope transport. Sedimentology shows a decrease in %terrigenous sediments as compared to Region 1, which is expected with increased distance from the Mississippi River. In this Region, peaks in the volume of particles resuspended occurred at flow speeds above 13 cm s−1, indicating that the cores were missing the surface layer of loose material. These sites had the highest %carbon and the most enriched 14 C values, indicating deposition of younger material originating from the sea surface and not from resuspension.
The NW boundary of Region 3 can be visualized by a line drawn just north of sites 7, 12, and 15, with sites 12 and 15 being on the boundary between the two Regions (Figure 4). Region 3 encompasses the largest part of the study area on the seafloor, with depths greater than 1,900 m. Most of the sites in Region 3 do not contain DwH oil-residues as the potential major source for hydrocarbons (eight out of eleven studied sites in this region). The majority of sites in Region 3 have a higher relative abundance of C12-C25 n-alkanes in most sediment layers (Supplementary Figures 9a,b), indicating a potentially larger presence of bacterial alkanes. Seafloor slope angles in this area are in general <2°, sloping from the NW to the SE. Region 3 has the highest MARs (Figures 5A-C and Supplementary Figure 3), and sediment records indicate a strong prevalence of episodic (pulsed) sediment accumulation (Figure 4), likely associated with down-slope transport events. The NE portion of Region 3 has the highest %clay, which may indicate accumulation of resuspended fine-grained sediments from upslope. The highest %sand values were found in the SE portion of Region 3, associated with more sand-sized biogenic grains as shown by the concurrent increase in %carbonate. This was likely due to a decrease in fine-grained terrigenous sediment with further distance from the Mississippi River source. The uppermost 2 mm of sediments from cores collected in this area were easily resuspended at current speeds of 5-8 cm s−1 (Figure 3), indicating that these sediments consisted of unconsolidated material. %Carbon content (Figure 6) for these samples was low (1.25 to 1.75%), indicating older (Figure 8), reworked material being deposited, as discussed elsewhere (e.g., Diercks et al., 2018). We argue that these sediments either have been exposed to prolonged bacterial decomposition and remineralization after deposition on the seafloor or were transported from higher up on the slope as a result of resuspension and lateral transport. This lateral transport and redeposition of material would result in a loosely consolidated sediment layer that is easier to resuspend at lower flow speeds and presents the characteristics of older sediment. This transport and redeposition of older material matches the results from the watershed model of the study area, which presented a general E to SE flow direction with a confluence of gravitational flow channels in the SE part of the Region (Figure 1). The model used the high-resolution seafloor morphology to determine gravity-driven downslope flow; overlying water column currents were not considered.

FIGURE 13 | Profiles of hydrocarbon concentrations at the study sites located in the northern GoM. Hydrocarbons refer to the sum of aliphatics, PAHs, hopanes, steranes, and TAS. The graph shows shaded areas corresponding to the time period 2010-2013; areas above and below the gray area correspond to 2014-2018 and pre-spill periods, respectively. Red circles denote samples with DwH oil-residues as the potential major source, green circles indicate samples with mixed hydrocarbon sources including DwH oil-residues, and black circles indicate other unknown sources.
Sedimentary structures indicative of sediment re-deposition by gravity flow processes were detected in almost all cores throughout the study area (Figure 2 and Table 1). Thin, mm-scale, sub-parallel laminae and wavy-bedded units, by far the most common, were found in Regions 1, 2, and 3. The less common inclined beds and color-banded units were confined to Regions 2 and 3 (Figure 4). All of these structures are common in adjacent Mississippi Fan deposits and have been attributed to low-density, fine-grained turbidity currents, slides and/or slumps (Cremer and Stow, 1986; Normark et al., 1986; Stow et al., 1986; Thayer et al., 1986). Locations classified as depocenters exhibited two peaks in particle counts with increased flow speeds, one at low flow speeds and a second peak at higher flow speeds. This second peak coincided with the peaks from locations (erosional sites) that were missing the low flow speed peaks, indicating that the surface sediments in the depocenters were comprised of two distinctly different materials: loosely compacted material at the surface and a more consolidated layer of material below the surface. Once flow speeds in the flume reached 13 cm s−1, the exposed sediment in all cores eroded and disintegrated rapidly. Our results fall well within the range of prior data published in the literature. Lampitt (1985) reported that flow speeds of 6-8 cm s−1 can move low-density aggregates of phytodetritus (mm to cm in size) in the field, and similar values have been reported for flume studies. Beaulieu (2003) compiled a list of theoretical, flume, and field measurements for critical erosion velocities of bioturbated silty sediments, which Gardner et al. (2017) further summarized, concluding that resuspension of the fine silt fraction would occur at flow speeds as low as 11-12 cm s−1, with the sand-size fraction resuspended at 25-30 cm s−1.
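To make the flow-speed thresholds discussed above easier to apply, the following is a minimal Python sketch that classifies a measured near-bottom flow speed against the resuspension thresholds quoted in this section. The function name and structure are illustrative only and are not part of the original analysis.

```python
def resuspension_regime(flow_speed_cm_s: float) -> list[str]:
    """Classify which sediment fractions are likely mobilized at a given
    near-bottom flow speed, using the thresholds quoted in the text:
    ~5-8 cm/s  : loose surface aggregates / phytodetritus,
    ~11-13 cm/s: fine silt fraction and loosely consolidated surface layer,
    ~25-30 cm/s: sand-size fraction."""
    mobilized = []
    if flow_speed_cm_s >= 5:
        mobilized.append("loose surface aggregates (phytodetritus)")
    if flow_speed_cm_s >= 11:
        mobilized.append("fine silt fraction")
    if flow_speed_cm_s >= 13:
        mobilized.append("exposed core surface (rapid erosion observed in flume)")
    if flow_speed_cm_s >= 25:
        mobilized.append("sand-size fraction")
    return mobilized


if __name__ == "__main__":
    # Example speeds spanning the regimes discussed in the text
    for u in (4, 7, 12, 14, 28):
        print(f"{u:>4} cm/s -> {resuspension_regime(u)}")
```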
With respect to levels of concern for the toxic compounds analyzed in this study, we found that even though PAHs were abundant at the studied sites, most of their concentrations were lower than levels of concern for marine biota (Long et al., 1995; Bejarano and Michel, 2010) (Supplementary Figures 10a,b). Exceptions were found for LMW PAHs at site 1 (sediment interval 0-2 mm, with a concentration of 0.6 µg/g), and for HMW PAHs at site DwH (sediment interval 16-25 mm, with concentrations of about 2.6 µg/g) and site 17 (sediment interval 16-18 mm, with a concentration of 2.6 µg/g).
Time Periods
To better understand the role of major natural (e.g., downslope movement of particles) and anthropogenic (i.e., MOSSFA) depositional processes on the fate of DwH-derived hydrocarbons in deep-sea environments of the GoM, data were assessed for three time periods (pre-spill 2006-2009, spill/post-spill 2010-2013, and post-spill 2014-2018). The pre-spill (2006-2009) time interval represents sediment data from before the DwH spill and MOSSFA event and is characterized by lower concentrations of hydrocarbons compared to the other time periods (Figure 13). The 2010-2013 time interval includes the time immediately after the oil spill as well as the MOSSFA event, as defined by areas sampled during and immediately after the spill (Passow et al., 2012; Brooks et al., 2015; Daly et al., 2016; Larson et al., 2018). In this time period, eleven of the 24 studied sites likely contain DwH oil-residues as the potential major source for hydrocarbons (Figure 13). Most of the eleven sites containing DwH oil-residues are in Regions 1 and 2, and only one site is in Region 3 (site 15). The 2014-2018 interval represents the time after the MOSSFA event, in which we identified lateral and downslope movement of material into the deeper sections of the GoM. These time periods provided the basis to test our hypothesis of a general down-slope, SE transport of material that was initially deposited during the DwH oil spill. DwH oil-residues that were deposited under the surface expression of the oil spill and the submerged oil plumes may have been moved downslope through common processes like resuspension and gravity flows (Diercks et al., 2018).
In the time period between 2014 and 2018, MARs had returned to pre-spill levels (Larson et al., 2018); however, there was large variability among the three time periods in the total concentration of hydrocarbons (Figure 13), as well as for specific hydrocarbon compounds (Supplementary Figures 9a,b). This variability is highlighted by the relative composition among time periods, which indicates a highly dynamic sedimentation regime in the study area (Supplementary Figures 5a,b). In this time period, 14 of the 24 studied sites contain DwH oil-residues as the potential major source for hydrocarbons (Figure 13). The 14 sites containing DwH oil-residues are located in all Regions, and only four sites (#1, 7, 8, 21) contain DwH oil-residues solely in the 2014-2018 period (Figure 13). Mapping the most recalcitrant compounds, such as biomarkers, for each time period shows a general spatial pattern of hydrocarbon concentrations decreasing toward the southeast of the study area (Figures 14A-C). The most elevated concentrations were observed only post-spill (2010-2013 and 2014-2018), mostly at sites located closer to the DwH site (Figures 14A-C). Millimeter-scale events were identified within the upper two centimeters (Figure 13). These events, typically dated between 2014 and 2016 (post-DwH), are consistent with redeposition of resuspended material and were characterized by relatively high MAR, homogeneous bulk organic δ13C, and maxima in both organic biomarkers and benthic foraminifera fracture percentage. The uppermost 2 mm of the cores were dominated by benthic foraminifera (e.g., predominantly Bolivina lowmani, Eponides turgidus, Trochammina inflata, Uvigerina peregrina), planktic foraminifera (e.g., Globigerinoides ruber, Globorotalia menardii, Orbulina universa) and some deep-sea sediments, and date to the 2016-2018 depositional period, providing an average sedimentation rate of 0.10 to 0.14 g cm−2 year−1 as determined by MARs from 210Pbxs measurements.
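The mass accumulation rates referred to throughout this section are derived from excess 210Pb profiles. As a rough illustration only (a minimal sketch under a constant flux/constant sedimentation assumption, not the authors' actual age-model code), the snippet below fits an exponential decay of 210Pbxs activity against cumulative dry mass to obtain a single MAR; the example activities and mass depths are hypothetical and merely chosen to fall in the range reported above.

```python
import numpy as np

PB210_LAMBDA = np.log(2) / 22.3  # 210Pb decay constant, 1/yr (half-life 22.3 yr)


def mass_accumulation_rate(cum_mass_g_cm2, pbxs_dpm_g):
    """Fit ln(activity) = ln(A0) - (lambda / MAR) * cumulative mass depth,
    i.e. a constant flux / constant sedimentation (CF:CS) model.
    Returns MAR in g cm^-2 yr^-1."""
    cum_mass = np.asarray(cum_mass_g_cm2, dtype=float)
    activity = np.asarray(pbxs_dpm_g, dtype=float)
    slope, _intercept = np.polyfit(cum_mass, np.log(activity), 1)
    return -PB210_LAMBDA / slope


# Hypothetical profile: cumulative dry mass (g/cm^2) vs excess 210Pb activity (dpm/g)
cum_mass = [0.05, 0.15, 0.30, 0.50, 0.75, 1.00]
activity = [19.7, 19.2, 18.5, 17.5, 16.4, 15.4]

print(f"MAR ~ {mass_accumulation_rate(cum_mass, activity):.2f} g cm^-2 yr^-1")
```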
CONCLUSION
Overall, our results indicate an increased spatial footprint of deposition of DwH-derived hydrocarbons, for which we offer two explanations: (1) the deposition of contaminated sediments was not previously identified due to a lack of sampling in our study area, and (2) sediments were redistributed down-slope over the 8 years following the DwH oil spill and the initial MOSSFA deposition to the seafloor. We based our conclusions on the characteristics of three Regions as defined by their morphological and sedimentological features, as well as the likelihood of sediments accumulating in these Regions through episodic down-slope transport mechanisms. The nature of the organic compounds found in some of the cores, as well as the sediment composition from these Regions, allowed us to trace them back to the DwH oil spill (in 15 out of 24 studied sites). Region 1 showed the presence of episodic sediment accumulation and had the potential for longer-term accumulation and sequestration of redistributed sediments from up-slope areas. Region 2 was defined by stable and consistent sediment accumulation, with low-magnitude down-slope sediment accumulation events being very subtle in the sedimentary record. We determined that this area is less likely to accumulate and sequester redistributed sediments. Region 3 had an increased presence of episodic sediment accumulation through larger-magnitude pulse events. These events lead to a higher potential for accumulation and sequestration of redistributed sediments from upslope areas.
Our findings presented in this paper provide evidence that the footprint of the residues from the oil spill on the seafloor changed over the time span from 2010 to 2018, expanding to the SE beyond the previously reported areal extent. In the sedimentary record, DwH oil-residues were found in sites during both post-spill periods studied (2010-2013 and 2014-2018; sites: DwH-01, 2, 3, 4, 5, 6, 12, 13, 15, 30) as well as in sites only during the 2014-2018 period (sites: 1, 8, 21). Specifically, sites located approximately 45 km (site 8; 28.4166° N, 88.0838° W) and 96 km (site 21; 28.3306° N, 87.5042° W) to the SE of the wellhead indicate a larger area affected by DwH oil-residues due only to down-slope redistribution of organic matter by natural processes at depth in the GoM. Our data thus suggest that a much larger area on the seafloor contained residues of DwH oil than previously recognized and published in the literature (Lehr et al., 2010; Chanton et al., 2012; Lubchenco et al., 2012; Romero et al., 2017).
FACTORS STUDY OF NEW MEDIA LITERACY IN INDONESIA ON ENVIRONMENTAL AWARENESS CHARACTER TO PROTECTING THE ENVIRONMENT
The purpose of this research is to analyze new media literacy for the intelligence of society on Instagram regarding information on protecting the environment. The concepts used in this study are environmental communication, development communication, and new media. The method used is quantitative research with factor analysis. Our respondents supported our research, and the validity and reliability of the instrument are good, so the analysis could continue to factor analysis. Instagram is the object of this study. The respondents are commenters on the government's Instagram posts about protecting the environment from October to December 2018. The conclusion of this research is that all dimensions of new media literacy in Indonesia are positive. The suggestion of this research is that the community should be given information in messages that can be digested immediately. The impact of this research is that people must pay attention to how significant the content of new media is.
INTRODUCTION
The background of this research is that new media can have a positive side for people, especially for students. Students are helped by new media in learning their school subjects, for instance mathematics, science, and updated senior high school materials in Indonesia. But there is a gap in new media use. Given these facts about senior high school students in Indonesia, this research hopes to provide a positive contribution for students who have to learn about environmental awareness character (Saadah et al., 2017).
This research builds on several previous studies. Gaines (2014) explained that the recycling of ion batteries could change. De Gisi et al. (2016) added a new finding on the characteristics and adsorption capacities of low-cost sorbents for wastewater treatment. The world cannot keep making progress while wastewater goes untreated, but Sabiro's research suggests that the cost of wastewater treatment can be pressed down. Bramantoro (2018) said that maintaining the environment should be a human habit, not a new thing for the community, yet people are increasingly careless about protecting the environment. According to 2018 data, plastic is the most popular material in the world; its use has increased 20-fold in the last 50 years. Although demand continues to increase, according to the World Economic Forum (WEF) report, only 5% of plastic is recycled effectively, while 40% ends up in landfill (TPA), and the rest ends up in ecosystems such as the ocean. If waste management does not start now, it is predicted that by 2050 there will be more plastic waste in the ocean than fish living in it. It is also said that plastic waste pollution can be fought by optimizing its potential economic value, one way being a recycling model. From these data, it can be seen that there is more and more plastic in our environment, which makes people familiar with the conditions resulting from the accumulation of garbage in their area.
The plastic recycling industry has now developed in Indonesia, especially for the types of plastics that have economic value, such as PET and PP, both of which have recycling rates above 50%. Trash has economic value if it is managed well (Damayanti, 2010). Recycling is important as a stage in implementing a circular economic model, which is seen as able to fight plastic waste. The recycling chain is the main key in implementing a circular economy. By recycling plastic waste and reusing recycled products, the accumulation of waste in landfills can be reduced. This model also has economic value for the community and can support waste-processing industries.
The composition of municipal solid waste in Indonesia is about 60% organic waste, 14% plastic waste, 9% paper waste, 4.3% metal, and 12.7% other waste (glass, wood, and other materials). SWI also maps waste management carried out in a number of cities, in collaboration with a number of associations and community groups, including the Association of Indonesian Scavengers (IPI), the Indonesian Waste Bank Association (ASOBSI), and the Indonesian Plastic Recycling Association (ADUPI). In strengthening the analysis, SWI conducted a field study in Jakarta as a representation of large cities and Ambon as a representation of small cities in Indonesia, accompanied by interviews at a number of second-hand shops (Andarani & Goto, 2014).
Waste management in Indonesia itself has been regulated in Law No. 18/2008. However, there are still obstacles in the implementation of waste management. As part of regional autonomy, waste management is under the jurisdiction of local governments at both the city and district levels. However, imperfect management will have a national impact and can even become a global problem, such as the plastic waste found in the ocean (Bramantoro, 2018).
Waste management by the community and the state has thus been regulated reasonably well. However, of the more than 43 thousand media outlets throughout Indonesia, fewer than 5,000 online media and online sites are officially registered and have recognized information accuracy. The truth of information or news disseminated on social media needs to be questioned, because much of the news contains only the opinions of those who made it, with a specific purpose and goal, sometimes even the purpose of creating discord. In the mechanism of disseminating good and true news, the arrival of news to consumers must go through several stages and strict screening. In mainstream media, redactors and editors are the gatekeepers before the news reaches the reader. However, early media literacy in society is the main and most important gatekeeper when readers receive news from various media (Rianto, 2016).
The Indonesian community's ownership of new media is very up to date. However, new media users' understanding of the content of messages on Instagram has not been optimal, especially for messages about protecting the environment. The close proximity of digital media to students has not only positive impacts but also negative ones. Among the positive impacts are (1) easier and more efficient searching for sources of information, (2) assistance with the learning process, and (3) easier transactions in the economic field. The negative impacts are (1) dependence on the digital world, with almost all of one's time absorbed by it, more commonly referred to as addiction, (2) exposure to pornography, and (3) its use as a venue for fraud (Adiarsi et al., 2015).
New media literacy thus has both positive and negative effects for users and for the community, as mentioned above.
From the research problems above, the problem of this research is identifying the most dominant factors of media literacy for people's intelligence on Instagram regarding protecting the environment. The purpose of this research is to study those most dominant factors. The academic benefit of this research is results that can explain which factors are the most important for people's intelligence in media literacy on Instagram regarding environmental information. The social benefit of this research is an explanation of which media literacy factors most influence people's intelligence with respect to messages on maintaining the environment.
The central concept of this research is new media literacy: the ability to access, analyze, evaluate, and communicate information in various forms of media. Media literacy is a set of perspectives that are used actively when accessing mass media to interpret the message at hand (literasipublik.com). According to Silverblatt, the elements of media literacy are (1) awareness of the influence of the media on individuals and society, (2) understanding of the mass communication process, (3) developing strategies for analyzing and discussing media messages, (4) awareness that media content is a text that describes the culture and ourselves at this time, and (5) developing pleasure and respect for media content (Suyatna, 2018; Susongko & Afrizal, 2018). The difference between this research and Suyatna (2018) is that the latter examined the development of new learning media compared with old learning media, while this study looks at which factors are the most dominant in the media literacy variable in educating the public, using messages to protect the environment on Instagram. The learning media used are new media; Instagram, although a social medium, is very effective in influencing society.
The next difference is with the study by Susongko & Afrizal (2018), which looked at public awareness of maintaining the environment, whereas this study looks at which factors are the most dominant in Instagram media literacy regarding information about protecting the environment. The similarity with that study is the use of the same statistical analysis, namely factor analysis, because the goal is to see which factors are the most dominant in media literacy in educating the public through Instagram on information about protecting the environment.
The similarity with the research entitled representation of media literacy in the dimensions of social life in Indonesia is the focus on new media literacy. The novelty of this research is that it looks at different factors from previous research on the influenced variable, namely people's intelligence in using new media, specifically Instagram.
The difference with the research of Corbin et al. (2018) is that theirs studied how the field of medicine reads and becomes smarter with existing data. The field of medicine requires media literacy as well; it has special symbols for medical results, and this must be mastered by the medical profession. The novelty of this research is that it uses Instagram, a medium that many people today use in many ways. The researchers want to explain which factors of media literacy are the most dominant for people's intelligence regarding information carried by Instagram.
Previous research by Brook et al. (2014) found that nuclear energy could be a sustainable part of our lives. The objective of this research is to analyze the dominant factors of new media literacy in Indonesia. The urgency of this research lies in analyzing those factors, because Indonesian people own the gadgets but their literacy regarding media content is only moderate. The novelty of this research is analyzing new media literacy to educate Indonesian people.
METHODS
The population of this research is accounts that comment on messages, photos, and videos voicing environmental issues in new media (Twitter, Instagram, and YouTube). The sample is 219 respondents, selected by purposive sampling, with the criterion of having commented constructively with either positive or negative messages.
The dimensions to be measured are (1) awareness of the influence of the media on individuals and society, (2) understanding of the mass communication process, (3) developing strategies for analyzing and discussing media messages, (4) awareness that media content is a text that describes culture and ourselves at this time, and (5) developing pleasure and respect for media content. Human intelligence is measured by (1) Intellectual Intelligence or Intelligence Quotient (IQ): an individual's ability to think, process, and master his or her environment to the maximum and act in a directed manner; this intelligence is used to solve logical and strategic problems. (2) Emotional Quotient (EQ): the ability to recognize, control, and organize one's own feelings and other people's feelings deeply so that one's presence is pleasing and welcomed by others; this intelligence gives us awareness of our own feelings and those of others, and gives empathy, love, motivation, and the ability to respond to sadness or excitement appropriately. (3) Spiritual Quotient (SQ): a source of inspiration that lifts a person's spirit by attaching them to the values of truth without limit; this intelligence is used to distinguish good and bad, right and wrong, and to understand moral standards.
Table 1 presents the validity and reliability of the dimensions of media literacy in this study. Validity and reliability are very important for continuing this research. All measurements of the research instrument are valid and reliable, so the analysis can proceed to the next multivariate stage, namely factor analysis, to find the dominant factors of the media literacy variable in increasing the intelligence of Instagram users regarding environmental messages.
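As an illustration of the reliability check and the subsequent factor analysis described above, the following is a minimal Python sketch, not the authors' original statistics workflow. The item responses, column names, and number of factors are hypothetical placeholders.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis


def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert items (rows = respondents)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)


# Hypothetical 5-point Likert responses (219 respondents x 10 items)
rng = np.random.default_rng(0)
data = pd.DataFrame(rng.integers(1, 6, size=(219, 10)),
                    columns=[f"item_{i}" for i in range(1, 11)])

print(f"Cronbach's alpha: {cronbach_alpha(data):.2f}")

# Exploratory factor analysis with, e.g., 5 factors (one per literacy dimension)
fa = FactorAnalysis(n_components=5, random_state=0).fit(data)
loadings = pd.DataFrame(fa.components_.T, index=data.columns,
                        columns=[f"factor_{j}" for j in range(1, 6)])
print(loadings.round(2))
```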
RESULTS AND DISCUSSION
The descriptive results for the media literacy and community intelligence variables are as follows. Both the media literacy and community intelligence variables received positive responses from respondents. The frequency tables for both variables follow.
Uses and gratifications theory is used in this research, because what consumers need should match the results they receive; uses and gratifications apply to consumers who depend on media, and audiences actively select media for what they need (Wang, 2012). In the media literacy variable, the dimension of awareness of the influence of the media on individuals and society is dominated by positive responses from respondents. As noted by Wu (2016), positive responses from respondents are very dominant here. Instagram users say they are aware of the magnitude of Instagram's influence through messages to protect the environment. However, many respondents' habits are also shaped by the absence of models or role models who can motivate them to maintain the environment, so they fall back into their habit of not taking care of their environment.
The dimension of understanding of the mass communication process is likewise strongly dominated by positive responses. Respondents understood that Instagram is a mass medium and knew that the response they receive will be slow. However, the ethical consequences of protecting the environment are not recognized by many people, so they do not care about the messages on maintaining their environment.
The dimension of developing strategies for analyzing and discussing media messages was also responded to positively by respondents. Respondents initially did not understand what was meant by developing a strategy and discussing media messages, but after being directed by the researchers, they understood what they should do (Picazo-Vela et al., 2016).
The dimension of awareness that media content is a text describing culture and ourselves at this time was also responded to positively by the respondents. However, they realized they were not doing what they were supposed to do and still do what they should not. For example, one has to plant trees if one cuts down trees; the community is very aware of that, but people do not do it. They are aware of the danger of littering, yet they do not dispose of garbage in its proper place.
The dimension of developing pleasure and appreciation of media content received the most positive responses from respondents. Respondents know and feel the benefits of using Instagram directly. Users do not have to understand the message conveyed by the sender; if the user is satisfied with using Instagram, then the benefit of the medium has been delivered. In the uses and gratifications concept, communicants actively seek media messages to fulfill their needs; currently, Instagram users are oriented toward fun, so this dimension was responded to very positively by respondents.
For the community intelligence variable, the dimension of intellectual intelligence was responded to positively by Instagram users. They are very proficient and skilled in using new media, but they do not understand Instagram's messages well. As this study shows, they are proficient in intellectual intelligence but have not understood the content of the messages regarding protecting the environment.
The dimension of emotional intelligence was also responded to positively by respondents, but the community is not encouraged to go out and meet other individuals directly. Through Instagram, they are prompted to like a message first, even though they do not necessarily understand it, so Instagram users easily get emotional, while interactions with other individuals are, it may be said, not too frequent.
The dimension of spiritual intelligence was also received positively by respondents, but respondents rarely look for accounts that can guide them according to their respective beliefs; they prefer to look at Instagram content they like, not what they need. As in the previous study by Susongko (2018), whose dominant finding was that respondents' awareness of maintaining the environment was not very good, the most dominant factor in this study is the dimension of developing pleasure and respect for media content. Previous research also stated that the most dominant factor was the dimension of pleasure and appreciation, because when respondents react to every photo and video regarding the environment, they are interested in things other than the message conveyed to protect the environment.
CONCLUSION
Based on the results of this study, the dominant factor in the media literacy variable is the dimension of developing pleasure and respect for media content. The other dimensions already look positive but need to be given further stimuli in order to strengthen the dominant factors in this variable. The context of future studies can also be developed, not only for new media but also for more interactive media.
Influence of L. thermotolerans and S. cerevisiae Commercial Yeast Sequential Inoculation on Aroma Composition of Red Wines (Cv Trnjak, Babic, Blatina and Frankovka)
Even though Saccharomyces cerevisiae starter cultures are still largely used nowadays, the non-Saccharomyces contribution is being re-evaluated, showing positive enological characteristics. Among them, Lachancea thermotolerans is one of the key yeast species desired for their contribution to wine sensory characteristics. The main goal of this work was to explore the impact of an L. thermotolerans commercial yeast strain used in sequential inoculation with S. cerevisiae commercial yeast on the main enological parameters and volatile aroma profile of Trnjak, Babić, Blatina, and Frankovka red wines and compare it with wines produced by the use of an S. cerevisiae commercial yeast strain. In all sequentially fermented wines, lactic acid concentrations were significantly higher, ranging from 0.20 mg/L in Trnjak up to 0.92 mg/L in Frankovka wines, while reducing alcohol levels from 0.1% v/v in Trnjak up to 0.9% v/v in Frankovka wines. Among volatile compounds, a significant increase of ethyl lactate, isobutyl acetate, geraniol, and geranyl acetate was detected in all wines made by use of L. thermotolerans. In Babić wines, the strongest influence of sequential fermentation was connected with higher total terpene and total ester concentrations, while Trnjak sequentially fermented wines stood out with higher total aldehyde, volatile phenol, and total lactone concentrations. Control wines, regardless of variety, stood out with higher concentrations of total higher alcohols, especially isoamyl alcohol. The present work contributed to a better understanding of the fermentation possibilities of selected non-Saccharomyces strains in overall red wine quality modeling.
Introduction
Wine quality is influenced by many factors starting from the geographical origin of the grapes, varietal grape must composition, the vinification process, and the microbial activity of the yeast species used. Wine is a complex mixture of chemical compounds that contribute differently to overall quality. Among them, volatile aroma compounds, which can be divided according to their origin into varietal (grape) aromas, fermentative aromas, and aging aromas, are some of the most important contributors to flavor perception. Between grape varieties, there is a notable sensory difference in aroma composition that is usually not really perceptible at pre-fermentative stages but is strongly influenced by microbial activity during wine production [1]. Nowadays in winemaking, Saccharomyces cerevisiae commercial starter cultures are still largely used with the main goal being the assurance of more predictable and desired final wine quality results. However, some evidence suggests that the continuous use of commercial yeast can significantly reduce the variability of autochthonous yeasts as well as the aromatic complexity and uniqueness of the wine [2,3]. In the last decade, the contribution and important role of non-Saccharomyces wine yeasts were re-evaluated in many works [2,4-7] showing positive enological characteristics that are more or less absent in S. cerevisiae. Among them, Lachancea thermotolerans is one of the key yeast species desired for its positive contribution to wine sensory characteristics [8]. According to Gobbi et al. [9], the association of L. thermotolerans and S. cerevisiae significantly reduced ethanol levels by 0.7 to 0.9% v/v, especially when fermentation was carried out at lower temperatures. In the work by Binati et al. [7], the highest potential to reduce ethanol content was achieved by the use of L. thermotolerans strains. The possibility to increase lactic acid concentrations and at the same time reduce volatile acidity was confirmed by [4,10], while increased production of 2-phenylethyl alcohol was described as a characteristic of L. thermotolerans by Beckner et al. [11]. The same authors noted significantly higher production of the terpenes nerol and terpinen-4-ol as well as 3-methylthio-1-propanol. A study evaluating the impact of several non-Saccharomyces yeasts in sequential inoculation with S. cerevisiae showed that an L. thermotolerans-S. cerevisiae combination had the most potential for increased chemical complexity of the Shiraz volatile profile [12]. In the work by Whitener et al. [13], L. thermotolerans fermentation showed a higher amount of acetate esters and certain terpenes but also the lowest amount of both total acidity and malic acid, which is in agreement with previous data that had indicated S. cerevisiae as a poor L-malate metabolizer compared to non-Saccharomyces yeasts. Nowadays, based on previously published results, commercial non-Saccharomyces starter cultures have been developed for use in wine production, but compared to S. cerevisiae, little work has been done with commercial starter cultures that can point out what specific chemical profile to expect based on grape variety and overall fermentation conditions. The varieties Trnjak, Babić, and Blatina are native red grapevine varieties grown in the Dalmatia wine region (Croatia) used for the production of high-quality red wines. Typically, they have lower levels of total acidity and higher pH values in grape juice and wine, and this is especially expressed in years with extremely high temperatures.
Frankovka (syn. Blaufraenkisch) is a variety mostly distributed in the continental part of Croatia and Istria but also in neighboring regions of Slovenia, Hungary, and Austria. Usually, it is used for the production of fresh and fruity red wines, which are also hard to obtain in years with elevated temperatures, which have become more and more frequent in the last decades. The aims of the present study were to explore the impact of the L. thermotolerans commercial yeast strain (Laktia, Lallemand Inc., Montreal, QC, Canada), used in sequential inoculation with S. cerevisiae commercial yeast (Uvaferm BDX, Lallemand Inc. Montreal, QC, Canada), on the main enological parameters and volatile aroma profiles of Trnjak, Babić, Blatina, and Frankovka red wines and to compare it with control wines produced by use of an S. cerevisiae commercial strain. The present work contributes to a better understanding of the fermentation possibilities of selected commercial non-Saccharomyces strains in overall red wine quality modeling.
Yeast Strains
The commercial S. cerevisiae and L. thermotolerans strains were provided by Lallemand Inc., Montreal, QC, Canada as active dry yeasts. Both yeast strains were precultured in the same grape must at 25 °C for 72 h. Each yeast strain was added at approximately 1 × 10^7 cells/mL, and fermentations were carried out at 20 °C according to the manufacturer's instructions. The cell concentrations were determined by counting under a light microscope (Zeiss Axioscope 2-Plus, Carl Zeiss Ltd., Oberkochen, Germany).
Fermentation Trials
Grape varieties Trnjak and Blatina were grown in the Mostar vineyard, Bosnia and Herzegovina, while the other two grape varieties were grown in Croatia, namely Babic in the Jadrtovac vineyard (located near Šibenik) and Frankovka in the experimental Jazbina vineyard (located in Zagreb). For each grape variety (Blatina, Trnjak, Babić, Frankovka), 150 kg of grapes harvested in 2019 was destemmed, crushed, and distributed evenly into three 50 L stainless steel fermenters. Basic chemical composition of the grapes was as follows: for Blatina, initial sugar 220 g/L, total acidity 6.05 g/L as tartaric acid, yeast assimilable nitrogen 240 mg/L, and pH 3.39; for Trnjak, 205 g/L, total acidity 7.03 g/L as tartaric acid, yeast assimilable nitrogen 270 mg/L, and pH 3.52; for Babić, initial sugar 235 g/L, total acidity 7.60 g/L as tartaric acid, yeast assimilable nitrogen 220 mg/L, and pH 3.30; for Frankovka, initial sugar 230 g/L, total acidity 7.75 g/L as tartaric acid, yeast assimilable nitrogen 245 mg/L, and pH 3.32. In all variants, sulfur dioxide (SO2), in a concentration of 50 mg/L, was added to prevent oxidation and inhibit indigenous bacterial or fungal growth. The control variants were inoculated by S. cerevisiae Uvaferm BDX (control culture), while the sequential variants were inoculated with L. thermotolerans LAKTIA strain with the addition of the S. cerevisiae Uvaferm BDX after 2 days of fermentation. The maceration process, at 20 °C, lasted for 7 days, and during that period, mash aeration and cap management were carried out by mechanical mixing. Alcoholic fermentation finished by the end of the maceration process, and at that moment wines were separated from the pomace, and the solid pulp left behind was pressed by use of a hydropress (Lancman VS-A 80, Gomark d.o.o., Vransko, Slovenia). Free run wines and pressed wines were mixed. The course of fermentation was monitored by sugar consumption, and it was considered complete when the residual sugar concentrations were under 1.5 g/L. In all variants, fermentation started 24 h after inoculation and lasted between 10 and 12 days. In that period, fermentation kinetics was monitored by the decomposition of sugars showing no marked difference. The final wines were bottled in 750 mL glass bottles with screw caps and transported to the laboratory of the Department of Viticulture and Enology, Faculty of the Agriculture University of Zagreb, for chemical analysis.
Physicochemical Analysis
Basic wine parameters, including alcohol content (%, v/v), pH values, and total and volatile acidity, were quantified applying methods recommended by the International Organization of Vine and Wine (OIV, 2016) [14].
Organic Acids Analysis
Analysis of individual acids (malic and lactic acid) was done by an Agilent Series 1100 HPLC system equipped with a diode array detector (Agilent, Palo Alto, CA, USA). In brief, the determination was performed isocratically with the flow rate set to 0.6 mL/min with 0.065% phosphoric acid (p.a. Merck, Darmstadt, Germany) as a mobile phase. An Aminex HPX-87H column, 300 × 7.8 mm i.d. (Bio-Rad Laboratories, Hercules, CA, USA), was heated at 65 °C, while the detector was set to 210 nm [15].
Volatile Compounds Determination
Volatile compound analysis of wine samples was performed according to the described method [15]. Isolation of analytes was performed by solid-phase extraction (SPE) on LiChrolut EN cartridges (200 mg/3 mL, Merck, Darmstadt, Germany). First, 50 mL of sample was loaded onto the column, which had previously been conditioned by successive washing with 3 mL of dichloromethane (UHPLC gradient grade, J.T. Baker, Deventer, Netherlands), methanol (UHPLC gradient grade, J.T. Baker, Deventer, Netherlands), and 13% aqueous ethanol (LiChrosolv, Merck, Darmstadt, Germany) solution. After the passage of the sample through the column, residual sugars and other polar compounds were washed out with 3 mL of water. The column was dried by passing air through it. Elution of analytes was done with 1 mL of dichloromethane. As a quality control, 50 mL of water was loaded onto the SPE column instead of the sample. Quantitative and qualitative analyses were performed on a Thermo Scientific Trace 1300 system coupled with an ISQ 7000 mass spectrometer with a ZB-WAX column (60 m × 0.32 mm i.d., with 0.5 µm film thickness, Phenomenex, Torrance, CA, USA). The temperature program was as follows: 40 °C for 15 min, from 40 to 250 °C with increments of 2 °C per minute, and 250 °C for 15 min. The transfer line was set to 250 °C, and the flow rate of helium was 1 mL/min. The MS was operated in electron ionization (EI) mode at 70 eV with total ion current (TIC) monitoring. Identification was done by comparing retention times and mass spectra with those of standards. A list of used standards, linear retention indices, and other parameters for identification and quantification is presented in Table S1. Quantification was done by calibration curves. The curves (based on quantification ions) were constructed with Chromeleon™ Chromatography Data System (CDS) software. For all available standards (Table S1), six different concentrations were prepared. For two compounds (terpendiol I and II), semiquantitative analysis was performed; their concentrations were expressed in equivalents of similar compounds, with the assumption that the response factor was equal to one.
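Quantification via calibration curves, as described above, amounts to a linear fit of detector response (quantification-ion peak area) against standard concentration, followed by inversion for the samples. The snippet below is a minimal Python sketch of that idea; the peak areas and concentrations are hypothetical, and the actual work used the Chromeleon CDS software.

```python
import numpy as np

# Hypothetical six-point calibration: standard concentration (ug/L) vs peak area
std_conc = np.array([10, 50, 100, 250, 500, 1000], dtype=float)
std_area = np.array([1.1e4, 5.3e4, 1.05e5, 2.6e5, 5.2e5, 1.04e6])

# Linear calibration: area = slope * concentration + intercept
slope, intercept = np.polyfit(std_conc, std_area, 1)


def quantify(sample_area: float) -> float:
    """Back-calculate concentration (ug/L) from a sample's peak area."""
    return (sample_area - intercept) / slope


print(f"Sample with area 3.1e5 -> {quantify(3.1e5):.0f} ug/L")
```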
Determination of Odor Activity Values and Relative Odor Contributions
Each chemical substance can have a specific influence on the wine aroma. This influence can be presented by the odor activity value (OAV) and relative odor contribution (ROC). Thus, they can be used as markers in determining the role of a specific compound in the sample aroma composition. The OAV is calculated as the quotient of a compound's concentration (c) and its corresponding odor detection threshold (t) reported in the literature [16]. Volatile aroma substances with an OAV ≥ 1 can have a direct impact on aroma, and they are usually regarded as among the most significant volatile substances or the most active odorants [17]. Volatiles with OAVs < 1 can also positively influence the wine aroma complexity and the aromatic intensity of other compounds through synergistic effects [18]. The ROC of each aroma compound is calculated as the ratio of the OAV of the respective compound to the total OAVs of each wine [19].
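The OAV and ROC definitions above translate directly into a short calculation. The following is a minimal Python sketch; the compound names, concentrations, and thresholds are hypothetical placeholders, not values from Table 2.

```python
def oav_and_roc(concentrations_ug_l: dict, thresholds_ug_l: dict):
    """OAV_i = c_i / t_i; ROC_i = OAV_i / sum_j OAV_j (within one wine)."""
    oav = {name: concentrations_ug_l[name] / thresholds_ug_l[name]
           for name in concentrations_ug_l}
    total = sum(oav.values())
    roc = {name: value / total for name, value in oav.items()}
    return oav, roc


# Hypothetical example for a single wine
conc = {"beta-damascenone": 2.5, "ethyl hexanoate": 250.0, "isoamyl acetate": 400.0}
thr = {"beta-damascenone": 0.05, "ethyl hexanoate": 14.0, "isoamyl acetate": 30.0}

oav, roc = oav_and_roc(conc, thr)
for name in conc:
    print(f"{name}: OAV={oav[name]:.1f}, ROC={roc[name]:.2%}")
```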
Statistical Analysis
Means and standard deviations were calculated for all parameters related to physicochemical properties of wines as well as for all the volatile organic compounds obtained after analyses. One-way ANOVA was performed for all parameters separately due to the significant differences among the four cultivars studied; to define common effects of L. thermotolerans yeast in sequential fermentation with S. cerevisiae against control wine, data for volatile organic compounds were standardized within cultivars using z-score normalization. One-way ANOVA and two-sided Dunnett test were performed using standardized data to compare the treatment (L. thermotolerans) with control for data from all four cultivars. The analysis was carried out with XLSTAT software v.2020.3.1. (Addinsoft, New York, NY, USA).
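The standardization-within-cultivar step and the subsequent comparison against the control can be sketched as follows. This is a minimal illustration assuming long-format data with hypothetical column names, not the authors' XLSTAT workflow; the Dunnett test shown requires SciPy 1.11 or newer.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical long-format table: one row per wine replicate
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "cultivar": np.repeat(["Trnjak", "Babic", "Blatina", "Frankovka"], 6),
    "treatment": np.tile(["control", "control", "control",
                          "sequential", "sequential", "sequential"], 4),
    "ethyl_lactate": rng.normal(10, 2, 24),  # placeholder concentrations
})

# z-score standardization within each cultivar
df["z"] = df.groupby("cultivar")["ethyl_lactate"].transform(
    lambda x: (x - x.mean()) / x.std(ddof=1))

control = df.loc[df["treatment"] == "control", "z"].to_numpy()
sequential = df.loc[df["treatment"] == "sequential", "z"].to_numpy()

# One-way ANOVA on standardized data
print(stats.f_oneway(control, sequential))

# Two-sided Dunnett test of the treatment against the control (SciPy >= 1.11)
print(stats.dunnett(sequential, control=control))
```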
Physicochemical Composition
The results of the basic physicochemical analysis of the wines are presented in Table 1, showing that the use of L. thermotolerans yeast in sequential fermentation with S. cerevisiae can serve as a useful tool for reducing the alcohol content of wines through the production of lactic acid, thus leading to biological acidification. Previous studies [7,20] have already pointed out that the use of non-Saccharomyces yeasts can reduce the alcohol content of wine, which is in accordance with our data. Reductions in alcohol level ranged from 0.1% v/v in Trnjak wines up to 0.9% v/v in Frankovka wines. In the work by Sgouros et al. [21], alcohol reduction by use of the high lactate-producing L. thermotolerans strain (P-HO1) in sequential inoculation with S. cerevisiae produced the highest levels of lactic acid ever recorded in mixed fermentations (10.4 g/L), thereby increasing the acidity and reducing ethanol by 1.6% vol. In our work, lactic acid concentrations were also significantly higher in all sequentially fermented wines, regardless of variety, ranging from 0.20 mg/L in Trnjak wines up to 0.92 mg/L in Frankovka wines. Natural S. cerevisiae strains produce only traces of D-lactic acid during alcoholic fermentation, and levels between 100 and 500 mg/L have been reported in final wines [22]. Higher lactic acid concentrations had a positive effect on the total acidity and pH values of sequentially fermented wines, ensuring better wine stability as well as aging potential and overall quality. This is especially important nowadays, with global climate change influencing grape composition and resulting in lower acidity and increasing sugar concentrations [23]. Volatile acidity is one of the important parameters influencing wine quality, and it is also strongly dependent on the type of yeast conducting alcoholic fermentation. In the past, non-Saccharomyces yeasts were considered undesirable, and one of the reasons was higher acetic acid production. Nowadays, published studies have generated highly variable results, showing that some of them can have desirable enological properties connected with low production of volatile acidity [4]. Among non-Saccharomyces yeasts, L. thermotolerans stood out as a low acetic acid producer, which has been shown in our work, with volatile acidity not differing compared to values achieved in fermentation conducted by S. cerevisiae commercial yeast. Differences observed in malic acid concentrations could be connected with the esterification process, resulting in the presence of diethyl malate (Table 1), or the weak but possible ability of S. cerevisiae to metabolize L-malic acid during wine fermentation [22].
Volatile Compound Composition
In Table 2, one hundred and twenty-one individual volatile compounds are presented, quantified, and classified into several chemical classes (aldehydes, higher alcohols, volatile phenols, terpenes, C13-norisoprenoids, lactones, esters, fatty acids, sulfur compounds, other compounds, other alcohols), showing significant differences among red wines produced by the use of pure S. cerevisiae commercial yeast and the combination of L. thermotolerans and S. cerevisiae commercial yeast within the four cultivars. Significant varietal effects were obtained for the majority of volatile compounds except for trans-3-hexene-1-ol, tyrosol, 1,8-terpin, 8-hydroxylinalool, nerolidol, menthol, β-ionone-5,6-epoxide, nonanoic acid, 1,4-butanediol, and acetoin. For this reason, standardized data were used to define the common effects of L. thermotolerans and S. cerevisiae on volatile compounds. In Figure 1, results of the two-sided Dunnett test using standardized data (z-scores) are presented only for volatile compounds with significant differences against the control for all cultivars. The figure shows the effects of L. thermotolerans in sequential fermentation with S. cerevisiae against control wines, expressed as the difference of z-score from control (presented as 0 value) for all four cultivars, using z-score standardization within cultivars, for volatile aroma compounds with significant effects only; significance levels: * p < 0.05, ** p < 0.01, and *** p < 0.001 with the two-sided Dunnett test.
Aldehydes
Aldehyde concentration is connected with the degree of ripeness, treatments before fermentation, enzymatic oxidation, and breakdown of grape lipids, as well as variety. Comparing Babić, Blatina, Frankovka, and Trnjak total aldehydes concentrations, the highest ones were detected in Trnjak wines, while there were no marked differences between the others. Trnjak wines were also the only ones with a positive influence of sequential fermentation on total aldehyde concentration as well as 5-hydroxymethylfurfural and furfural concentrations. In order to protect themselves, yeasts reduce both furfural and HMF to their furyl acid or alcohol derivatives through NAD(P)H-dependent reductive pathways that utilize a range of aldehyde dehydrogenases involved in glycolysis and ethanol fermentation. Under aerobic conditions, S. cerevisiae transforms furfural to furoic acid, while under anaerobic fermentation, the primary product is furfuryl alcohol [24]. These detoxification processes lead to a lack of NADH, suggesting that furfural reduction competes for NADH and results in a decrease in cell growth and ethanol formation [25,26]. Accordingly, L. thermotolerans may have a stronger ability to reduce these aldehydes, even though there was no significant difference in furfuryl alcohol production between control and sequential fermentation wines. Decanal, as the only individual aldehyde with OAV > 1 in Blatina, Frankovka, and Trnjak wines produced with L. thermotolerans, was significantly higher compared to control wines with a notable odor contribution.
C13-Norisoprenoids and Terpenes
These two groups of chemical compounds primarily generate the varietal odor profile of wines, characterized by floral and fruity aromas, and are mainly translocated from the grape to the must during the crushing, pressing, and settling process in free volatile form or bound to sugars. Thus, higher enzymatic activity by the action of endogenous or exogenous glycosidase enzymes during the winemaking process can influence their release. Previous works [27-29] have shown that non-Saccharomyces yeasts, among them certain strains of L. thermotolerans, can have high β-glucosidase activity. Only in Babić wines was total terpene concentration significantly higher in sequentially fermented wines, due to the higher concentrations of linalool, 8-hydroxylinalool, tetrahydrolinalool, farnesol, neral, geraniol, and geranyl acetate. Significantly higher concentrations of geraniol and geranyl acetate were present in all sequentially fermented variants, regardless of variety, which is in accordance with data published by Beckner Whitener et al. [13]. Farnesol has also been positively connected with L. thermotolerans activity [13], while in the work by Whitener et al. [12], linalool was indicated as a key compound in Shiraz wines with higher amounts in L. thermotolerans-S. cerevisiae sequential fermentation. In Blatina and Trnjak wines, no significant difference was detected in total terpene concentrations among variants, but among the detected individual terpenes, the concentrations of 1,8-terpin stood out, being significantly higher in Blatina, Trnjak, and also Frankovka wine samples produced by sequential fermentation. Neral concentrations were higher in Babić and Blatina wines, while nerol was present in higher concentrations in Frankovka and Trnjak sequential variants. Terpinen-4-ol was also among the compounds pointed out as ones whose concentration can be influenced by L. thermotolerans activity [11]; our data showed a significant increase in Blatina and Trnjak wines. Only in Frankovka control wines was total terpene concentration significantly higher compared to sequentially fermented wines, mainly due to the presence of linalool and citronellol, which showed higher ROCs (Table 2). In addition, as shown in Figure 1, significantly lower concentrations of citronellol were present in all sequential fermentation wines compared to the control. Comparing total C13-norisoprenoid concentrations, no significant influence of L. thermotolerans yeast was noted, regardless of variety, while only in Blatina control wines were higher concentrations of β-damascenone and TDN noted.
Higher Alcohols and Esters
Among fermentation aroma compounds, higher alcohols and esters can be strongly influenced by the type of yeast used and the fermentation conditions [4]. Concentrations of higher alcohols not exceeding 300 mg/L can positively influence the formation of wine complexity [30], which was not the case in our samples. Slightly higher concentrations were present in Frankovka and Trnjak wines, mainly due to 2-methyl-1-butanol content, but as can be seen from Table 2, with values under the odor detection threshold. In the analyzed red wines, total higher alcohol concentrations were significantly higher in control variants, except in Trnjak wines, where no marked differences were noted. There was a 13% lower total concentration of higher alcohols, with the greatest difference observed for isoamyl alcohol, when L. thermotolerans was used, which was also reported in [20]. Gobbi et al. [9] also reported that in sequential inoculation, L. thermotolerans reduced isoamyl alcohol and isobutanol concentrations. In our work, isoamyl alcohol reduction was also noted in all sequential variants, regardless of variety, while isobutanol concentrations differed according to variety, with higher concentrations in Babić, Frankovka, and Trnjak sequentially fermented wines and lower in Blatina wines. Escribano et al. [31] pointed out L. thermotolerans as a top 1-propanol and 1-hexanol producing species when a pure culture was used, while in our study, results differed between varieties. In Babić wines, sequential fermentation positively influenced 1-hexanol concentrations, in Trnjak there were no differences, while in Blatina and Frankovka control wines, higher concentrations were present. Among all higher alcohols detected, only 1-hexanol had an OAV above 1. Higher phenylethyl alcohol was detected in Babić and Trnjak sequentially produced wines, while in Blatina and Trnjak wines, higher concentrations were present in control wines. Similar results were presented in the work by Comitini et al. [27], where just one of the L. thermotolerans strains tested showed a statistically significant difference in phenylethyl alcohol concentration, while Benito et al. [20] noted that among the non-Saccharomyces yeasts tested, L. thermotolerans was the best producer of phenylethyl alcohol but with lower concentrations compared to fermentation by S. cerevisiae yeast. Chen et al. [32] observed in L. thermotolerans-conducted fermentation a decrease of approximately 15 mg/L of phenylethyl alcohol compared to the wines produced with S. cerevisiae yeast, while no differences were detected for 2-phenylethyl acetate. In our study, total ester concentrations were significantly higher in Babić and Frankovka sequentially fermented wines, while in Blatina and Trnjak wines, higher concentrations were in control variants. Babić and Trnjak sequentially produced wines had higher concentrations of 2-phenylethyl acetate, even though in the work by Chen et al. [32], no differences were noted. Isoamyl acetate stood out with significantly lower concentrations in all sequentially fermented wines, which is in accordance with data published in [9]. Among ethyl esters, the most abundant was ethyl lactate, whose concentrations were significantly higher in all wines produced by sequential fermentation as a result of the greater lactic acid production involved with L. thermotolerans, which is also in accordance with previously published data [32,33].
From the data presented in Figure 1, it can be noted that in sequentially fermented wines, esters and higher alcohols were mainly present in lower concentrations compared to control wines.
Fatty Acids
Initial must composition, as well as agricultural conditions and variety, can have a strong influence on fatty acids present in wine [34,35], which was confirmed by our data. In our work, total fatty acid concentrations were significantly higher in Babić and Frankovka sequentially fermented wines, mainly due to higher 2-methylpropionic acid concentrations, while in the other two, there was no difference. Fatty acid concentrations can also be significantly influenced by L. thermotolerans in combined fermentations, where lower production of hexanoic and octanoic acid was noted [27], which was also the case in Blatina, Frankovka, and Trnjak sequentially fermented wines. Babić wines produced with L. thermotolerans were the only ones with a higher concentration of isovaleric acid, which has been pointed out by previously published work [31] as one whose concentrations can also be influenced by the action of L. thermotolerans.
Lactones
Lactones mostly arise from the cyclization of the corresponding γ-hydroxycarboxylic acids, which are unstable molecules that can be formed by glutamic acid deamination and the decarboxylation process, pantolactone being an example [36][37][38]. Lactones may also come from grapes, as is the case in Riesling, where they contribute to the varietal aroma [39,40]. Our results show that γ-butyrolactone was the most abundant lactone in all analyzed wines, with significantly higher concentrations in Babić, Frankovka, and Trnjak sequentially produced wines in which also total lactone concentrations were significantly higher compared to control wines. In the work by Escribano et al. [31] L. thermotolerans was a higher γ-butyrolactone producer when compared with some non-Saccharomyces yeasts, but with no significant difference when compared to S. cerevisiae. Nakamura et al. [41] analyzed γ-nonalactone in 38 Californian and French wines, in which γ-nonalactone concentrations ranged from 0 to 16 µg/L in white samples and 12 to 43 µg/L in red ones. Concentrations of γ-nonalactone in our wines were in agreement with the results of Nakamura et al. [41] but significantly higher in Babić, Frankovka, and Trnjak control wines.
Volatile Phenols
Volatile phenols, such as guaiacol, eugenol, vanillin, 4-vinylguaiacol, and 4-vinylphenol, are relevant components of the hydrolysates obtained from fractions of precursors extracted from grapes or wines [42,43]. Among them, vinylguaiacol and vinylphenol can be formed by yeast phenolic acid decarboxylases or by enzymatic or acid hydrolyses of their glycosides, having a strong influence on wine quality if present at high levels [44]. Significantly higher concentrations of 4-vinylguaiacol and 4-vinylphenol were noted in Babić and Trnjak sequentially fermented wines but with no impact on wine sensory profile, as OAVs were <1. According to our data only eugenol concentrations were above the odor detection threshold, but with no significant differences between variants except in Babić wines, where control wines were more abundant. Even though the levels of vanillin derived from the grape cannot rival levels released by some types of oak wood, they can be released from a large number of grape precursors, for instance during enzymatic hydrolysates from grape berry skin or by oxidation of 4-vinylguaiacol [43]. Diversity in vanillin concentrations between V. vinifera aromatic varieties was also noted in work by D'Onofrio et al. [45]. In our work, higher concentrations of vanillin in Babić and Frankovka control wines may be connected with lower concentrations of 4-vinylguaiacol in the same ones.
Odor Active Values (OAVs) and Relative Odor Contributions (ROCs)
To evaluate the influence of individual volatile compounds on the overall aroma of each red variety of wine, OAVs and ROC indexes were calculated and are presented in Table 2. From a total of 122 compounds, only 17 exceeded the threshold values (OAV > 1). Among them, the most abundant were esters, with four individual compounds, followed by terpenes, aldehydes, and fatty acids, with three compounds each, and higher alcohols, volatile phenols, C13-norisoprenoids, and lactones, with only one compound each. In Babić wines, the highest OAV was that of β-damascenone, with no marked ROC differences between control and sequential fermentation wines. The use of L. thermotolerans in Babić wines positively influenced total ester, terpene, and fatty acid ROCs with higher ethyl hexanoate, linalool, hexanoic, and octanoic acid OAVs. The ROC of isoamyl acetate was noted in all wines, but especially in Frankovka and Babić control wines. Comparing OAVs in Blatina wines, the highest one was connected with β-damascenone, with higher values in control wine, while the strong influence of sequential fermentation was noted with the presence of aldehydes, especially decanal, which resulted in an almost 10% higher total ROC. Blatina wines produced with the use of L. thermotolerans also stood out with higher total fatty acid and total ester ROCs as well as γ-nonalactone and eugenol values. On the contrary, in Frankovka and Trnjak wines, the total ROC of esters, fatty acids, terpenes, γ-nonalactone, and eugenol was stronger in control variants, while L. thermotolerans positively influenced total aldehydes and β-damascenone OAV, especially in Trnjak wines.
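As a minimal illustrative sketch of how such indexes can be tabulated (not the authors' code), the snippet below computes OAVs as concentration divided by odor detection threshold and, under the assumption that ROC is each odor-active compound's percentage share of the summed OAVs, derives ROC values; all concentrations and thresholds shown are hypothetical placeholders.

```python
# Hypothetical example values in ug/L; replace with measured data.
concentrations = {"ethyl hexanoate": 450.0, "linalool": 30.0, "beta-damascenone": 2.5}
odor_thresholds = {"ethyl hexanoate": 14.0, "linalool": 25.0, "beta-damascenone": 0.05}

# Odor activity value: concentration divided by the odor detection threshold.
oav = {c: concentrations[c] / odor_thresholds[c] for c in concentrations}

# Keep only odor-active compounds (OAV > 1) and express each as a share of the total
# (assumed ROC definition; the paper's exact formula is not restated here).
active = {c: v for c, v in oav.items() if v > 1.0}
total = sum(active.values())
roc = {c: 100.0 * v / total for c, v in active.items()}

for c in active:
    print(f"{c}: OAV = {oav[c]:.1f}, ROC = {roc[c]:.1f}%")
```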
Conclusions
In conclusion, the data from the presented work pointed out the positive effects of L. thermotolerans yeast on overall wine composition, although they differed between the varieties used. For the first time, the influence of an identical sequential fermentation strategy was assessed in the production of wines from four different grape varieties. The resulting production of L-lactic acid regardless of primary grape must composition pointed out the use of L. thermotolerans as an effective acidification tool for the fermenting grape must as well as a possible path for reduction of wine alcohol content. In Babić wines, the strongest influence of sequential fermentation was connected with higher total terpene and total ester concentrations, mainly due to the higher farnesol, linalool, neral, geraniol, and geranyl acetate presence, and because the concentrations of nearly all ethyl esters, such as ethyl decanoate, ethyl octanoate, and ethyl hexanoate, were above the odor detection threshold. Blatina sequentially fermented wines can be singled out by higher concentrations of some individual terpenes, such as geraniol, geranyl acetate, neral, and 1,8 terpin, but a lower concentration of total esters and ethyl lactate, whose presence was significantly higher in Babić, Frankovka, and Trnjak wines. Significantly higher concentrations of ethyl lactate, together with some already mentioned individual terpenes, were present in Frankovka sequentially fermented wines, while Trnjak sequentially fermented wines stood out with higher total aldehyde, volatile phenol, and total lactone concentrations. Control wines, regardless of variety, stood out with higher concentrations of total higher alcohols, among them especially isoamyl alcohol. In addition, higher concentrations of citronellol, isoamyl acetate, and vanillin were defined in all control wines, as were total esters in Blatina and Frankovka wines. Thus, the most significantly different profiles between S. cerevisiae yeast fermentation and sequential fermentations were observed in total aldehyde, higher alcohol, ester, and terpene concentrations. Our data also showed that multivariate analysis of differences in the volatile aroma compounds can be a useful tool leading to an optimal selection of yeasts with the main purpose of producing high-quality varietal wines.
Supplementary Materials:
The following are available online at www.mdpi.com/2311-5637/7/1/4/s1, Table S1: Identification and quantification parameters for GC-MS analysis. The data presented in this study are available on request from the corresponding author. The data are not publicly available, as they are part of a PhD thesis.
Conflicts of Interest:
The authors declare no conflict of interest.
Discriminative correlation filters in robot vision
In less than ten years, deep neural networks have evolved into all-encompassing tools in multiple areas of science and engineering, due to their almost unreasonable effectiveness in modeling complex real-world relationships. In computer vision in particular, they have taken tasks such as object recognition, that were previously considered very difficult, and transformed them into everyday practical tools. However, neural networks have to be trained with supercomputers on massive datasets for hours or days, and this limits their ability to adjust to changing conditions. This thesis explores discriminative correlation filters, originally intended for tracking large objects in video, so-called visual object tracking. Unlike neural networks, these filters are small and can be quickly adapted to changes, with minimal data and computing power. At the same time, they can take advantage of the computing infrastructure developed for neural networks and operate within them. The main contributions in this thesis demonstrate the versatility and adaptability of correlation filters for various problems, while complementing the capabilities of deep neural networks. In the first problem, it is shown that when adopted to track small regions and points, they outperform the widely used Lucas-Kanade method, both in terms of robustness and precision. In the second problem, the correlation filters take on a completely new task. Here, they are used to tell different places apart, in a 16 by 16 square kilometer region of ocean near land. Given only a horizon profile, the coastline silhouette of islands and islets as seen from an ocean vessel, it is demonstrated that discriminative correlation filters can effectively distinguish between locations. In the third problem, it is shown how correlation filters can be applied to video object segmentation. This is the task of classifying individual pixels as belonging either to a target or the background, given a segmentation mask provided with the first video frame as the only guidance. It is also shown that discriminative correlation filters and deep neural networks complement each other; where the neural network processes the input video in a content-agnostic way, the filters adapt to specific target objects. The joint function is a real-time video object segmentation method. Finally, the segmentation method is extended beyond binary target/background classification to additionally consider distracting objects. This addresses the fundamental difficulty of coping with objects of similar appearance.
Introduction
Within the broader field of computer vision, much theoretical insight can be gained from computationally expensive methods with excellent performance. However, in the subfield of robot vision, it is crucial that methods can adapt to fast changes in dynamic environments.
The goal of this work was to develop efficient estimation and learning-based algorithms for robotic vision, striking a balance between the quality of results and computation speed. The main research is focused on discriminative correlation filters (DCFs). These have proven very successful in object tracking, but are here applied as adaptive classifiers in novel problems.
Compared to neural networks, DCFs are small and can be quickly adapted to changes, with minimal data and computing power. At the same time, they can take advantage of the computing infrastructure developed for neural net works and operate within them. The main contributions in this thesis demon strate their versatility and adaptability in various problems, while comple menting the capabilities of deep networks.
In the first problem, it is shown that when adopted to track small regions and points, they outperform the widely used Lucas-Kanade method, both in terms of robustness and precision.
In the second problem, the correlation filters take on a completely new task. Here, they are used to tell places apart, in a 16 by 16 square kilometer region of ocean near land. Given only a horizon profile the coast line sil houette of islands and islets as seen from an ocean vessel it is demonstrated that discriminative correlation filters can effectively distinguish between lo cations.
In the third problem, it is shown how correlation filters can be applied to video object segmentation. This is the task of classifying individual pixels as belonging either to a target or the background, given a segmentation mask provided with the first video frame as the only guidance. It is also shown that discriminative correlation filters and deep neural networks complement each other; where the neural network processes the input video in a content-agnostic way, the filters adapt to specific target objects. The joint function is a real-time video object segmentation method.
Finally, the segmentation method is extended beyond binary targetand background classification to additionally consider distracting objects. This addresses the fundamental difficulty of coping with objects of similar appear ance.
A major paradigm shift occurred in computer vision while the work presented here was carried out. This is reflected in this thesis by a transition from classical methods and optimization to neural networks and learning. Specifically, the first two papers of this thesis show the author's initial work on geometry and 3D reconstruction. From paper B (chapter 3) and onward, the work pivots to focus on classification with DCFs, and progressively moves towards approaches involving deep neural networks and learning.
Outline
Part I of this thesis presents both additional background and overviews of each of the published works, organized into separate chapters for the indi vidual applications. The intent is to outline the core ideas of each paper and provide additional context that is generally left out of the original publication.
Chapter 2 covers traditional concepts of geometric vision, providing back ground to optimal two and threeview triangulation, and introduces the fast threeview triangulation approach of paper A.
Chapter 3 describes discriminative correlation filters for visual object tracking, and outlines how they can be applied to point tracking as elabo rated on in paper B. This chapter marks the beginning of the transition from classical models to datadriven approaches.
Related to paper C, chapter 4 outlines a localization method whereby an observed horizon is matched to a map location with the use of discriminative correlation filters as feature matching operators. The chapter also details the work to transform a classical vision approach to horizon detection and seg mentation, into an equivalent solution based on neural networks.
Chapter 5 covers paper D, where discriminative correlation filters for vi sual object tracking are adapted into a fast method for video object segmen tation.
Chapter 6 introduces the ideas behind paper E, where correlation filters for video object segmentation are generalized to simultaneously recognize distractors in addition to the intended target.
Part II consists of the published editions of the five papers. The abstracts and details of the author's contributions to each one are provided below. The first of these, paper A, won the best paper award at the fourth IEEE International Workshop on Mobile Vision, held in conjunction with CVPR.
Abstract
Estimating the position of a 3dimensional world point given its 2 dimensional projections in a set of images is a key component in numerous computer vision systems. There are several methods dealing with this prob lem, ranging from suboptimal, linear least square triangulation in two views, to finding the world point that minimizes the L2reprojection error in three views. This leads to the statistically optimal estimate under the assumption of Gaussian noise. In this paper we present a solution to the optimal triangu lation in three views.
The standard approach for solving the threeview triangulation problem is to find a closedform solution. In contrast to this, we propose a new method based on an iterative scheme. The method is rigorously tested on both syn thetic and real image data with corresponding ground truth, on a midrange desktop PC and a Raspberry Pi, a lowend mobile platform.
We are able to improve the precision achieved by the closedform solvers and reach a speedup of two orders of magnitude compared to the current stateoftheart solver. In numbers, this amounts to around 300K triangula tions per second on the PC and 30K triangulations per second on Raspberry Pi.
Contributions
This work develops a method for three-view triangulation by optimization, that avoids considering the problem structure. It is found to be much faster and more stable than methods formulating and solving systems of equations. The simplicity makes it very fast on hardware for mobile applications.
The author contributed the software implementation, experiment design and execution and the majority of the writing.
Abstract
Discriminative Correlation Filters (DCF) have demonstrated excellent per formance for visual object tracking. The key to their success is the abil ity to efficiently exploit available negative data by including all shifted ver sions of a training sample. However, the underlying DCF formulation is re stricted to singleresolution feature maps, significantly limiting its potential. In this paper, we go beyond the conventional DCF framework and introduce a novel formulation for training continuous convolution filters. We employ an implicit interpolation model to pose the learning problem in the continu ous spatial domain. Our proposed formulation enables efficient integration of multiresolution deep feature maps, leading to superior results on three object tracking benchmarks: OTB2015 (+5.1% in mean OP), TempleColor (+4.6% in mean OP), and VOT2015 (20% relative reduction in failure rate). Additionally, our approach is capable of subpixel localization, crucial for the task of accurate feature point tracking. We also demonstrate the effectiveness of our learning formulation in extensive feature point tracking experiments.
Contributions
In this work it is shown that discriminative correlation filters designed for vi sual object tracking outperform LucasKanade point tracking in terms of ro bustness and precision. The author contributed the implementation, experi ment design and analysis, as well as the writing related to the point tracking.
360°panoramic image around the USV. We design a convolutional neural network (CNN) architecture to determine an approximate horizon line in the image and implicitly determine the camera orientation (the pitch and roll an gles). The panoramic image is warped to compensate for the camera orien tation and to generate an image from an approximately level camera. A sec ond CNN architecture is designed to extract the pixelwise horizon line in the warped image. The extracted horizon line is correlated with digital elevation model data in the Fourier domain using a minimum output sum of squared error correlation filter. Finally, we determine the location of the maximum correlation score over the search area to estimate the position of the USV. Comprehensive experiments are performed in field trials conducted over 3 days in the archipelago. Our approach provides excellent results by achieving robust position estimates with global positioning system (GPS)level accuracy in previously unvisited test areas.
Contributions
The paper included in the thesis is the journal extension of [17]. It demon strates the applicability of DCFs to navigation, exploiting their periodicity to match horizons. The author contributed significantly to the idea develop ment, the software implementation, experiment design and analysis. Both main authors contributed equally to the project.
Abstract
Video object segmentation (VOS) is a highly challenging problem since the initial mask, defining the target object, is only given at testtime. The main difficulty is to effectively handle appearance changes and similar background objects, while maintaining accurate segmentation. Most previous approaches finetune segmentation networks on the first frame, resulting in impractical framerates and risk of overfitting. More recent methods integrate generative target appearance models, but either achieve limited robustness or require large amounts of training data.
We propose a novel VOS architecture consisting of two network components. The target appearance model consists of a lightweight module, which is learned during the inference stage using fast optimization techniques to predict a coarse but robust target segmentation. The segmentation model is exclusively trained offline, designed to process the coarse scores into high-quality segmentation masks. Our method is fast, easily trainable and remains highly effective in cases of limited training data. We perform extensive experiments on the challenging YouTube-VOS and DAVIS datasets. Our network achieves favorable performance, while operating at higher frame rates compared to the state of the art.
Contributions
This paper generalizes visual object tracking with DCFs as used in for example paper B, to the video object segmentation task and at videorate speeds. The author initiated the project and shared equally the idea development, imple mentation, experiment design and execution and writing, with the coauthor.
Paper E: Distractor-aware video object segmentation
Andreas Robinson, Abdelrahman Eldesokey, and Michael Felsberg. "Distractor-aware video object segmentation." In: Submitted. 2021. This paper is submitted for review.
Abstract
Semisupervised video object segmentation is a challenging task that aims to segment a target throughout a video sequence given an initial mask at the first frame. Discriminative approaches have demonstrated competitive per formance on this task at a sensible complexity. These approaches typically formulate the problem as a oneversusone classification between the target and the background. However, in reality, a video sequence usually encom passes a target, background, and possibly other distracting objects. Those objects increase the risk of introducing false positives, especially if they share visual similarities with the target. Therefore, it is more effective to separate distractors from the background, and handle them independently.
We propose a oneversusmany scheme to address this situation by sepa rating distractors into their own class. This separation allows imposing spe cial attention to challenging regions that are most likely to degrade the per formance. We demonstrate the prominence of this formulation by modifying the learningwhattolearn [2] method to be distractoraware. Our proposed approach sets a new stateoftheart on the DAVIS val dataset, and improves over the baseline on the DAVIS testdev benchmark by 4.8 percentage points.
Contributions
This extends paper D (and paper [2]) to improve the robustness of the correlationfilter model, by additionally modeling distracting objects. The author initiated the project, contributed to the idea development, implemen tation, experiment design and execution and writing. Both main authors con tributed equally to this project.
Introduction
Triangulation is a fundamental computer vision task, with many applications such as 3Dreconstruction, mapping and localization. It takes its name from the triangle formed by the point to be triangulated and two locations from which the point is observed. Multiple methods exist, each designed with dif ferent goals in mind, such as ease of computation or correctness. This chapter will overview some of them, starting with a somewhat inaccurate but intuitive method, continuing with precise but slow approaches with closedform solu tions, and ending with that of paper A which trades algebraic correctness for speed.
Easy two-view triangulation
The ideal view of triangulation might be constructed as in figure 2.1a. Here, there are two pinhole cameras observing the point we wish to triangulate.
Rays from their centers are cast through the observations in the image plane into 3D space, where they intersect on the point. However, the actual view of triangulation is probably closer to figure 2.1b. Here there are multiple cameras looking at the point, and their observations are noisy so rays do not necessarily intersect. A simple and intuitive approach to two-view triangulation that does account for non-intersecting rays, is the midpoint method. Outlined in [19], the idea is to cast two rays from the centers of each camera P1 and P2 through the respective 2D observations x1 and x2. The midpoint between the two rays at their closest distance to one another is assigned to the triangulated 3D point X. This is easily solved as a least-squares problem with two unknowns in three equations.
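A minimal numpy sketch of that least-squares solution follows; it assumes the camera centers and unit ray directions have already been recovered from the camera matrices and observations, which is not shown here.

```python
import numpy as np

def midpoint_triangulate(c1, d1, c2, d2):
    """Midpoint triangulation of two (possibly non-intersecting) rays.

    c1, c2: camera centers (3,), d1, d2: unit ray directions (3,).
    Solves the least-squares system  t1*d1 - t2*d2 = c2 - c1  (three equations,
    two unknowns) and returns the midpoint of the closest points on the rays.
    """
    A = np.stack([d1, -d2], axis=1)                 # 3x2 system matrix
    b = c2 - c1
    (t1, t2), *_ = np.linalg.lstsq(A, b, rcond=None)
    p1 = c1 + t1 * d1                               # closest point on ray 1
    p2 = c2 + t2 * d2                               # closest point on ray 2
    return 0.5 * (p1 + p2)
```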
Optimal two-view triangulation
Although straightforward, the midpoint method does not necessarily minimize the reprojection error, d(P1X, x1)² + d(P2X, x2)², where d(PiX, xi) is the Euclidean distance in the image plane between the projection of PiX onto it, and the observation xi in it. In other words, it expresses the difference between what the cameras observed, and projections of X back into the cameras.
Hartley and Sturm offer an improvement over the midpoint method, with their polynomial method [19]. Unlike the midpoint method, it seeks to di rectly minimize the reprojection error, while requiring that the epipolar con straint is fulfilled.
To understand this constraint, first extend a ray from the camera center in one camera, through the observation x 1 of the point X. The ray is visible in the other camera's image plane as the epipolar line λ 2 (t). If the observation x 2 lies on λ 2 (t), i.e λ 2 (t) = x 2 for some t, the epipolar constraint holds.
Reformulating the problem in terms of the epipolar lines, Hartley and Sturm eventually arrive at a scalar cost expression s(t). The optimal t is one of the (at most three) minima of s, found among the roots of the 6th-order polynomial equation ∂s/∂t = 0.
Optimal three-view triangulation
The computational complexity of two-view triangulation seems quite manageable, both for the midpoint and polynomial methods. However, three views afford an advantage over two, in that an extra observation can improve the stability of the solution. Moreover, a third camera is an inexpensive addition to a robotic or mobile application, and there exist closed-form solutions similar to the polynomial method, even for this case. Given three 3×4 camera matrices Pi, observed 2D points xi and the sought 3D point X, Stewenius et al. [37] define the objective function C(X) = Σ_i d(PiX, xi)², which, analogously to the polynomial method, has minima at ∇X C = 0, and can be reorganized into a system of three 6th-order equations with three unknowns. This much larger system can be attacked with the Gröbner basis method [36], which rewrites it as a (usually) larger equation system, but with the same roots. However, Byröd et al. [5] note that this approach is poorly conditioned and requires 128-bit floating point numbers, which also makes it extremely slow. They develop modifications that simultaneously relax the constraints and reduce the size by dropping equations, at the expense of allowing some incorrect solutions. This drastically improves the speed and numerical stability and removes the need for high-precision floating point operations.
Fast three-view triangulation
The methods above are ordered by increasing triangulation accuracy, but also in order of increasing computational cost. Paper A details a three-view triangulation method intended to be much faster, and useful on mobile platforms like robots or mobile phones. Unlike the previous algorithms mentioned in this chapter, this is an iterative approach. To triangulate, it directly minimizes the reprojection cost C(X) above with a nonlinear least squares solver, with the starting solutions provided by the midpoint method. At the same time, the optimizer is not guaranteed to find a global optimum, but as will be demonstrated below, this may not be an issue.
The triangulation problem is closely related to bundle adjustment (BA), simultaneous camera optimization and triangulation, so a natural choice of solver would be the Levenberg-Marquardt (LM) algorithm. However, [23] suggests that Powell's dogleg (DL) method is more suitable for BA as it converges faster, with almost identical residual error. For this reason, we selected DL over LM, and the experiments indicate that it works well. The DL solver proved to be significantly faster than Byröd et al.'s closed-form approach, at 3.1 µs per triangulated point in single-thread execution on a standard desktop CPU.
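The idea can be illustrated with a short SciPy sketch; note that this is not the paper's implementation, and SciPy's "dogbox" solver is used here only as a readily available stand-in for Powell's dogleg.

```python
import numpy as np
from scipy.optimize import least_squares

def project(P, X):
    """Project a 3D point X with a 3x4 camera matrix P into 2D pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate_iterative(Ps, xs, X0):
    """Minimize the summed squared reprojection error over three views.

    Ps: list of three 3x4 camera matrices, xs: list of three observed 2D points,
    X0: starting solution, e.g. from the midpoint method.
    """
    def residuals(X):
        # One 2D residual per view: projection minus observation.
        return np.concatenate([project(P, X) - x for P, x in zip(Ps, xs)])

    # "dogbox" is a dogleg-type trust-region solver available in SciPy.
    result = least_squares(residuals, X0, method="dogbox")
    return result.x
```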
In the experiments in paper A, we test the iterative method on both synthetic and real point clouds and 2D observations, and contrast it to both the polynomial method and that of Byröd et al. Some of these results are shown below, represented here by one result each, on synthetic and real data.
The synthetic data includes a cube of random 3D points, projected into cameras placed on a circle observing it. Noise distributed as N(0, 1) [pixels] was added to the image 2D points. The real data is much harder. This is the Notre Dame dataset [35], a reconstruction of the famous cathedral from 715 tourist images. Ground truth 3D coordinates are obviously hard to come by here, so we compare to the points estimated from the complete 3D reconstruction, which optimized all observations and camera poses jointly. The utility of three-view triangulation over two views is directly apparent on the real data (bottom). Here the polynomial two-view method exhibits an order of magnitude greater 3D placement error, and much greater variance in the reprojection error, compared to the other methods. However, looking closer at the 3D placement error plot, it seems Byröd's method has significant failure cases, visible as a distinct mode around 10^0. It is not clear what is causing this, but one hypothesis is that these failures stem from the eliminated equations in the relaxed formulation. In summary, the experiments indicate that the additional accuracy allowed by three-view triangulation is worthwhile and can be obtained both extremely fast and accurately with an iterative solver.
Introduction
A triangulation method like that of paper A and chapter 2 is but one part of a 3D reconstruction or ego-motion (odometry) estimation system. It is also necessary to match visual landmarks, observed in different frames at different times. One way to do this is to follow them from frame to frame, perhaps with the Lucas-Kanade tracker [24]. The landmarks might be selected with the feature detector of Tomasi and Kanade [39]. Together these two methods form the Kanade-Lucas-Tomasi (KLT) tracker. This chapter will refer to this function, i.e. following many small regions all over an image, as point tracking.
However there exists a similar task, visual object tracking (VOT). The common use of VOT is to observe one or more larger objects (or targets) visi ble in video, estimate their evolving location and size and indicate this with a bounding box. Potential applications are varied, from video compression to traffic monitoring and of course object tracking in robotics. However, here and in paper B, VOT is adapted to point tracking, and its performance evalu ated.
The tracking process
We begin by giving a more detailed overview of a typical visual object tracking framework.
In the beginning, the tracker is given one video frame and a bounding box around the target to track. In figure 3.1, this is illustrated by the top-left image. The tracker is now supposed to update the bounding box as the target moves, even if the latter changes in appearance or size, or is partially occluded by something else. To do so, it begins by encoding the bounding box into an internal representation. For our purposes, this label function is a Gaussian centered on the target. In the figure, the label function is exemplified by white blobs, although their sizes are exaggerated for illustration purposes.
Next, the image and label function are stored together in a memory, which is subsequently read by the trainer. It uses this data to build a target model, which is what the tracker uses to recognize the target. It is trained to produce a new label function centered on the target's new location as new frames ar rive. In the figure, the input frames are illustrated as a stack of images in the bottom left.
As the tracking progresses, the view of the target and background will in evitably change in appearance. If the target model is not changed as well, it would eventually fail. To avoid this, new images and newly estimated label functions are periodically added to the memory, so that the trainer can up date the target model.
The tracker output is again bounding boxes, that are recovered from the label functions, as shown in the top right corner of the figure.
Discriminative correlation filters
In the past several years numerous tracking methods have appeared, that im plement their target models with discriminative correlation filters (DCFs). Rather than measuring similarity between a target and template as one typ ically does with correlation or in the KLTtracker, these filters are optimized to separate some observation of interest from everything else nearby. For tracking, this proved to be a very effective strategy which quickly advanced the stateoftheart.
MOSSE
The first such method was the Minimum Output Sum of Squared Error (MOSSE) adaptive filter [3], and the following will outline how it is trained.
To train the filter, assume there exist one or more tuples (xi, yi) of training data. In this case, xi are grayscale images containing views of the target to track and yi are label functions. As was mentioned earlier, yi are chosen to be Gaussian functions with their peaks centered on their corresponding targets.
Training the filter amounts to solving the optimization problem min_f Σ_i ‖xi ∗ f − yi‖². In other words, when convolving the input signal xi with the filter f, the output signal should be as similar to the label function yi as possible, in the least-squares sense.
Applying f to an image x with a view of a target the filter was trained on, but in an unknown location, we get a new label function, which we will refer to as a score map.
This is (ideally) also Gaussian-shaped, with its maximum centered on the target. In the simplest case, the target is found by locating the maximum response in s.
A key benefit of the MOSSE filter is that it is very fast if formulated in the Fourier domain, as element-wise multiplications there are equivalent to convolutions, and because there exists a closed-form optimal solution. There, the optimization problem is min_F Σ_i ‖Xi ⊙ F* − Yi‖², with the solution F* = (Σ_i Yi ⊙ Xi*) / (Σ_i Xi ⊙ Xi*), where * and ⊙ denote complex conjugate and element-wise multiplication, respectively, and capital letters denote the Fourier transforms of the corresponding signals. The score s of the filter response to the input signal x is given by S = X ⊙ F*. In other words, this low-cost Fourier-domain operation produces the filter response to x. The inverse Fourier transform of this response will produce a score that ideally is a Gaussian function with its peak at the target location.
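A minimal numpy sketch of this closed-form training and of applying the filter is given below; it is illustrative only, and the small regularizer added to the denominator is an implementation detail assumed here to avoid division by zero, not part of the formulation above.

```python
import numpy as np

def train_mosse(xs, ys, eps=1e-3):
    """Closed-form MOSSE training in the Fourier domain.

    xs: list of grayscale training patches, ys: matching Gaussian label images.
    Returns the conjugate filter F* as a complex array.
    """
    num = np.zeros_like(np.fft.fft2(xs[0]))
    den = np.zeros_like(num)
    for x, y in zip(xs, ys):
        X, Y = np.fft.fft2(x), np.fft.fft2(y)
        num += Y * np.conj(X)          # numerator:   sum of Y_i (elementwise) X_i*
        den += X * np.conj(X)          # denominator: sum of X_i (elementwise) X_i*
    return num / (den + eps)

def apply_mosse(F_conj, x):
    """Correlate a new patch with the filter and return the spatial score map."""
    score = np.real(np.fft.ifft2(np.fft.fft2(x) * F_conj))
    return score                       # the peak location estimates the target position
```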
Continuous correlation operators
Paper B describes another DCF-based approach to target tracking, CCOT, and its application to both object tracking and point tracking. This section outlines the idea of the CCOT target model. More details are found in paper B and in the dissertation of Martin Danelljan [9]. Paper B first defines an operator J mapping gridded pixels x onto a continuous domain, as J{x}(t) = Σ_n x[n] b(t − (T/N)n), where n is the spatial location of the pixel x[n] and b(t − (T/N)n) is a shifted cubic-spline basis function.
The score function is now continuous, and the optimal filter is again computed in the Fourier domain, where the expression involves B, the Fourier coefficients of the cubic-spline basis function b. Like before, the objective is to build a filter f such that when applied to an image x, it produces an output label s with its peak response on the center of x. However, the training set {(xi, yi)} is now extended as the tracker is running, with the most recent x and a new continuous Gaussian label y centered on the target.
In contrast to MOSSE, CCOT has additional sample-importance weights ai and a regularization term β. As new samples are added, they are assigned a fixed importance weight. At the same time, the weights of earlier samples are decayed exponentially at a constant rate. While simple, this learning and forgetting strategy has proven to be very effective, and is also used in paper D.
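A toy sketch of such an exponential learning-and-forgetting scheme is shown below; the decay rate, memory size and the choice to evict the lowest-weighted sample are all illustrative assumptions rather than the exact scheme used in CCOT.

```python
def update_memory(samples, weights, new_sample, learning_rate=0.01, max_samples=50):
    """Add a training sample and exponentially decay the importance of older ones."""
    weights = [w * (1.0 - learning_rate) for w in weights]  # forget older samples
    samples = samples + [new_sample]
    weights = weights + [learning_rate]                     # fixed weight for the new sample

    if len(samples) > max_samples:                          # keep memory bounded
        drop = weights.index(min(weights))
        samples.pop(drop)
        weights.pop(drop)

    total = sum(weights)
    weights = [w / total for w in weights]                  # keep weights normalized
    return samples, weights
```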
Finally, estimating the peak location with subpixel precision is now a two step process. First the coarse integer location is found on the pixel grid, then an optimization problem is set up to find the subpixel location with the max imal response.
Point tracking
This section outlines how CCOT was adapted to point tracking, and tested. Although an object tracking framework could be directly applied to points, some modifications are in order.
CCOT for object tracking trains its target models on features from a deep neural network [34]. It is certainly possible to use them here as well, but these are actually less than ideal in this particular application. First of all, deep feature maps generally have much lower resolution than the image itself, and this prevents small spatial details from being tracked, as they tend to blend into one another. Second, deep features are typically represented as high-dimensional vectors, and this is potentially computationally very expensive to work with when there are many points to track. Instead, we use gray pixel values. Not only is the resolution high, gray values are also minimally expensive to work with, and it makes fair and direct comparisons with MOSSE and the KLT tracker very easy, as both used gray values in their original formulations.
The other modification is the addition of a coarsetofine search pyramid in three levels, tracking the points at multiple scales, to allow for large frame toframe motion.
The resulting method is tested on the MPISintel dataset [4], a collection of video sequences from the animated short film Sintel. Although it is syn thetic, Sintel is rendered with realistic lighting, atmospheric effects, and both motion and focus blur to make it more challenging. This dataset is equipped with dense groundtruth optical flow, and is widely used for training and benchmarking dense optical flow methods, including the recent [38]. As our task is distinctly separate from dense optical flow, we initialize our ground truth starting points from the flow by first detecting good features to track [33]. The flow in these points is then accumulated forward in time to create the tracks. Unstable points on object boundaries are removed by checking whether the flows in their neighborhoods are divergent.
Our tests are performed on two variants of the CCOT tracker. The first has a memory and evolving target model, as with the VOT version. The second variant (named Ours-FF in figure 3.3 below) is retrained from scratch every frame and is consequently more prone to drifting. We compare the performance to the classic KLT tracker and MOSSE, both also employing three-level coarse-to-fine tracking to allow for large motions. The methods' accuracies are given by the endpoint errors (EPEs) of the tracked points, measured independently for every track, in every frame. EPE is commonly used to benchmark optical flow methods, and is simply the Euclidean distance between estimated and ground-truth point coordinates, measured in pixels.
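The two evaluation quantities are simple to compute; a small numpy sketch (not the evaluation code used in the paper) is given below, with the 3-pixel inlier threshold shown as an example.

```python
import numpy as np

def endpoint_errors(pred, gt):
    """Per-point Euclidean distance in pixels; pred and gt have shape (N, 2)."""
    return np.linalg.norm(pred - gt, axis=1)

def precision(epe, threshold):
    """Fraction of points whose endpoint error is below the threshold."""
    return float(np.mean(epe < threshold))

# Example usage with hypothetical arrays of tracked and ground-truth coordinates:
# epe = endpoint_errors(tracked_points, groundtruth_points)
# inlier_rate = precision(epe, 3.0)   # 3-pixel inlier threshold
```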
A qualitative example from Sintel, that compares inlier tracks (red) to the groundtruth (green) is shown in figure 3.2. Quantitative experimental re sults are shown in figure 3.3. The plot on the left contains the distribution of EPEs of the four methods over the dataset. The plot on the right is the precision, which in this case is the fraction of EPEs below a threshold indicated on the xaxis. The values in parenthesis are the average inlier EPE (left plot) and precision at the inlier threshold (right plot). A point is considered an inlier, if the EPE is less than three pixels.
From the figure, it is apparent that CCOT with filter updates has its largest fraction of EPEs well below 0.1 pixels. This appears to be the case for the KLT tracker as well, but the lower fraction and the precision plot reveal that it has a much longer tail of outlier tracks, suggesting that CCOT tracker is much more robust. The MOSSE tracker has worse error distribution, possibly because it does not have any subpixel refinement. Nevertheless, the precision plot reveals that it is still more robust than KLT, as its precision above one pixel is on par with CCOT.
Introduction
Historically people navigated long distances by looking at the stars and the sun. This has now mostly been replaced by GPS, but for shorter distances and familiar surroundings, we still look for known landmarks. When traveling on water near the coast, it makes sense to look towards the shore to determine your location. This is the topic of paper C, where we develop a method to look at the horizon and subsequently find the most likely location in a digital eleva tion model (DEM). A DEM is a specific kind of geographic map, where each pixel represents the altitude at the location covered by the pixel. However, the camera is on the ground, while the DEM provides a birdseye view and a drastically different perspective. Somehow both views must be transformed into representations that can be compared and this chapter outlines the steps taken and the considerations made to do so.
Starting from the ground view, it is clear that the first steps should involve both detecting and estimating the shape of the horizon. At this point in time, we may attack a vision problem with one of two categories of approaches: the classical, and the one built on neural networks. The strength of the former category is that methods generally need little or no training. The strength of the latter is that methods can be made very robust to irrelevant image details. Developing the localization method of paper C we found it useful to combine both, and developed two methods for the same purpose. The first method, built on classical techniques, was designed to provide training data for the second, trained method. These are outlined below. In paper C, the experiments are based on data captured in a 16 by 16 kilometer region in the Karlskrona archipelago in Sweden, and a DEM of the same region. The data included panoramic videos, GPS location tracks and compass readings collected from a small remotely operated boat, shown in figure 4.1. With the captured video we extract horizon profiles with the method of [15], which is outlined next. A horizon profile is the curve across an image where land and sea meet the sky.
Horizon regression with classical methods
In a panoramic video with a cylindrical projection model, the horizon will appear like an S-shaped curve across the frame, as shown in the top part of figure 4.2. This occurs when the world is projected onto a camera-aligned cylinder which is tilted relative to the surface of the ocean. If the observed horizons are reprojected onto an ocean-aligned cylinder, the S-shape will disappear.
It is reasonable to assume that the horizon profile has clearly defined edges between sky, land and sea, and a classic approach to finding edges is the Canny edge detector [8]. However, this detector will not only mark the hori zon, but also any other edgelike structures. To filter out spurious edges, [15] applies another classical approach, Hough voting, for curve detection [13]. The detected curve is expressed as the horizon's normal vector parameter ized as the camera's pitch and roll angles.
With the camera orientation estimated, the next step is to find the horizon profile. As the purpose of this method is to generate training data, it is acceptable to take a shortcut and exploit the camera's location and heading, as indicated by the GPS and onboard compass. With this information, a virtual camera is placed in the DEM. Rays are then traced out in every direction from the camera, as is illustrated in figure 4.3. The highest recorded elevations in the direction of each ray are subsequently organized into a virtual horizon. Finally, the camera pitch and roll parameters (θ, ϕ) as well as the heading ψ are found by minimizing the summed image-plane distances D(π(R(θ, ϕ)R(ψ)hi)) over the virtual-horizon points. Here, D is a function that measures the distance to the nearest edge, as determined by the Canny method, and hi are points on the virtual horizon. π is a projection operator mapping 3D space onto the image plane (cylinder), R(θ, ϕ) is a 3D rotation, and R(ψ) is effectively a horizontal shift in the image plane.
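The ray-tracing step can be sketched as below; this is an illustrative simplification in which the DEM is sampled on a regular grid along each ray and the profile stores the maximum tangent of the elevation angle, with the step size, range and grid-to-world mapping being placeholder assumptions.

```python
import numpy as np

def virtual_horizon(dem, resolution_m, cam_xy, cam_height_m,
                    num_directions=360, max_range_m=16000.0, step_m=10.0):
    """Ray-trace a virtual horizon profile from a digital elevation model.

    dem: 2D elevation array, resolution_m: meters per DEM pixel,
    cam_xy: camera position in meters, cam_height_m: camera height above sea level.
    Returns, for each heading, the maximum tangent of the elevation angle.
    """
    profile = np.zeros(num_directions)
    headings = np.linspace(0.0, 2.0 * np.pi, num_directions, endpoint=False)
    ranges = np.arange(step_m, max_range_m, step_m)
    for i, psi in enumerate(headings):
        xs = cam_xy[0] + ranges * np.cos(psi)
        ys = cam_xy[1] + ranges * np.sin(psi)
        cols = np.clip((xs / resolution_m).astype(int), 0, dem.shape[1] - 1)
        rows = np.clip((ys / resolution_m).astype(int), 0, dem.shape[0] - 1)
        elevations = dem[rows, cols]
        tangents = (elevations - cam_height_m) / ranges   # tan(theta) per sample
        profile[i] = tangents.max()
    return profile
```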
An example of the corrected horizon, with estimated horizon and water profiles, is visualized in the bottom part of figure 4.2. Further details of this procedure are found in paper C and in the dissertation of Bertil Grelsson [14].
Horizon regression by semantic segmentation
The classical approach outlined in the previous section can determine both camera orientation and the shape of the horizon profile. However, it is brittle, and requires a fair number of manual adjustments to produce quality results. This is why we now set out to replace it with neural networks, which are less sensitive to noise and appearance variations. But, before diving into this, a short digression is in order.
Feature extractors
As is probably well known, the AlexNet [22] neural network architecture won the classification category of the Large Scale Visual Recognition Challenge (ILSVRC) in 2012, where the goal is to correctly classify the dominating object in a large number of images in the ImageNet dataset [12]. AlexNet could provide the correct answer among its top-5 suggestions 83.6 percent of the time, which was a huge improvement over the previous winner [32] at 74.3 percent.
With its eight layers and 60 million parameters, this kickstarted the re search into largescale neural networks. Predictably, AlexNet was quickly su perseded by other architectures such as VGG [34] and ResNet [20]. In par ticular, a 152layer ResNet [20] managed to reach a top5 accuracy of 96.4 percent just three years after AlexNet. Subsequently, ResNets were found to be useful in a whole range of computer vision applications, including in paper C discussed in this chapter and papers D and E treated in chapters 5 and 6.
An outline of the ResNet architecture and its five stages is shown on the left-hand side of figure 4.4. Each stage, except the first one, is made up of multiple stacked residual blocks. A slightly simplified form of one such block is shown on the right-hand side. The residual block represents one of the key ideas of the architecture. To recognize an object in an image, a network must somehow transform descriptions of the object's color, shape and texture into more abstract representations of the object type. To do so, neurons need to gather visual information from an area large enough to cover at least part of the object, possibly the entire image depending on its size. However, a neuron capable of seeing a complete image would be very expensive to store and use, and so would the resulting activation maps (also known as feature maps). A practical solution is to progressively reduce the resolution as image features (pixels in the first layer) are transformed layer by layer, moving deeper into the network. At increasing depths, the ability to localize the object is lost, while semantic understanding is gained.
A task that needs to know about object appearances, including edges and simple shapes, can extract features from shallow layers at high spatial resolution. Another task more concerned with the object class can extract features with semantic meaning but lower resolution, from deep layers. When given a different purpose, the classifier at the bottom of the layer stack is generally removed. What is left is referred to as a feature extractor or a backbone.
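A minimal PyTorch/torchvision sketch of this kind of backbone is shown below; it is illustrative only (it is not the networks of paper C), and the input size is a made-up example chosen to show the 1/32 output resolution.

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained ResNet-50 turned into a feature extractor by dropping
# the global average pooling and the classifier at the end of the layer stack.
resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone = nn.Sequential(*list(resnet.children())[:-2])
backbone.eval()

with torch.no_grad():
    image = torch.randn(1, 3, 512, 1024)   # dummy input, e.g. a panoramic crop
    features = backbone(image)              # 2048 channels at 1/32 resolution
    print(features.shape)                    # torch.Size([1, 2048, 16, 32])
```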
Horizon attitude normalization and segmentation
We now return to the discussion on finding and regressing the horizon from a panoramic image. We keep the task organization of the classical method, and design one network for camera orientation estimation and another for horizon profile regression. Both are built on top of a ResNet50 backbone with the classifier removed, to supply deep features to our new designs from the input images.
One possible issue with these features, is that the backbone is trained on ImageNet to understand its one thousand classes of "things". We are more interested in amorphous regions of sky, water and land, also known as "stuff" [7], which are not part of the ImageNet classes. Nevertheless, it is reasonable to assume that ImageNet already has imagery with "stuff" in it, and it is likely beneficial for a classification network to understand the context (i.e. "stuff") where it correlates to some "thing". One relevant example could be that boats occur on water more than anywhere else. Given that the output feature maps have 2048d vectors, we assume that some portion of that feature space is dedicated to describe "stuff".
With deep features available, the next step is to decide how to design the two networks, and we start with the camera orientation network, in paper C referred to as HorizonFinder. It is provided panoramic images where the horizons are warped into S-curves, and is supposed to predict the two camera orientation angles so that the warping can be removed. A natural choice is to implement the output layer as a set of fully-connected neurons, that each see all backbone features at the same time. This is similar to how one would set up a network for image classification, though the output variables are continuous and not categorical. Now if the backbone features were applied directly to this output layer, its neurons would have to be constructed with several million weights each, even though the backbone feature maps are of much lower resolution than their corresponding input images (1/32nd of the original on both axes). Having this many weights is likely unnecessary and might even fail, as the concepts of sky, water and land must be understood as the S-curve from which angles can be extracted. Consequently, there is a greater chance of it working correctly if the transformation is allowed to span multiple network layers.
Finally, there is some auxiliary information that can help shape the new network and reduce the number of weights required in the last layer. First, it is safe to assume that there is a horizon profile stretching horizontally across the frame, and the network only needs to concern itself with one or two locations per column, where sky, land and water meet. Consequently, it should be possible to pool features in the same column and reduce the vertical resolution. At the same time, this will enlarge the spatial context available to subsequent neurons, potentially improving their performance. Second, with only three concepts (sky, land, water) to keep track of, we should be able to project them from the 2048d backbone feature space into one with considerably fewer dimensions.
The requirements and problem-specific circumstances outlined above can be compressed into a very simple design with only two convolutional layers, shown in figure 4.5. The network output angles are expressed as sine and cosine pairs, which limits the output range to [−1, 1] and entirely avoids the modulo 2π of raw angle values. The requirements on the second network, named HorizonSegmenter in paper C, are very similar to those of the first, although some adjustments are made. This network is outlined in figure 4.6. First, as the camera orientation angles are known, we can correct the individual images from the panoramic camera and segment those individually, without stitching. Second, it is useful to allow for more detail along the horizon. This is why two layers that double the resolution horizontally are inserted into the network. The output layer, finally, is a fully-connected network producing two sets of vertical pixel coordinates to mark the horizon and water profiles across the input frame.
Location estimation
With the two networks, we can now capture a scene from video and then ro bustly rectify and segment it into sky, land and water, and extract a hori zon profile. This is a vector h(ψ), with one element per direction ψ ∈ {0, δ, 2δ, . . . , 2π − δ}. Its values are the tangents of angles θ between sea level and land elevation, from the point in the world where the horizon is observed.
Horizon matching
The similarity of h(ψ) to virtual horizons is subsequently estimated with 1D MOSSE filters, and the location with the highest matching score is expected to be the correct one.
Each of the filters is locationspecific and is trained from a set of virtual horizons h i (ψ) rendered from a 50 by 50 m neighborhood around the loca tion, here indexed by i. This data augmentation acts as a regularization, re ducing the trained filter's sensitivity, allowing it to better match the query horizon profile despite slight variations in appearance. Similar to visual ob ject tracking, the target label is a Gaussian function that the filter is optimized to output.
An example map of filter responses centered on the true location in the archipelago DEM, is visualized in figure 4.7. Green and blue indicate ray traced land and water profiles as seen from the center of the map. Gray scale pixels indicate elevation in locations where no filter response has been eval uated.
Evaluated locations are colored red, with higher brightness indicating stronger matches, although the peak response in the center is significantly stronger than the visualization suggests. The actual response used here is not the raw filter output, but its peak-to-sidelobe energy ratio. This suppresses spurious detections in locations where the tallest peak is only barely stronger than the background.
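A simple sketch of a peak-to-sidelobe measure on a 1D filter response over headings is shown below; the exclusion window and the mean/standard-deviation normalization are illustrative choices and not necessarily the exact energy-ratio definition used in the paper.

```python
import numpy as np

def peak_to_sidelobe_ratio(response, exclude=5):
    """Compare the peak of a 1D response with the statistics of the rest of it.

    A small window around the peak is excluded before computing the sidelobe
    mean and spread, so that the peak itself does not inflate the statistics.
    """
    peak_idx = int(np.argmax(response))
    peak = response[peak_idx]
    mask = np.ones_like(response, dtype=bool)
    lo = max(peak_idx - exclude, 0)
    hi = min(peak_idx + exclude + 1, len(response))
    mask[lo:hi] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-8)
```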
Some additional considerations
To have the approach work well in practice, there are some small but critical issues that must be accounted for.
First, the amplitudes of both the computed and observed horizon profiles will depend on the distance to the land mass, which is analogous to an ob ject's scale in a 2D image. A MOSSE filter applied to visual object tracking is sensitive to scale changes, and that weakness is carried over to this problem. Far from land, mismatched query and template scales do not present much of an issue, but when approaching a shore, the observed horizon profiles will rapidly change in height.
However, a good way to normalize a horizon profile is to divide it by its maximum amplitude if any point hi(ψ) > 1.0. The condition prevents amplification of noise in places far from any landmass. With the normalization, the specificity of the filter response is greatly increased.
Second, a MOSSE filter in the Fourier domain is translation equivariant, and in this case the filters are equivariant to the direction ψ. Despite this, we encountered spurious peaks at other angles when attempting to find the correct yaw over all 360° of the filter response. For this reason, we restricted the search for a filter peak response to within a ±4° band around the compass heading. This band is wider than the expected deviation from the true heading of the actual compass used in our experiments, and could be made even wider if necessary.
Performance evaluation
The performance of the method was evaluated on separately collected video sequences and GPS tracks. Without groundtruth, the estimates were com pared to the GPS recording instead, and were found to have a mean deviation of 2.47 m and a standard deviation of 1.26 m. This is at least as accurate as a consumerlevel GPS device, within the archipelago testing area. A qualitative comparison is made in figure 4.8.
Video object segmentation
Introduction
Solving the image segmentation problem entails dividing images into coher ent regions. The semantic segmentation problem has the additional condi tion that the regions are defined by the kind of object found inside. Going one step further, video object segmentation (VOS) seeks to accurately seg ment out a moving object in video. The target can be of any class and have any appearance, which can change drastically as it moves, due to deforma tions, occlusions and lighting changes.
VOS currently comes in three flavors: unsupervised, semi-supervised and interactive, which differ by how the target is specified at the start. An unsupervised method is supposed to select a target by itself, for example by choosing the most salient object it can see. An interactive method expects a human operator to annotate the first frame by drawing sparse "scribbles" to roughly indicate where the target and background are. Dense segments are then derived from these hints. A semi-supervised method explicitly needs a full segmentation mask to be provided with the first video frame. This is the most generic of the three in that the mask generation is left unspecified, and is what will be discussed here.
VOS approaches
VOS has been around for a fairly long time; an early example is [18] from 1997, but we will ignore that and start from 2016 when DAVIS [28] (for Densely Annotated VIdeo Segmentation), the first of the two current ma jor datasets/benchmarks, was published and the first CNNbased approaches appeared.
An early method to appear alongside DAVIS is one-shot video object segmentation (OSVOS) [6]. This approach repurposes the VGG network [34] but removes the final classifier layer. After transfer learning on DAVIS, the network is fine-tuned during inference on the first video frame, to output the correct target segmentation mask. Although very simple, it yielded good results on the 2016 edition of the DAVIS benchmark. However, by its construction it cannot adapt to changes to the target appearance, and the size of the network itself makes it slow to initialize.
A later method that can adapt to changes is the reference-guided mask propagation (RGMP) [26] approach. The authors of this method introduced what they referred to as a Siamese encoder-decoder architecture, which is illustrated in figure 5.1. The figure does not show any detail of the internal workings of RGMP, but it outlines a structure that appears in several VOS methods, including that of paper D. On the left is an encoder network that processes the input, which consists of images and possibly segmentation masks, into deep features. The same network is applied to both the reference frame (top branch) and the target frame (bottom branch) to extract deep features.
The reference frame is the first image of the video sequence, together with a given mask to start the segmentation. The target frame is a later video image to be segmented. Features from both branches are subsequently merged in a method-specific model (center box), possibly producing output in the form of embedding vectors in some latent space, typically at lower resolution than the input video. These embeddings are then transformed by the decoder into a full-resolution mask as the output.
In the case of RGMP, the encoder is a ResNet50 network. The network is pretrained, but the first layer is extended to four input channels rather than three, to allow the starting target mask to be encoded alongside the RGB image. In the bottom branch, the video image is complemented with the mask propagated from the previous frame. Features from the reference branch are used to guide the RGMP model to attend to similar features in the new video frame.
A third example is the space-time memory (STM) VOS method [27]. It is the first use of transformers [40] in video object segmentation, and it also employs a Siamese encoder-decoder architecture. Unlike RGMP, however, STM maintains a memory of multiple image and mask pairs encoded into key-value tuples.
The target image, without a mask, is separately encoded into another key-value tuple. The two sources are combined with the transformer self-attention mechanism, where the memorized keys, in conjunction with the target image key, control the merging of the memorized and target values. Finally, the merged values are decoded into a new mask. The memory can subsequently be extended with newly predicted segments. This approach is very effective and has inspired multiple follow-ups, but the ResNet50 encoder network is trained from scratch, which requires additional training data.
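To make the memory-read idea concrete, the following is a heavily simplified PyTorch sketch of an attention-based read from a key-value memory; tensor shapes, the scaling factor and all names are illustrative assumptions and do not reproduce the actual STM implementation.

```python
import torch

def memory_read(mem_keys, mem_values, query_keys):
    """Simplified STM-style memory read (a sketch, not the exact STM code).

    mem_keys:    (N, C_k) keys of memorized frame/mask pairs (flattened pixels)
    mem_values:  (N, C_v) corresponding values
    query_keys:  (M, C_k) keys of the current target frame
    Returns (M, C_v): values retrieved for every query location.
    """
    scale = mem_keys.shape[1] ** 0.5
    attn = torch.softmax(query_keys @ mem_keys.t() / scale, dim=-1)
    return attn @ mem_values

mem_k, mem_v = torch.randn(1024, 64), torch.randn(1024, 256)
q_k = torch.randn(1024, 64)
print(memory_read(mem_k, mem_v, q_k).shape)  # torch.Size([1024, 256])
```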
Fast and robust target models
We now turn to the method of paper D, which generalizes visual object tracking into segmentation, with discriminative correlation filters to separate the target objects from the background.
Our VOS framework, shown in figure 5.2, merges the tracking structure into the encoder-decoder design outlined above. As with tracking, the trainer generates a target model from image features and a label function provided when a new target appears. However, in tracking it is sufficient to train the DCF with a Gaussian label function, so that it can produce a peak at the target location. We now replace it with a pixel-accurate mask, so that the DCF will produce similarly pixel-accurate segments. Unlike STM, however, our method does not encode masks alongside the image, as they are needed as label functions when training the target model.
As was mentioned earlier, tracking DCFs are implemented in the Fourier domain to reduce the number of calculations. However, the implied circular convolution causes wrap-around effects near the edges and is a significant drawback of that approach. These effects must be mitigated by windowing or by taking special care when training the filter, as was done in [10]. In addition, the Fourier transform is unfortunately rather inefficient on GPUs, which are needed to work efficiently with deep features.
Fortunately, individual deep features have significant discriminative power. There is no longer any need to depend on the appearance of the target and background over a spatial region, as some of this information is already encoded in a single deep feature vector. Consequently, there is little to gain from training a filter with significant spatial extent, and no longer any benefit to operating in the Fourier domain. The results in paper D show that a 3 × 3 filter is quite sufficient for this purpose, which of course is also very efficient on GPUs.
Additional efficiencies are gained by factorizing the target model into a projection W_p and the actual 3 × 3 filter W_f, so that the coarse score map is computed as H_W(x) = W_f * (W_p * x), where * denotes convolution. The factorization drastically reduces the number of trainable parameters, as W_p projects the 1024-dimensional deep features into 96 dimensions. After the parameters have been trained once, W_p is excluded from further training, which allows W_f to be trained even faster during subsequent updates.
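A minimal PyTorch sketch of such a factorized target model is shown below, using the channel sizes quoted above (1024-dimensional features projected to 96); the class and variable names are hypothetical and the sketch omits the training machinery.

```python
import torch
import torch.nn as nn

class FactorizedTargetModel(nn.Module):
    """Sketch of a factorized target model: a 1x1 projection W_p followed by
    a small 3x3 filter W_f, applied to deep backbone features."""

    def __init__(self, in_channels: int = 1024, mid_channels: int = 96):
        super().__init__()
        self.w_p = nn.Conv2d(in_channels, mid_channels, kernel_size=1, bias=False)
        self.w_f = nn.Conv2d(mid_channels, 1, kernel_size=3, padding=1, bias=False)

    def forward(self, x):
        return self.w_f(self.w_p(x))  # coarse single-channel target score map

features = torch.randn(1, 1024, 30, 52)   # e.g. 1/16-resolution ResNet features
scores = FactorizedTargetModel()(features)
print(scores.shape)                        # torch.Size([1, 1, 30, 52])
```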
Training the target model
As our DCF is now trained in the spatial domain, we have no neat closed-form optimal solution available. Instead, the filter is trained with an optimizer, using a quadratic loss function of the form L(W) = Σ_k ‖U(H_W(x_k)) − y_k‖². Here, U is a bilinear upsampling function that matches the spatial resolution of H_W(x_k) to the label function (segmentation mask) y_k, and W is shorthand for both filter parameters concatenated.
A question remains regarding the training, and that is the choice of optimizer to minimize the loss and train the filter parameters. One straightforward option is gradient descent, with the update W ← W − α∇_W L. However, in the development of the DiMP tracker [1], it was discovered that gradient descent training on deep features converged slowly. The authors opted for the much faster-converging Gauss-Newton optimization method, and we adopted this approach as well.
An interesting trick here is that the automatic differentiation machinery of a machine learning framework can be applied inside the optimizer. This simplified the implementation greatly, since the gradient functions did not have to be derived manually.
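The sketch below illustrates the quadratic loss being minimized through a framework's autograd; for brevity it uses plain gradient descent rather than the Gauss-Newton updates actually employed, and all names, channel sizes and step counts are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A stand-in target model: 1x1 projection followed by a 3x3 filter (see above).
target_model = nn.Sequential(
    nn.Conv2d(1024, 96, kernel_size=1, bias=False),
    nn.Conv2d(96, 1, kernel_size=3, padding=1, bias=False),
)

def train_target_model(model, feats, masks, steps=20, lr=1e-2):
    """Minimize the quadratic segmentation loss over a set of samples.

    feats: list of (1, C, H, W) feature maps; masks: list of (1, 1, 16H, 16W) labels.
    """
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.zeros(())
        for x, y in zip(feats, masks):
            # Upsample the coarse score map to the label resolution (the U above).
            s_up = F.interpolate(model(x), size=y.shape[-2:], mode='bilinear',
                                 align_corners=False)
            loss = loss + ((s_up - y) ** 2).mean()
        loss.backward()
        opt.step()
    return model

feats = [torch.randn(1, 1024, 30, 52)]
masks = [(torch.rand(1, 1, 480, 832) > 0.5).float()]
train_target_model(target_model, feats, masks)
```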
Recovering high resolution
A problem with the target model described above is that the output segmentation s = H_W(x) it generates is only 1/16th of the original size. This is the resolution of the deep features x provided to the DCF from the ResNet backbone network.
To recover the full resolution, we adapt the DFN semantic segmentation network [42], illustrated in figure 5.3. This network has four stages, each receiving an image feature map x_i, from a ResNet101 backbone in our case. The network generates its own attention to these features by globally pooling the output of the previous stage into a "channel-attention" vector. After progressing through the stages, from deep to shallow features of progressively higher resolution, the output is a segmentation map.

When DFN is applied to semantic segmentation, the network determines which of twenty classes it should assign to each pixel, without outside supervision. For our purposes, though, we would rather have the target model decide whether a pixel belongs to the target or the background. To do this we introduce a new block, the target segmentation encoder (TSE). At the input of each DFN stage, the TSE injects the low-resolution target segmentation mask s into the network, by resizing and concatenating it with the backbone features. This is illustrated for one decoder stage in figure 5.4.
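The following is a rough sketch of what a TSE block could look like; the channel sizes, fusion convolution and names are assumptions for illustration and are not taken from the DFN or paper D code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TargetSegmentationEncoder(nn.Module):
    """Sketch of a TSE block: the coarse target score map s is resized to the
    resolution of the backbone features x_i, concatenated with them, and fused
    by a convolution before entering the decoder stage."""

    def __init__(self, feat_channels: int, out_channels: int):
        super().__init__()
        self.fuse = nn.Conv2d(feat_channels + 1, out_channels, kernel_size=3, padding=1)

    def forward(self, x_i: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
        s_resized = F.interpolate(s, size=x_i.shape[-2:], mode='bilinear',
                                  align_corners=False)
        return self.fuse(torch.cat([x_i, s_resized], dim=1))

tse = TargetSegmentationEncoder(feat_channels=512, out_channels=256)
x_i = torch.randn(1, 512, 60, 104)   # a shallower backbone feature map
s = torch.randn(1, 1, 30, 52)        # coarse target score map from the DCF
print(tse(x_i, s).shape)             # torch.Size([1, 256, 60, 104])
```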
The output provided by the decoder is now of much higher resolution, but still only 1/4th of the original image size, so we upsample by a factor of four as a final step. In paper D, this is performed by a pair of convolutional layers and 2× bicubic interpolations, but it is replaced in paper E with the guided upsampler of [38].
Merging multiple objects
So far, we have only discussed the process of segmenting a single object, but what if there are multiple objects of interest at the same time? For this purpose we will now refer to s as a score map, and let the segmentation mask be a binary mask created from it.
First, let us consider the score map s produced by the decoder network as log-likelihoods. This means that each pixel of s is assumed to be the log-likelihood that the object is present at that location. These can be transformed into probability maps with p = σ(s), where σ is the sigmoid function, and the final segmentation mask is then given by p > 0.5.
To merge multiple objects i = {1, 2, ..., N} that were predicted in separate score maps s_i, [26] suggested that the softmax function could be applied to normalize multiple observations into probabilities. Softmax has previously been applied to both image classification and semantic segmentation as a way to merge independent predictions into per-class probabilities. For the purposes of video object segmentation, [26] defines the merging function as

p_i = (p̂_i / (1 − p̂_i)) / Σ_{j=0}^{N} (p̂_j / (1 − p̂_j)),

where p̂_i is the single-object likelihood (i.e. σ(s_i)) of object i. The background is given the class i = 0, but as there is no separate prediction for it, it is defined as p̂_0 = Π_j (1 − σ(s_j)), i.e. the probability that no other object is present.
After the probabilities have been determined, each pixel is finally assigned the object identity i of the class with the highest probability p_i, forming a map of merged segmentation labels {0, 1, ..., N}.
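A small PyTorch sketch of this merging step is given below, implementing the odds-normalization and per-pixel argmax described above; the clamping constant and function name are illustrative choices.

```python
import torch

def soft_aggregate(score_maps):
    """Merge per-object score maps (logits) into per-class probabilities and labels.

    Each object's sigmoid probability is converted to odds p/(1-p), the background
    probability is the product of (1 - p_i), and the odds are normalized across
    classes per pixel. Label 0 is background, labels 1..N are the objects.
    """
    eps = 1e-7
    p = torch.sigmoid(torch.stack(score_maps, dim=0)).clamp(eps, 1 - eps)  # (N, H, W)
    p0 = torch.prod(1.0 - p, dim=0, keepdim=True).clamp(eps, 1 - eps)      # background
    all_p = torch.cat([p0, p], dim=0)
    odds = all_p / (1.0 - all_p)
    probs = odds / odds.sum(dim=0, keepdim=True)
    return probs, probs.argmax(dim=0)

probs, labels = soft_aggregate([torch.randn(120, 213), torch.randn(120, 213)])
print(labels.unique())  # subset of {0, 1, 2}
```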
Performance evaluation
The methods mentioned so far are compared in figure 5.5. The y-axis score is the mean of two evaluation metrics adopted for video object segmentation. The first is the intersection-over-union metric, also known as the Jaccard index (abbreviated J), which measures how well the predicted segment area overlaps the ground-truth segment area. The second is referred to as the F-score and is intended to measure how well the predicted segment's edge adheres to the ground-truth edge. A perfect score in either case is 100 %. For more details, see [28].
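For reference, the region metric J can be computed as a plain intersection-over-union between binary masks, as in the short sketch below (the boundary-based F-score is more involved and omitted here); the convention for two empty masks is an assumption.

```python
import numpy as np

def jaccard(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-union (J) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return np.logical_and(pred, gt).sum() / union

print(jaccard(np.ones((4, 4)), np.eye(4)))  # 0.25
```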
At 76 percent, our method is approximately 7 percentage points below STM, but is significantly faster at 22 frames per second.

Paper D was subsequently extended into the learning-what-to-learn (LWL) method [2] for video object segmentation. The main feature of LWL is that, unlike the method in paper D, it can backpropagate through the target model trainer. Taking advantage of this, LWL employs a new neural network, the label encoder. It transforms the segmentation masks into multi-channel embeddings and provides those as training labels for the target model. These labels have the same resolution as the feature maps provided to the target model trainer which, as was mentioned earlier, is just 1/16th of the resolution of the segmentation masks. However, this does not cause any loss of fidelity.
A possible explanation of why this is the case is found in figure 6.1, showing a segment mask encoded into a multi-channel embedding. Higher-resolution details appear to have been encoded as the responses of directed filters. This is reasonable, as the decoder network is responsible for recovering fine detail.
Providing it with better edge information should improve the results. Another advantage of the multi-channel embeddings is the possibility to train the target model with additional information. We take advantage of this in paper E, to reduce the problem with distracting objects.
What distractors are, and why they would be a problem, is illustrated in figure 6.2. The top row shows frames from a video sequence with multiple skiers entering the frame from the right, one at a time, and leaving to the left. Each skier is associated with its own color in the segmentations in the middle and bottom rows. In the middle row, when the fourth target (blue) has appeared, it is confused with the earlier green and yellow targets. This is visualized by the multiple colors on the same segmentation mask, in particular in the fourth frame counting from the left. In comparison, the segments in the bottom row are generated by the method of paper E, and the identity problem in frame four has mostly disappeared.

To understand what is happening in the example, consider that the target model is trained to determine which side of a boundary in feature space (a hyperplane) a deep image feature falls on. Features from the target should be on one side of this boundary, and features that are not, on the other side. If the trainer had taken the distractors into account, the boundary would likely have been placed differently.
Incorporating distractors
There are many possible ways to mark distractors, including, e.g., hard example mining, where objects with deep features similar to the target are detected and added as distractors to the training of the target model. As the number of targets in a video can vary, so can the number of distractors. This is problematic, as deep neural networks are generally defined with a fixed number of inputs.
In paper E, we address both issues by defining the distractor segment of target i as the union of all other known targets j ≠ i. The segmentation map is then extended with a second channel to hold the distractor, before it is transformed into an embedding.
However, as the distractor of any one target is an amalgamation of all the other targets, it is reasonable to assume that it incorporates a more diverse set of deep features than a single object does. In addition, we know the target model has fairly limited expressive power, and it could be difficult for it to maintain a correct segmentation of the distractor over time.
Fortunately, we can sidestep these issues easily, in particular since perfectly accurate distractor masks are unnecessary. First, we can regenerate distractors from all newly predicted objects after a new frame has been processed. Second, the confidence of a pixel being a distractor need not be high, and it is acceptable to incorrectly classify a distractor as background and vice versa.
To merge targets into a distractor, we let the most certain predictions of target and background "win" in every pixel. Unlike softmax aggregation, this does not normalize the probabilities, so uncertain predictions can remain uncertain.
Distractor generation
More formally, we approach this as follows. Let p_ti(x) ∈ R^{H×W} be the target segment probability map of the target with index i ∈ I. This can either be the network decoder output passed through a sigmoid activation function, or set to zero or one from the ground-truth mask pixels if the target is new. Now let

p_max(x) = sup_j p_tj(x) and p_min(x) = inf_j p_tj(x), for all j ∈ I.    (6.1)

This merges the highest and lowest probabilities (per pixel) of all target maps into p_max and p_min. Then let L(x) ∈ (I ∪ {0})^{H×W} be the map of merged segmentation labels after softmax aggregation, with zero being the background label. Pixels where L(x) ≠ 0 indicate regions with some foreground object present; the distractor of target i is then assembled from the pixels labeled as other targets j ≠ i, letting the most certain per-pixel probabilities (p_max there, and p_min elsewhere) stand in for the distractor probability.
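The sketch below illustrates the per-pixel "most certain prediction wins" idea for building a distractor map; it is only an approximation of the construction in paper E, and the 0.5 threshold and function name are assumptions.

```python
import torch

def distractor_map(target_probs: torch.Tensor, i: int) -> torch.Tensor:
    """Simplified sketch of distractor generation for target i.

    target_probs: (N, H, W) per-target probability maps p_ti.
    Where the per-pixel argmax belongs to some other confident target j != i,
    the most certain foreground probability (p_max) is taken as the distractor;
    elsewhere the distractor probability stays low via p_min.
    """
    p_max, argmax = target_probs.max(dim=0)
    p_min, _ = target_probs.min(dim=0)
    other_target = (argmax != i) & (p_max > 0.5)
    return torch.where(other_target, p_max, p_min)

probs = torch.rand(3, 120, 213)      # three targets' probability maps
d0 = distractor_map(probs, i=0)      # distractor map for target 0
print(d0.shape)                       # torch.Size([120, 213])
```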
Training
To avoid unnecessarily over-constraining the training, we partially disable the computation of the loss in training samples with a single target and no distractors. Specifically, we require the distractor probability to be zero in areas under the target, but allow it to take any value elsewhere.
With this modified training loss, the decoder is allowed to spontaneously create its own distractor masks. An example of this is found in the "camel" sequence of DAVIS, shown in figure 6.3. In this video there is only one target, the camel in the foreground. Yet, when the camel in the background appears, it is marked as a distractor by the decoder network.
Performance evaluation
Finally, we can place papers D and E in the context of the state of the art. Figure 6.1 shows the improvement of the mean J and F scores over time. The methods were evaluated on the DAVIS 2017 validation split, and include both the methods mentioned before, i.e. OSVOS [6], RGMP [26], STM [27] and LWL [2], as well as the more recent method CFBI [41]. Both STM and CFBI were considered state of the art at the time of writing.
7
Concluding remarks
This chapter provides a summary of the results and reflects on possible paths forward.
Results
As was noted in the introduction chapter, this thesis has two distinct directions. Papers A and B move in the direction of geometry and 3D reconstruction, while the theme of papers B through E is to explore the uses of discriminative correlation filters for both low-level (e.g. segmentation) and high-level (e.g. localization) computer vision problems.

In paper A we showed how three-view triangulation through optimization can be made significantly more robust and precise, and yet be fast compared to algebraic approaches. These properties could perhaps allow the method to be employed as a faster alternative to bundle adjustment for 3D reconstruction. Camera pose estimation still requires joint optimization with observed landmarks, but it need not be performed over the full set of points.
In paper B we developed a highly robust and precise 2D point tracker based on discriminative correlation filters and demonstrated that it outperforms the Lucas-Kanade method. This tracker can be applied in the same context as the triangulation of paper A.
Moving away from geometry, we introduced a new application of DCFs by tackling localization in paper C. We showed how discriminative correlation filters are sufficiently powerful for precise place recognition in a large coastal region, given only horizon profiles. This approach can be applied as a backup solution when satellite navigation is unavailable.
Paper D introduced a third application for DCFs, generalizing visual object tracking to video object segmentation. In this paper it was demonstrated that a correlation filter can exploit deep features to robustly and efficiently classify target and background in video, while staying updated to appearance changes. As this method runs in real time, it can readily be employed in other contexts with minimal performance impact. Segmentation is a low-level vision task, and as such its main application would be as an attention mechanism to filter out unwanted visual features.
Paper E extended the discriminative model introduced in paper D to concurrently handle both a target and distracting objects in the same filter, which is useful in situations where multiple targets have very similar appearance.
Future work
With the papers in this thesis as a starting point, there are several interesting avenues of research to explore.
For example, as plain discriminative correlation filters are currently applied as linear classifiers, they are fast to train. However, the capability disparity compared to neural networks is quite large. It would be useful to study additional DCF formulations on the continuum between the very fast linear DCFs on one end, and powerful but slow neural networks on the other. For example, one possibility is to construct filters with residual branches, and determine whether this improves their discriminative power, and at what additional computational and data cost.
Another avenue of research, related to segmentation, is self-supervised detection and training of distracting targets. This would help the DCF maintain a minimal margin between classes over time and reduce the risk of target model drift.
On the applied side, it would be interesting to merge segmentation, tracking and triangulation to create real-time, dense 3D reconstructions of individual objects. If regions of interest are segmented out first, there should be fewer outliers in the reconstruction, leading to less wasted computation and more accurate results. Also, given deep features with semantic meaning, DCF-based segmentation could possibly be applied to part-based processing, potentially needed for dense 3D reconstruction of deformable or piecewise rigid objects.
With regard to the localization application, the current bottleneck is the brute-force matching. However, if we consider horizon profiles to be analogous to raw image pixels, it would be interesting to see whether the place recognition process can be moved into a space of deep-feature embeddings. Such representations could be more compact, better suited to the specific application, and very likely much faster.
Part II
Publications
Dependent Competing Failure Processes in Reliability Systems
This paper deals with a reliability system hit by three types of shocks ranked as harmless, critical, or extreme, depending on their magnitudes, being below H1, between H1 and H2, and above H2, respectively. The system's failure is caused by a single extreme shock or by a total of N critical shocks. In addition, the system fails under occurrences of M pairs of shocks with lags less than some δ (δ-shocks) in any order. The system thus fails when one of the three named cumulative damages occurs first, that is, due to the competition of the three associated shock processes. We obtain a closed-form joint distribution of the time-to-failure, shock count upon failure, δ-shock count, and cumulative damage to the system on failure, to name a few. In particular, the reliability function directly follows from the marginal distribution of the failure time. In a modified system, we restrict δ-shocks to those with small lags between consecutive harmful shocks. We treat the system as a generalized random walk process and use an embellished variant of discrete operational calculus developed in our earlier work. We demonstrate the analytical tractability of our formulas, which are also validated through Monte Carlo simulation.
1. Introduction
1.1. Competing Failure Processes
The term "competing failure processes" applies to systems periodically or continuously damaged by at least two factors.For example, a system can be hit by shocks of different magnitudes, so that one single extreme shock of a magnitude exceeding a threshold H can knock the system down.Or if two consecutive shocks land in the system within a very short period of time, say, with a time lag smaller than a δ, this can ruin the system as well.The second of two such shocks is referred to as a δ-shock.In this simple situation, the system fails when it is hit by an extreme shock or by a δ-shock, whichever of the two comes first.The time of the system's failure is referred to as the time-to-failure or lifetime of the system.Even though there is one single shock process, we deal with two different types of damages inflicted on the system.
More formally, suppose that the shocks land in the system at times τ_1, τ_2, ... with respective magnitudes W_1, W_2, .... Thus, a shock at τ_k is extreme when W_k > H, and the kth shock is a δ-shock if τ_k − τ_{k−1} < δ. The system fails at time τ_k if the kth shock is extreme or a δ-shock. In this case, τ_k is the lifetime of the system.
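A tiny Monte Carlo sketch of this simple model is given below; the exponential inter-shock times and magnitudes, the parameter values, and the convention that the first gap cannot form a δ-shock are illustrative assumptions, not part of the model definition.

```python
import numpy as np

rng = np.random.default_rng(0)

def lifetime_simple(H=3.0, delta=0.2, rate=1.0, w_scale=1.0, max_shocks=10_000):
    """Simulate the simple model: failure at the first extreme shock (W > H)
    or the first delta-shock (gap < delta), whichever comes first."""
    t_prev, t = 0.0, 0.0
    for k in range(1, max_shocks + 1):
        gap = rng.exponential(1.0 / rate)   # time between consecutive shocks
        t = t_prev + gap
        w = rng.exponential(w_scale)        # shock magnitude
        if w > H or (k > 1 and gap < delta):
            return t                        # lifetime tau_k
        t_prev = t
    return t

print(np.mean([lifetime_simple() for _ in range(10_000)]))  # estimated mean lifetime
```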
Other examples of DCFP include the degradation (aging) process that is represented by a monotone-increasing function, or a monotone-increasing or nondecreasing stochastic process φ, that runs until it crosses some fixed sustainability threshold D. If T (whether random or deterministic) is the crossing (first passage) time, then the system fails. Degradation can be accelerated by soft shocks that more quickly degrade the system, causing φ to cross D sooner, which may also occur upon the landing of one of the soft shocks that turns out to be fatal. Thus, the combination of the natural degradation and external soft shocks can represent process R_1. Other shocks can also hit the system, as described above for R_1, now forming process R_2, with some shocks being extreme. If the crossing of D occurs first and thus causes a so-called soft failure of the system, then R_1 wins. If an extreme shock (with magnitude W > H) hits the system first and causes a so-called hard failure, R_2 wins. Altogether, the system fails at some time τ_ρ, which is T or some time τ_k when an extreme shock occurs.
Note that in the general case, the magnitude W_k depends on τ_k − τ_{k−1}, for all k = 1, 2, ... (τ_0 = 0). Such a process (τ_k, W_k : k = 1, 2, ...) is called position-dependent, or a process with position-dependent marking. The latter adds yet additional dependence between R_1 and R_2.
The System under Study
Consider a reliability system periodically hit by random hard shocks of magnitudes W_1, W_2, ... taking place at respective times τ_1, τ_2, .... Some of these shocks are harmless, some are critical, and some can be singly fatal (usually called extreme). All shocks are classified into one of the three types depending on their magnitudes relative to two fixed critical thresholds, H_1 < H_2. The harmless shocks are those whose magnitudes W_k ≤ H_1. Shocks with magnitudes W_k ∈ (H_1, H_2] are critical, and any single shock of magnitude W_k > H_2 is extreme and thus fatal. The system fails instantly after being hit by a single extreme shock. However, it takes a total of N critical shocks, landing in any order, to knock the system down; the last, Nth critical, shock is fatal. In a nutshell, a shock is fatal if it is extreme or the Nth critical. Altogether, the system fails whenever it is hit by N critical shocks or by one extreme shock, whichever of the two events comes first. Note that regarding the critical shocks, this is not a run system, in which critical shocks must follow one another. In our case, the assumptions are looser, allowing the critical shocks to be mixed with harmless shocks and causing failure only when their total number reaches N. Further embellished, the system is refined in such a way that the harmless shocks are not that harmless after all. Namely, the system can also be fatally harmed if any two consecutive shocks (including those categorized as harmless) land with a time lag less than some δ > 0. The second shock is referred to as a δ-shock. Now we have three different forces that can trigger the system's failure:
(i) A total of N critical shocks.
(ii) One extreme shock.
(iii) Two consecutive shocks, with a time lag between them less than δ.
(iii′) An embellished variant of (iii) is due to the system's policy with a total of M δ-shocks. Note that M δ-shocks apply to multiple δ-shocks occurring in any order, even consecutively. For example, if M δ-shocks are consecutive, starting, say, at τ_{i+1}, the (i+1)st shock (deemed the first δ-shock) lands within a period of time less than δ counted from the ith shock at τ_i, followed by the (i+2)nd shock at τ_{i+2} with a time lag less than δ from τ_{i+1}, ..., followed by the (i+M)th shock at τ_{i+M} with a time lag less than δ from τ_{i+M−1}. An M-δ-shock model in which δ-shocks occur consecutively is called a δ-run model. These competing causes of failure are illustrated by the simulation sketch below.
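As a sanity check on these definitions, the following Monte Carlo sketch simulates a system with all three failure causes; the exponential distributions and the numeric thresholds are illustrative assumptions, and the first inter-shock gap is counted toward δ-shocks in line with the convention of a harmless shock at τ_0 = 0 adopted later.

```python
import numpy as np

rng = np.random.default_rng(1)

def lifetime_model1(N=5, M=2, H1=1.0, H2=3.0, delta=0.1, rate=1.0, w_scale=1.0):
    """Failure at the first of (i) N critical shocks (H1 < W <= H2),
    (ii) one extreme shock (W > H2), or (iii') M delta-shocks."""
    criticals = deltas = 0
    t = 0.0
    while True:
        gap = rng.exponential(1.0 / rate)
        t += gap
        w = rng.exponential(w_scale)
        if gap < delta:
            deltas += 1
        if H1 < w <= H2:
            criticals += 1
        if w > H2 or criticals >= N or deltas >= M:
            return t

print(np.mean([lifetime_model1() for _ in range(10_000)]))  # estimated mean lifetime
```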
Related Literature. For convenience, we break the entirety of the literature into four subsections whose contents may occasionally overlap.
DCFP
The systems with DCFP are more complex than those introduced in Section 1.1. Most work is focused on computing the reliability function R(t) = P(τ_ρ > t), t ≥ 0, and uses the total probability formula to arrive at R(t), which typically includes one or multiple series and integrals, with numerical results or Monte Carlo simulation used to compute special cases. Many such papers include interesting practical examples of complex devices in engineering and computer science where such DCFP take place and where an associated probabilistic analysis is needed.
For example, Che et al. [1] in 2018 studied a system with degradation driven by a monotone-increasing stochastic process intertwined with occasional soft shocks entering the system according to a marked Poisson process. That same process of soft shocks also hits other components but with different impacts, and those are referred to as hard shocks, some of which are extreme due to their magnitudes. The first such extreme shock knocks the system down unless the system fails earlier due to a combination of degradation and soft shocks.
A somewhat similar system was studied in 2018 by Zhang et al. [2], in 2017 by Hao et al. [3], in 2023 by Feng et al. [4] (where degradation is modeled by a gamma process), and in 2021 by Bian et al. [5], who dealt with a multicomponent system. In 2021, Sun et al. [6] studied yet another similar system, where, however, the degradation process is modeled by drifted Brownian motion (which is nonmonotone). Hao and Yang [7] in 2018 embellished Hao et al. [3], which they coauthored, by introducing hard failure thresholds and also adding a δ-shock policy to the competition.
An interesting modification of the above was proposed by Liu et al. [8] in 2017, in which the degradation process is rendered nonmonotone, attributing downhill directions to a self-healing mechanism.
In 2021, Lyu et al. [9] added a third competing process pertaining to the run shock policy. This condition renders the system failed when the magnitudes of k consecutive shocks exceed a critical threshold. Furthermore, when the total number of shocks attains a certain value, the degradation rate of soft failure changes. In addition, the shocks' interarrival times follow a phase-type distribution.
In 2022, Hao and Li [10] investigated DCFP applied to a single-component model, series, parallel, and mixed series-parallel models.
In 2019, Ranjkesh et al. [11] studied a DCFP system where the shock process is Poisson with position-dependent marking. In this system, there is no other degradation process besides the shocks, which accumulate until their cumulative damage crosses a fixed threshold. Another competing process is forged using the δ-shock principle. The authors approximate the system's reliability function.
In 2023, Dshalalow and Aljahani [12] studied an N-critical shock model competing with an aging process.
N-Critical Shock Models. As a DCFP with multiple processes involved, an N-critical model, along with aging and soft shocks, was studied in 2022 by Dshalalow and White [13]. The aging process was defined as linear with a deterministic slope, and it was combined with soft shocks that accelerated aging; such a cumulative aging process sooner or later crossed a sustainability threshold. The projection of such a crossing point was the soft failure. After this random point, say η, the system was deemed inoperational and shut off. The system could also fail if it was hit by one of the critical shocks, namely, by the Nth critical shock, say at the instant τ_ν. Thus, the system fails at time η ∧ τ_ν.
In 2012, Jiang et al. [14] studied a variant of such a system with aging, soft shocks (a cumulative shock model), and hard shocks. There are three thresholds, H_0 < H_1 < H_2, of which H_2 is "critical". It takes just one shock of a magnitude above H_2 to knock the system down. However, once N shocks cross H_0 (but not H_2), the threshold H_2 is downgraded to H_1, so that it now takes one H_1-critical shock (that is, of a lesser magnitude) to knock the system down. Meanwhile, aging, along with soft shocks, takes its course, and if the aging curve crosses some D, the system soft-fails, unless it fails earlier due to an extreme shock. Thus, while this system is not exactly an N-critical shock system, it carries some elements of the N-critical shock protocol.
An N-critical shock system was studied earlier by Cha and Finkelstein [15] in 2011, but with no aging. Wu et al. [16] in 2022 also studied an N-critical shock system with no aging, under the assumption that shocks arrive according to a Markov renewal process.
Most recently, an N-critical shock system appeared in 2023 in works by Wei et al. [17] and Dshalalow and Aljahani [12]. The authors of [17] also included shock-dependent maintenance. In [12], Dshalalow and Aljahani worked with an aging process driven by a nonspecified monotone-increasing function δ that crosses a threshold D at point T = δ^{−1}(D), which can be observed only with some random delay, that is, at some epoch of time when the system's failure can be verified. The system can fail earlier if it is hit by a total of N critical shocks. So, there is a combination of DCFP and N-critical shocks in one system. The authors of [12] arrive at closed-form functionals representing the joint distribution of the lifetime of the system, the overall damage to the system upon failure, and other characteristics, such as the prefailure time and the associated damage.
Run Shock Models. A run shock system is a special case of an N-critical shock system; compared to it, a run shock policy requires N consecutive critical shocks to knock down the system, whereas an N-critical shock policy allows N critical shocks to mix with noncritical shocks in any order. Furthermore, any consecutive occurrence of N critical shocks is not excluded from the N-critical shock protocol, and thus, formally, the N-critical policy is more relaxed than the run shock policy. For that reason alone, the N-critical policy seems to apply to a wider class of reliability systems.
Here is another shortcoming of the run shock policy. Suppose a system is hit by a run of N − 1 consecutive critical shocks followed by one noncritical shock, then followed by another run of N − 1 consecutive critical shocks and one noncritical shock, and so on. It seems likely that it takes a while (if ever) to come up against a run of N critical shocks before the system becomes "inoperational" as per the run shock nomenclature. It appears that in this situation, the system may become exhausted much earlier than at the assumed failure time in the run shock framework.
Yet, run shocks represent much earlier modeling with interesting analytics. An argument for run shocks was given in Mallor and Omey [18] in 2001: if applied to insurance claims, only a series of N consecutive claims large enough would raise flags. We think that N large claims in any order are sufficiently concerning and more realistic. Note that Mallor and Omey [18] were the first to introduce such systems; they also studied such a system in 2003 [19]. Various embellishments of run shock models were studied by Gong et al. [20] in 2018, Eryilmaz and Tekin [21] in 2019, Lyu et al. [9] in 2021, and Wen et al. [22] in 2022. Poursaeed [23] in 2021 studied a fairly complex multistate run shock system with different lengths of runs and different categories of failures.
δ-Shock Models. Also related to our system are δ-shock models. As already mentioned in the description of our model, the failure of the system is stipulated by the first occurrence of two consecutive shocks with a time lag of less than some fixed δ. This policy pertains to our model when M = 1. The plain δ-shock policy is often implemented whenever shock damages (or magnitudes) are hard to observe. A δ-shock model was first introduced by Li et al. [24] in 1999, followed by Li and Kong [25] in 2007 under the same assumptions, targeting the asymptotic behavior of the system when δ → 0. Another plain δ-shock model from the same period was analyzed by Tang and Lam [26] in 2006.
Embellishments of the δ-policy are seen in later works, like the article by Parvardeh and Balakrishnan [27], dated 2015. Here, the system is deemed to fail when (a) there is an occurrence of one δ-shock, or (b) the magnitude of any single extreme shock is larger than some H, whichever comes first. Eryilmaz [28] combined run shocks and M δ-shocks (that is, a run and δ-run model in one), which was a significant upgrade of [27], even though the paper by Eryilmaz appeared three years earlier, in 2012, compared to [27].
An interesting embellishment of Eryilmaz's δ-run model [28] was introduced by Jiang [29] in 2020. Such a system has N different failure thresholds δ_1 > δ_2 > ... > δ_N. If the time lag between two consecutive shocks lies in (δ_{i+1}, δ_i), i = 1, ..., N (with δ_{N+1} = 0), the system is associated with the ith failure type. The Nth type is irreparable and the whole system needs replacement, while the first N − 1 types allow repair.
Remark 1 (Some applications). While extreme shocks naturally occur in numerous real-world situations and in the reliability literature, δ-shock systems are slightly less popular, while N-critical models are especially rare. Yet such situations often arise in connection with various insurance claims or a combination of claims, citations, and violations. It is particularly apparent with car insurance. Each insured automobile driver knows that every incident, even one caused by another driver, triggers an unwanted citation, collectively crossing a specific threshold ending in cancellation of a policy, because the driver is deemed to pose a risk for the underwriter. Not every accident or incident (i.e., shock) is equal (shocks' magnitudes differ), but roughly a certain number of incidents deemed critical can cause a policy's cancellation. The time lag between such shocks can also play a major role. Typically, several incidents mixed with traffic violations occurring within short time intervals carry a higher risk of cancellation than the same number of such incidents spread over a longer period.
δ-shocks that occur in technology or electronic devices are regarded as more hazardous because they significantly reduce the chance for the system to partially recover after being hit by harmful shocks. Consider, for example, a car suspension system that is periodically hit by bumps or potholes. One such critical hit may require a small amount of maintenance. Yet even with maintenance, there is a limit to how many such hits the suspension can sustain before having to undergo a complete and costly replacement. Such hits become even more dangerous if they occur within time intervals short enough to deny the system an opportunity for partial maintenance.
The same applies to biological organisms like human bodies periodically traumatized by various diseases that wear out the immune system. Ailments occurring with shorter lags reduce the odds for our bodies to (even partially) recover, and thus such shocks become more life-threatening. One of the reasons why δ-shocks are alarming is that after each disorder (harmful shock), the immune system works hard on the body's recovery, whereas consecutive δ-shocks force the system to multitask.
Critical and δ-shocks often take place in the stock market. Any adverse action, such as proposing a controversial budget in Government chambers deemed harmful to the market or raising crude oil prices, can cause the market to stumble. Raising interest rates due to inflation, wars and the expectation of wars, or bad reports about major companies or sectors, to name a few, can be thought of as critical shocks for the market. On the other hand, an economic shock is harmless if it is just noise and can be easily identified using special mathematical tools. However, adverse economic or political events can have big impacts on market health, especially if they occur within short time periods, giving the market no opportunity to recover and increasing the risk of a serious crash.
Our Work. The system under study includes two models. In Model 1, we consider a random process of shocks that are categorized under four types: harmless shocks, critical shocks (with a total of N needed to ruin the system), extreme shocks of which only one is sufficient to knock the system down, and δ-shocks (when two consecutive shocks of any category hit the system within a time interval shorter than δ). We obtain a closed form of the joint distribution of the failure time τ_ρ, the shock count ρ upon the failure, the cumulative damage to the system upon the failure, and some other useful random characteristics, such as the prefailure time τ_{ρ−1} and the status of the system at τ_{ρ−1}. In particular, it gives the reliability function, which directly follows from the marginal distribution of τ_ρ.
In Model 2, we define a δ-shock as a consecutive critical shock. Consequently, the harmless shocks are excluded, or rather, bypassed. Thus, if two critical shocks, say at T_j and T_{j+1}, hit the system one after another, the time lag between them must be smaller than δ for a δ-shock to occur. There can be some or many harmless shocks enclosed between T_j and T_{j+1}, landing at times τ_s, τ_{s+1}, ..., but they are not counted as a threat to the system, even though their time lags are even shorter. Note that a shock at T_{j+1}, instead of being critical, can also be extreme and thus counted as a δ-shock. For Model 2, we then obtain similar characteristics.
Because we treat the system as a generalized random walk process and use an embellished variant of discrete operational calculus, our techniques differ from all others in the reliability literature. We demonstrate the analytical tractability of the results obtained through a number of special cases and marginal distributions, leading to compact and explicit expressions, and we discuss various examples. Furthermore, we validate our results by Monte Carlo simulation.
Formalism of Model 1
The current section deals with the modeling of a reliability system referred to as Model 1. Section 3 deals with formulas for the joint distribution of key characteristics of the shock process, including the prediction of the time-to-failure established in Theorem 1 (using fluctuation analysis of random walk processes), which by far exceeds what Model 1 originally targets. Sections 4-6 continue with Model 1, discussing various applications and special cases and validating the results associated with Theorem 1 by Monte Carlo simulation. Sections 7-10 deal with Model 2, which emerged from Model 1.
Let [W] denote the equivalence class of all stochastically equivalent r.v.'s on a given probability space (Ω, F, P), valued in R_+, such that W ∈ [W] represents the magnitude of some shock. Then, the sample space Ω can be partitioned into the three events
E_1 = {W ≤ H_1}, E_2 = {H_1 < W ≤ H_2}, E_3 = {W > H_2}. (1)
Let
Y = N 1_{E_3} + 1_{E_2}, (2)
with distribution
a = P(E_3), b = P(E_2), c = P(E_1). (3)
Here 1_A stands for the indicator function parametrized by a fixed set A that can be an event, as in (2).
Note that the way the r.v. Y is defined implies that one extreme shock, which ruins the system at once, has the strength of N critical shocks. Furthermore, it takes N critical shocks to make the system inoperational. That said, within a sequence Y_1, Y_2, ..., the system is immune to harmless shocks, that is, those whose respective Y's equal 0 (with probability c). The system is stricken by a total of N critical shocks, of which the fatal one lands at τ_{k_N}. The system can be ruined earlier, at some τ_n, if Y_n = N, corresponding to the first and only extreme shock.
Obviously, {Y = N} = E_3, {Y = 1} = E_2, and {Y = 0} = E_1. Then the probability generating function (pgf) of Y is
E z^Y = a z^N + b z + c. (4)
As mentioned, the impact of one extreme shock is equivalent to N critical shocks that occur in any order and are mixed with harmless shocks. In particular, when N = 1, Y = 1_{E_2 ∪ E_3}, eliminating the need for two thresholds H_1 and H_2 and making any critical shock equally extreme. The pgf of Y reduces to E z^Y = (a + b) z + c, making Y a Bernoulli r.v. with the events E_2 and E_3 merged. On the other hand, when N becomes very large, one extreme shock seems to be the more likely cause of the system's failure, although the latter also depends on a and b.
Suppose (τ_k : k = 1, 2, ...) is a point process on R_+, that is, Σ_{k=1}^∞ ε_{τ_k} (with ε_a a point mass at a) is a random measure representing the times when the shocks hit the system, such that almost surely (a.s.) τ_k → ∞ as k → ∞.
Let (W_k : k = 1, 2, ...) ⊆ [W] be a sequence of iid (independent and identically distributed) r.v.'s representing the magnitudes of shocks exerted on the system at the respective times (τ_k). The process of the shocks' times and magnitudes could be specified by a marked point process. However, if we want to easily distinguish the impacts of the shocks as per (1)-(4), we would rather turn to the auxiliary sequence (Y_k) associated with (W_k), specified in (2), and utilize the marked point process Σ_{k=1}^∞ Y_k ε_{τ_k}. Note that while the r.v. Y_k is closely related to W_k, it does not reveal the magnitude of W_k other than pointing out which category the shock with magnitude W_k belongs to. But the sequence {Y_k} carries enough information on {W_k} to lay the foundation for our forthcoming analysis of a discrete-valued random walk that we are going to employ throughout this paper, and thus it serves its purpose well.
To proceed further, we form the associated sequence of partial sums of {Y_k}, B_n = Y_1 + ... + Y_n, and define ν = inf{n ≥ 1 : B_n ≥ N}, where ν is the ruin index (also the nominal count of harmful hits exerted upon the system) in the absence of any other formal cause of the system's failure. We note that there is a situation when B_ν is strictly greater than N. It occurs when B_{ν−1} < N and Y_ν = N, that is, when W_ν > H_2. Thus, the system has accumulated B_{ν−1} critical shocks (short of N), and the shock at τ_ν turns out to be extreme (valued as equivalent to N critical shocks), implying that B_ν = B_{ν−1} + N, which can be strictly greater than N.
Observe that even if N is large, the system can still fail fairly soon, because one extreme shock carries the value N and knocks the system down on its first occurrence, while a sequence of critical shocks gets spread out over time if the probability b of a critical shock is small enough compared to a, making N such shocks unlikely to occur soon. So, the competition between critical and extreme shocks is more flexible than in a system under critical shocks alone, and it is driven by N as well as a and b. Furthermore, the system can also go under if there are M instances (in any order) when the time lag between any two consecutive (even harmless) shocks is less than some (small) real number δ. This is formalized as follows. Let X_i ∈ [X] such that
X_i = 1_{{∆_i < δ}}, where ∆_i = τ_i − τ_{i−1}, (7)
assuming that τ_0 = 0 (later on, we discuss other options for τ_0), that is, the lengths ∆_i of the intervals (τ_{i−1}, τ_i) are identically distributed as some r.v. ∆. The r.v. X_i is Bernoulli with marginal pgf E y^{X_i} = q y + p, where q = P{∆ < δ} and p = 1 − q. Thus, {X_i} is a sequence of i.i.d. Bernoulli r.v.'s counting δ-shocks. We would like to call the X_i's and Y_i's the shock identifiers.
Remark 2. From this point on, it makes sense to define X_0 = Y_0 = 0 a.s.; while it is clear why we set X_0 = 0 (because with M = 1, the condition X_0 = 1 would make the system instantly fail and bar it from any further development, which makes sense to avoid), with Y_0 = 0 we agree to have the system started with one harmless shock if we take into consideration Equations (2) and (3), where Y ∈ {0, 1, N} with the distribution {c, b, a}. Of course, we can replace the very rigid and impractical condition X_0 = 1 a.s. with P{X_0 = 1} = q instead. Concerning {Y_k}, we set Y_1, Y_2, ... ∈ [Y]. However, we define Y_0 = 0 with probability 1, that is, under the assumption that a = b = 0 and c = 1, which agrees with (2) and (3), and have Y_0 distributed differently from the rest of the Y_i's. Consequently, the process {(X_i, Y_i) : i ∈ N_0} is delayed renewal.
A benefit of such a setting is that X_1 is a δ-shock if |(τ_0, τ_1)| < δ, because with a harmless shock at τ_0, we ensure that X_1 = 1, in full agreement with its definition in (7), rather than sending a conflicting message about X_1 being 1 according to (7) if there is no shock at all at τ_0. Furthermore, it is also in agreement with a forthcoming formula in Corollary 1.
We also note that with Y_0 = 0, a harmless shock allegedly at τ_0 need not occur exactly at time τ_0, but at any time prior to τ_0, allowing us to keep a harmless shock on record and assign it to time τ_0.
The joint distribution of (X, ∆) is naturally obtained through the joint transform of X and ∆, whereas the marginal LST (Laplace-Stieltjes transform) of ∆ follows by setting y = 1. As for the common joint transform γ(y, z, θ) = E[y^X z^Y e^{−θ∆}] of the sequence (X_i, Y_i, ∆_i), i = 1, 2, ..., we assume that Y is independent of (X, ∆). That is, in the context of the marked point process (X_i, Y_i, ∆_i), i = 1, 2, ..., we assume position-independent marking. This means that the magnitude W_i of the ith shock at τ_i does not depend on ∆_i, a common assumption in many real-world reliability systems. Note, however, that this assumption does not hold if the τ_i's are random observations of the status of the system, in which case the X_i's and Y_i's at τ_i would strictly depend on the ∆_i's. Consequently, γ(y, z, θ) = E[z^Y] E[y^X e^{−θ∆}].

Example 1. To illustrate our settings in a practical formation of the joint distribution of ∆ and X, suppose the ∆-marginal distribution of (X, ∆) is exponential with parameter γ, that is, ∆ ∈ [Exp(γ)]. Then, from (12) and (11), the joint transform and the corresponding marginal transforms follow in closed form, as does the subcovariance of (X, ∆) from (19). The same construction as above can be applied to any absolutely continuous, a.s. positive r.v. with a density f. Of course, it would be preferable, although not mandatory, that the resulting integral yields a closed-form expression. For example, a gamma r.v. with parameters (α = r, β = γ), where r ∈ N, will do the job. Its density is f(x) = γ e^{−γx} (γx)^{r−1} / (r−1)!, implying that the integral can be easily computed because r ∈ N (for example, for r = 3, without loss of generality).
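As a worked instance of Example 1, the computation below (a sketch under the stated exponential assumption and position-independent marking) shows one way the joint transform of (X, ∆) and the δ-shock probability q can be obtained in closed form.

```latex
% Worked instance of Example 1: Delta ~ Exp(gamma), X = 1{Delta < delta}.
\[
  \mathbb{E}\bigl[y^{X} e^{-\theta \Delta}\bigr]
  = \int_{0}^{\delta} y\,\gamma e^{-(\gamma+\theta)t}\,dt
  + \int_{\delta}^{\infty} \gamma e^{-(\gamma+\theta)t}\,dt
  = \frac{\gamma}{\gamma+\theta}
    \Bigl[\, y\bigl(1 - e^{-(\gamma+\theta)\delta}\bigr) + e^{-(\gamma+\theta)\delta} \Bigr].
\]
% Setting theta = 0 recovers the marginal pgf of X, so that
% q = P{Delta < delta} = 1 - e^{-gamma*delta} and p = 1 - q = e^{-gamma*delta};
% setting y = 1 recovers the marginal LST gamma/(gamma + theta) of Delta.
```

By the assumed independence of Y from (X, ∆), multiplying this transform by a(z) = a z^N + b z + c then gives a fully explicit candidate for γ(y, z, θ) under these distributional assumptions.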
Now, with the sequence A_n = X_1 + ... + X_n of partial sums, we define the ruin index on M occasions of pairs of shocks hitting the system within a small time interval: µ = inf{n ≥ 1 : A_n = M}. Finally, the cumulative ruin index
ρ = µ ∧ ν (23)
forms the time-to-failure of the system τ_ρ or, equivalently, the lifetime of the system. Consequently, τ_ρ is the earliest time of an arriving shock when a total of M δ-shocks land in the system, or the number of critical shocks reaches N, or the arriving shock at τ_ρ is of magnitude W_ρ > H_2, whichever of the three named events comes first. Figure 1 below depicts the system's failure caused by one of two conditions: an occurring extreme shock or the number of critical shocks reaching N. Here, ρ = ν, because µ < M. In Figure 2, the cause of the failure is M δ-shocks at τ_µ occurring earlier than τ_ν (that is, earlier than the total of critical shocks reaching N or an extreme shock striking). Thus, ρ = µ.
Background on Discrete Operational Calculus and Its Use for Bivariate Marked Point Processes
In this section, we continue to formalize the above model and lay a foundation for its analysis, which goes back to Dshalalow's article [35] (preceded by even earlier work) and was further embellished in Dshalalow and White [36] as well as in this paper. Let (Ω, F, (F_t), P) be a filtered probability space. Given the sequence (X_k, Y_k) of shock identifiers at respective times (τ_k), we define the random measure (A, B, τ) = Σ_{k=0}^∞ (X_k, Y_k) ε_{τ_k} with bivariate marks, adapted to the filtration (F_t), representing the stream (τ_k) of shocks and their respective damages to the system. For example, given a set S ⊆ R_+, the r.v. (A, B, τ)(S) gives the total amount of casualties to the system on the time set S, deduced from the (X_k, Y_k)'s involving those k's for which τ_k ∈ S. Most pertinently, if S = [0, t], we can (at least theoretically) conclude whether or not the system remains operational by time t. Thus, {τ_k} is a sequence of stopping times relative to (F_t), and so is τ_ρ.
Our analysis will focus on the time-to-failure τ_ρ, that is, the time when the system fails either due to a fatality through a single extreme shock or due to the other damages by all critical or δ-shocks consolidated by τ_ρ. Thus, we target the principal portion of (A, B, τ) reduced to the interval [0, τ_ρ]. The main purpose is to find the joint distribution of τ_ρ and the cumulative damages to the system at τ_ρ, to assess the situation, for example, to see if the overall damage can be fixed and maintained or the system needs replacement. Perhaps it could also be reasonable to calibrate the associated thresholds H_1, H_2, and δ. An associated control or optimization is more readily feasible if the outcome yields closed-form functionals.
We therefore introduce the functional Φ_ρ(s, y, z, θ, u, v, ϑ; M, N) = E[s^ρ y^{A_ρ} z^{B_ρ} u^{A_{ρ−1}} v^{B_{ρ−1}} e^{−θτ_ρ − ϑτ_{ρ−1}}] to provide comprehensive information on the system at time τ_ρ, including ρ (the total shock count upon failure, including critical, harmless, extreme, and δ-shocks); A_ρ (≤ M), the total number of δ-shocks; and B_ρ, the number of critical and extreme shocks combined. With B_ρ < N, the system's failure is entirely due to A_ρ = M; with B_ρ = N, the system's failure is due to N critical shocks but no extreme shock; and with B_ρ > N, the system fails due to a combination of one extreme shock and critical shocks. Furthermore, if needed, the above functional also provides information on all named characteristics at time τ_{ρ−1}, that is, upon the epoch of the nonfatal shock preceding the one at the time-to-failure τ_ρ of the system. Other than Φ_ρ, of interest is also its restriction to the confined space (Ω, F ∩ {ν < µ}, P_{ν<µ}). It gives the status of the system that fails entirely due to critical shocks alone or a combination of critical shocks and one single extreme shock at τ_ρ, but not due to δ-shocks. More on this and other variants is established in Theorem 1.
We recall that the random measure (A, More specifically, (A, B, τ) is a marked delayed renewal process with position- dependent marking (although it is not required in the Model 1 setting, where we assume position independence).The latter means that (so far no assumption on the initial condition).(iv Throughout the rest of the paper, we use the D-operator and its calculus, introduced earlier in Dshalalow [35].The D-operator, like the differential operator, is parametric (with integer parameter k ∈ Z), defined as As rendered in calculus, where we rarely use the definition of the derivative, we make use of some properties of the D-operator (see [35,36]): (Di) D is a linear functional.(Dii) D k x (1(x)) = 1, where 1(x) = 1 for all x ∈ R. (Diii) Let g be an analytic function at zero.Then, it holds true that (Div) In particular, if j = k, we have be a marked random measure with position-dependent marking representing a delayed marked renewal process terminated at τ ρ such that the joint transforms of the respective increments of [since (Y 0 , ∆ 0 ) has a different distribution from (Y i , ∆ i )s], with the respective components Then the functionals Φ ρ , Φ µ>ν , Φ µ<ν , Φ µ≥ν , Φ µ≤ν satisfy the following formulas: where Proof.Introduce the following sequences of random indices: and Next, define the associated double sequences of functionals: s, y, z, θ, u, v, ϑ; p, q) + Φ µ(p)<ν(q) (s, y, z, θ, u, v, ϑ; p, q) +Φ µ(p)=ν(q) (s, y, z, θ, u, v, ϑ; p, q), (p, q) ∈ N 2 0 . (44) From ( 44), we first work on Φ µ(p)>ν(q) (s, y, z, θ, u, v, ϑ; p, q): To continue, we introduce operator D applied to a generic function N 0 , _ B(0, 1) ⊆ C, f , where _ B(0, 1) is a compact unit ball in C centered at zero, Note that the dummy index p attached to D is being used for convenience only to indicate which variable (if more than one) it applies to.It can be readily shown that D k of (Di) is the inverse operator of D that can revive f if we apply it for every k: Denote the composition Now the application of operator D pq to 1 {µ(p)=k,ν(q)=j} can be readily proven to yield Using Fubini's theorem and noticing that D pq is a linear operator, we obtain from (44)-(49), and by the independent increments property The convergence of the series is due to ∥γ(xyu, zvw, θ + ϑ)∥ < 1 as established in [36].Finally, we arrive at Formula (32), proving that Φ µ<ν (s, y, z, θ, u, v, ϑ; M, N) of ( 33) can be obtained from ( 32) and (50) by interchanging the roles of µ and ν.Thus, analogous to (50), yielding Formula (33).Now, to obtain (34), we use a similar routine.Lastly, Formula (31) for Φ ρ (s, y, z, θ, u, v, ϑ; M, N) = Es ρ y A ρ z B ρ u A ρ−1 v B ρ−1 e −θτ ρ −ϑτ ρ−1 follows from summing up expressions (32)- (34) as per (44) and a straightforward algebra.Formulas (35) and (36) are also subject to the summation of the pairs of (32), ( 34) and ( 33), (34), respectively.
Reduced Functional
In this section, we discuss special cases that not only are in agreement with popular settings in the reliability literature but are also reducible to very tame formulas, in support of our claim of closed-form expressions. First, we drop A_{ρ−1}, B_{ρ−1}, and τ_{ρ−1} from Φ_ρ (the reference parameters at the time of the shock prior to the system's failure at τ_ρ), even though they might be useful in some applications or even as stand-alone characteristics. Furthermore, we assume M = 1, rather than arbitrary. Under this constraint, the system reduces to the most common variant of δ-shock models. One possible shortcoming of this assumption is that a single instance of any two shocks hitting one after another within a very short time interval stands on par with seemingly more serious assaults by a single extreme shock or N critical shocks. A practical argument for employing this policy is that the system does not have to be destroyed due to a δ-shock (especially if the δ-shock is formed by two consecutive harmless shocks) but may be paused and evaluated for needed maintenance. Granted, a pair of two consecutive harmless shocks can be harmless, but this is hard to know, let alone that in various real-world systems the true magnitudes of shocks are impossible to even approximate; however, two shocks hitting one after another within a short time interval can often raise flags. Secondly, it is possible that at least one of the two shocks in a δ-pair is harmful. Thirdly, we address this issue when constructing Model 2 in Sections 7-10.
Main Formula
From (31), we drop A_{ρ−1}, B_{ρ−1}, and τ_{ρ−1}, reducing (31) to the formula for the joint distribution of the lifetime τ_ρ; the cumulative damages to the system A_ρ, B_ρ from critical, extreme, or δ-shocks (whichever of the three occurs first) at the failure time; and the total shock count ρ, namely (52).
Remark 3. Functionals Φ_ρ in Formulas (31) and (52) include a prehistory of the system (that is, prior to the current process of shocks lashing at the system from τ_0 until τ_ρ; plainly, prior to τ_0). It may pertain to a history of prior damages to the system until τ_0 that were not reset or repaired and thus had to be integrated as its initial condition. In particular, it can carry out crossings of lower threshold values M_0 and N_0. The historical information on the system is included in the initial distribution as the joint transform of X_0 (the δ-shock count), Y_0 (the number of critical shocks), and the duration τ_0 of the process observed from its inception. If any of the specified conditions of the system's failure at τ_0 are already met, it will be instantly detected by one of the D-operators pertaining to property (Div), with no further development past τ_0, because the system would be inoperational. Yet, to tame the underlying formulas in Theorem 1, we often set τ_0 = 0 and X_0 and Y_0 as constants or zeros, serving as sufficiently reasonable initial conditions for the system. As mentioned, however, more comprehensive data can include a full cycle of prior assaults and its outcome, which can conveniently be integrated by merging them, utilizing the flexibility of Formulas (31) and (52). This option in its most general form is always available, but it would extend our current work beyond its length, and we choose to postpone it.
For now, we reduce the historical process to X 0 = Y 0 = τ 0 = 0, implying that γ 0 = 1.That being said, with X 0 = 0, we have no prior δ-shocks but one harmless shock at τ 0 , as per our discussion in Remark 2. We recall that all Ys have distribution c = P{Y = 0}, b = P{Y = 1}, and a = P{Y = N}.We generally assume that a, b, c are positive.However, the latter applies only to Y 1 , Y 2 , . . ., and not to Y 0 which has a = b = 0 to enable a δ-shock at τ 1 with probability 1 in the event |(τ 0 , τ 1 )| < δ, as pointed out in Remark 2.
As pointed out in the beginning of this section, our next attempt to further reduce Φ_ρ is through setting M = 1. We checked the general case of M ≥ 1 and obtained fully explicit, although bulkier, formulas; consequently, we decided to postpone that case and finish it in a stand-alone paper. Now, with M = 1, (54) turns into its final variant.
To continue, we rewrite π in the indicated form. Then, applying D_w^{N−1} to π under (Div) gives the next expression. The next step is due to the following identity, a particular case of which will play a key role in the forthcoming sections. Thus, from (4), (11), (55) and (61), we arrive at (63), herewith a fully explicit expression.
Example 2. The functional φ_ρ(s, y, z, θ; N) = Es^ρ y^{A_ρ} z^{B_ρ} e^{−θτ_ρ} in (63) represents a closed-form expression, which is obvious, and it is reducible to a fully explicit formula once α(θ) and β(θ) are specified. We turn to Example 1, whose α(θ) and β(θ) can be substituted into (63), while a(z) = az^N + bz + c is all set.
Marginal Distributions and Means
From Formula (63) for the joint distribution of the time-to-failure τ ρ and other characteristics of system's failure, we obtain marginal transforms starting with τ ρ .
Time-to-Failure
For s = y = z = 1, we arrive at the marginal transform of τ_ρ, with the principal part as per (61).
The mean of τ_ρ can be easily derived from (62), (64) and (65). Figure 3 below depicts Eτ_ρ(N) in N, ranging from 1 to 100 with four different scales, allowing us to see with what speed Eτ_ρ approaches a constant value. It looks like it reaches equilibrium for N around 50 under a fixed choice of main parameters. Figure 4 takes on Eτ_ρ as a function of 1/γ in the interval (0.1, 10) for four different fixed Ns: 5, 10, 20, 30. Recall that 1/γ is the mean time between any two consecutive shocks. The rest of the parameters are fixed. We see that Eτ_ρ(1/γ) is monotone-decreasing.
Assessment of (66)
We render a quick verification in (66) that (bβ/(1 − cβ))^N < 1 under the assumptions that 0 < a and 0 < α. Indeed, let * be one of the relations <, ≤, =, >, ≥. Then, because cβ < 1 (or else a = 0, contradicting our assumption), it follows that relation * is < and thus (67) holds. Because of (67), Eτ_ρ is monotone-increasing in N, with the respective largest values attained in the limit. The mean length of the lifetime τ_ρ depends on the mean interarrival time 1/γ of shocks and on α = P{∆ < δ} and β = 1 − α, but more on α. With α small and β large, α + aβ gets smaller, and thus 1/(γ(α + aβ)) gets generally larger. This is because of a lesser impact of δ-shocks and the competition running more between extreme and critical shocks, with a lesser chance to be interrupted by a single δ-shock.
With N large, as per version (69), Eτ ρ is dominated by α alone, where the competition runs entirely between extreme and δ-shocks.Thus, with α small, γ fixed, Eτ ρ largely depends on a single extreme shock.Of course, Eτ ρ in all cases can be made arbitrarily long by decreasing γ and at the same time making any δ-shock's occurrence unlikely.
Remark 4.
Because the key result of Theorem 1 is exclusively established for a finite N, one needs to take extreme caution with N → ∞.In particular, some interpretations under N = ∞ may be even inaccurate or contradictory.The meaning of N = ∞ in the context of the D-operator at the center of Theorem 1 is reminiscent of improper integrals, which circumvent a rigorous Riemann-Darboux construction on compact intervals and sometimes disagree with direct and Lebesgue integrals yet are often used.For that reason, it would be safer to reason with an asymptotic behavior of respective quantities involved under N very large rather than N = ∞.
δ-Shocks Count
From (63), with s = z = 1 and θ = 0, and using (62), we obtain the PGF of the δ-shock count prior to the system's failure, given in (70) and (71). Formulas (70) and (71) can be rewritten so as to imply that A_ρ is Bernoulli with parameter αΠ, which is also the mean of A_ρ. Obviously, the mean of A_ρ is strictly less than 1.
In a nutshell, EA_ρ = αΠ. (73) Figure 5 presents five plots of EA_ρ(α), comparing them under five different fixed N values. Recall that α = P{∆ < δ} = 1 − e^{−γδ} when ∆ ∈ [Exp(γ)]. To plot the five graphs, we did not specify γ and δ. However, we can keep γ fixed, say 1/γ = 10, and vary δ in accord with α. Obviously, δ = −(1/γ) ln(1 − α) is monotone increasing in α with γ fixed, and so is EA_ρ(α). Consequently, it becomes increasingly more likely to ruin the system with a δ-shock against critical and extreme shocks, and we see in the plots below that EA_ρ approaches 1 under Ns ranging from 1 to 10.
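As a quick numerical illustration of the relation between α, δ, and γ used above (our numbers, for illustration only; they rely solely on the identity α = 1 − e^{−γδ}):

```latex
% With the mean interarrival time 1/\gamma = 10 (so \gamma = 0.1) and a target \alpha = 0.5:
\delta = -\tfrac{1}{\gamma}\ln(1-\alpha) = 10\ln 2 \approx 6.93;
% conversely, fixing \delta = 1 with the same \gamma gives
\alpha = 1 - e^{-\gamma\delta} = 1 - e^{-0.1} \approx 0.095 .
```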
Total Shock Count
From (63), with y = z = 1 and θ = 0, and with the principal part as per (61), we obtain the PGF of ρ in (77) and (78). The mean of ρ can then be easily derived from (62), (77) and (78). Figures 6 and 7 depict Eρ as a function of N with different fixed a, b, c and scales of N.
Monte Carlo Simulation of the Process
We next render Monte Carlo simulations of the full stochastic process under some specified special cases and compare the empirical means with those derived above, as a demonstration that the results match the empirical findings. In each case below, we assume the times between shocks are exponential (γ), and we make numerical assumptions about the parameters, including the parameter γ of the time between shocks, the time δ, the probabilities of each failure type (a, b, and c), the δ-shock threshold M = 1, and the critical shock threshold N.
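For reference, a minimal sketch of how one such path can be simulated (our code and naming, not the authors'; it only encodes the Model 1 rules stated earlier: exponential interarrival times, shock types drawn with probabilities a, b, c, the δ-shock rule for consecutive shocks, M = 1, and the threshold N):

```python
import random

def simulate_model1(gamma=1.0, delta=1.0, a=0.5, b=0.4, c=0.1, N=2, rng=random):
    """Simulate one path of the Model 1 shock process with M = 1.
    Returns (tau_rho, rho, A_rho, B_rho): failure time, total shock count,
    delta-shock count and the sum of critical/extreme identifiers at failure."""
    t = 0.0   # tau_0 = 0; a harmless shock is assumed to land here
    A = 0     # delta-shock count (M = 1, so the first delta-shock is fatal)
    B = 0     # cumulative identifiers: +1 per critical shock, +N per extreme shock
    n = 0     # shock counter rho
    while True:
        gap = rng.expovariate(gamma)   # interarrival time Delta ~ Exp(gamma)
        t += gap
        n += 1
        if gap < delta:                # consecutive shocks closer than delta -> delta-shock
            A += 1
        u = rng.random()               # classify the shock: extreme / critical / harmless
        if u < a:
            B += N                     # extreme shock (identifier N)
        elif u < a + b:
            B += 1                     # critical shock (identifier 1)
        # else: harmless shock, contributes nothing to B
        if A >= 1 or B >= N:           # failure: delta-shock, Nth critical, or extreme
            return t, n, A, B

def estimate_means(n_paths=100_000, **kwargs):
    """Crude estimates of E[tau_rho], E[rho], E[A_rho], E[B_rho]."""
    sums = [0.0, 0.0, 0.0, 0.0]
    for _ in range(n_paths):
        for i, value in enumerate(simulate_model1(**kwargs)):
            sums[i] += value
    return [s / n_paths for s in sums]
```

Averaging such paths (e.g., 100,000 of them, as below) yields the sample means that are compared with the analytical predictions.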
For the first set of experiments, we set γ = δ = 1, M = 1, and N = 2. Figures 8-11 below show a comparison of predicted and estimated means of the failure time τ_ρ, the number of shocks ρ, the δ-shocks A_ρ upon failure, and the (N × Extreme + Critical) shocks B_ρ, respectively.
We display three heat maps: predicted mean, sample mean, and absolute difference for each set of probabilities.In every parameter set tested, the error between the true predicted mean and sample mean is less than 0.002, providing a good validation of the predictions derived above.
Note the predicted means of the failure time τ_ρ in Figure 8 and of the shocks ρ in Figure 9 are identical since E∆ = 1 in this case. We notice τ_ρ, ρ, and A_ρ have broadly the same pattern: an increase to the extreme shock probability a results in smaller means. This makes sense because a high a indicates a high probability that a single shock knocks down the system, so fewer total shocks are likely to occur over less time, with fewer opportunities for δ-shocks. More subtly, increasing b has a negative impact on the means for constant a because it increases the chance of critical shock failures in fewer total shocks, reducing all three means (see Figure 10). The trend for B_ρ is drastically different: B_ρ is positively related to both the extreme shock probability a and the critical shock probability b. This makes sense: if a or b increases, each shock is more likely to be extreme or critical, each of which adds to B_ρ. Further, when these probabilities are low, δ-shock failures become more likely, in which case B_ρ tends to be smaller (see Figure 11). In addition, we perform some simulations where δ = 1, (a, b, c) = (0.5, 0.4, 0.1), and M = 1. Furthermore, we vary the waiting time parameter γ and the critical/extreme shock threshold N as (γ, N) ∈ {0.1, 0.2, . . ., 2} × {1, 3, 5, 10}.
Sample means here are based on 100,000 simulated paths for every pair (γ, N). Figure 12 below shows the predicted and estimated means of the number of shocks ρ, δ-shocks A ρ upon failure, (N × Extreme + Critical) shocks B ρ , and failure time τ ρ .As is seen, the dots (empirical) align precisely with the means derived above and run on a much denser mesh of γ values to form smooth curves, providing additional validation.
As expected, the mean failure time τ_ρ always decreases as shocks become more frequent (larger γ). In addition, more frequent shocks make δ-shocks more common, so A_ρ grows with γ. The means of ρ and B_ρ are also inversely related to γ, since more frequent shocks make δ-shock failures more common, so there is a reduction in the mean number of shocks at failure time and, hence, in B_ρ as well.
When the System Fails Prior to a δ-Shock
We already said in Section 4 that a single δ-shock need not necessarily ruin the system, but it can; while the occurrence of a δ-shock may not sound convincing enough to suggest the system becomes inoperational, any such event is worth checking out, and so the system can be fixed if needed. We are interested in estimating the probability that the system fails through a single extreme shock or multiple critical shocks or their combination before any δ-shock takes place. Thus, we turn to functional (32) of Theorem 1 and reduce it under the same assumptions as for Φ_ρ made in Section 4. So, the following will be assumed. Hence, we arrive at the functional φ_{µ>ν}, which is very similar to φ_ρ of (56), with the same principal part Π(s, z, θ) obtained in (61). Note that, unlike φ_ρ, the functional φ_{µ>ν} does not depend on y, other than that y = 0 appears in its right-hand side.
Furthermore, for N = 1 (as the critical shocks degenerate), we obtain a simpler expression. Thus, we see that, under N = 1, with α small and thus β large, the probability P{ν < µ} is quite large. This is because there are no critical shocks competing with extreme shocks, as all critical and extreme shocks are just extreme shocks (as mentioned in Section 2), and the occurrence of just one extreme shock will sharply increase the likelihood of the system's failure on the basis of one extreme shock alone.
On the other hand, when N increases, P{ν < µ} gets smaller, because now critical and extreme shocks compete, while with N large, extreme shocks, as noticed in Section 2, have an edge over critical shocks. Yet, P{ν < µ} = aβ/(α + aβ) in this case reveals an even stronger competition between extreme and δ-shocks, and with much lesser impact of the critical shocks. Note that if α is large, β is very small, making the probability P{ν < µ} of an earlier failure due to one extreme shock disproportionately smaller, because β in the numerator essentially determines the value of P{ν < µ}.
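To make the dependence on α concrete, here is a quick numerical check of the large-N expression above (our illustrative numbers):

```latex
% Large-N expression quoted above: P\{\nu<\mu\} = \frac{a\beta}{\alpha + a\beta}, with a = 0.5.
% With \alpha = 0.1 (so \beta = 0.9):
P\{\nu<\mu\} = \frac{0.5\cdot 0.9}{0.1 + 0.45} = \frac{0.45}{0.55} \approx 0.82;
% with \alpha = 0.9 (so \beta = 0.1):
P\{\nu<\mu\} = \frac{0.05}{0.95} \approx 0.05 .
```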
As noted, the graph in Figure 13 shows P{ν < µ}(N) as a function decreasing in N.
A Modified System. Model 2. Preliminaries
In Model 2, we redefine δ-shocks to single out only those pairs of shocks with shorter time lags than δ that are either critical or extreme.In the previous sections, the δ-shocks applied to any pairs of consecutive shocks under times lags smaller than δ.The latter meant that any two consecutive harmless shocks with time lags less than δ also qualified, and because M = 1, any such occurrence was deemed fatal for the system.In some models, such an occurrence is of concern.In other models, it takes more than two consecutive shocks in a row to raise flags.In the present modification, we define two consecutive shocks within a close time proximity of each other to be a threat to the system only if either of them is harmful (with some further constraints to follow).Note that in the event two consecutive harmful shocks occur within a time frame less than δ, there can be arbitrarily many harmless shocks in between, of which all were δ-shocks in the context of Model 1.Now this is no longer the case.
An extreme shock can be a δ-shock but only if it is preceded by a critical shock.In this case, the system fails on two counts.If an extreme shock is not δ, the system instantly fails without giving a chance to any consecutive harmful shock to be δ.Thus, a harmful shock can be δ only if it is a consecutive shock.Consequently, it can be critical (in particular, Nth critical) or extreme.
In a nutshell, in a pair of two consecutive harmful shocks with a time lag less than δ, the second shock is deemed a δ-shock if the first of the two is neither extreme nor Nth critical.
As mentioned, the harmless shocks still land in the system, but they are no longer counted as δ-shocks regardless of how many of them occur consecutively with time lags less than δ.
In contrast with Model 1, we assume that at time τ 0 , when the system was first observed, exactly one, strictly critical, shock landed (that is, at any time t ≤ τ 0 ).
We form the process of harmful shocks from {τ n }.Suppose T 1 is the time of the first harmful shock after epoch τ 0 , that is, and furthermore, Thus, T j is an embedded sequence of consecutive harmful shocks (that excludes harmless shocks).
We proceed with a more rigorous construction of the embedded point process T_j. Define the random index η. Then, T_1 = τ_η and, furthermore, the recursion implies that T_j = τ_{η_j}, j = 0, 1, . . ., with η_0 = 0. (86) In Figure 14 below, we focus on new variants of δ-shocks. Here, we see a path of the shocks process in which δ-shocks can only be among harmful shocks. In particular, τ_6 = T_2 is identified as the second critical shock and also a δ-shock. The three other shocks squeezed between T_1 = τ_2 and T_2 = τ_6 are harmless, and their roles reduce only to the determination of the distance between consecutive harmful shocks (critical or extreme). Thus, the δ-shock at τ_6 = T_2 is also fatal.
In another scenario, for convenience depicted in the same figure, we assume that the shock at T_2 is not δ. Then, the system will keep functioning until eventually reaching the time-to-failure at T_ρ (introduced in Section 8), that is, at T_7, when the first extreme shock lands. This shock becomes fatal on two counts: firstly, because it is extreme, and secondly, because it is also δ. If neither of these were to take place at T_7, then the next harmful shock would be fatal, because it is Nth critical (assuming that N = 8). It seems obvious that ∑_{k=0}^∞ ε_{T_k} is a delayed renewal process of consecutive harmful shocks that we will mark in a few moments. The "delay" is driven by one critical shock striking the system at time T_0 or earlier but associated with T_0.
Marked point process of shocks: To identify δ-shocks, we start with the sequence of i.i.d. Bernoulli r.v.s V_j (being identifiers of δ-shocks), followed by another sequence of identifiers U_j, i.i.d. binary r.v.s valued in {1, N}. Because W_{η_j} > H_1 a.s., there is no need to include the number 0 in the set {1, N}, nor any other number, which would be equally irrelevant. With the shock identifiers V_j, U_j, we complete our marking of the point process ∑_{k=0}^∞ ε_{T_k} (now the support counting measure) as a delayed marked renewal process (V, U, T). Note that (V, U, T) runs indefinitely, continually hitting the system even after it fails. We fix this in Section 8 after some more formalism.
For the forthcoming analysis, we need to find the joint functional where [V] and [U] are associated equivalence classes of r.v.s distributed as V j s and U j s, respectively, and τ η is the time between T 0 = τ 0 and T 1 .
We begin with the marginal functional Γ(1, z, θ) = Ez U e −θτ η , which satisfies the key fluctuation theorem (Dshalalow [35]) established there for a marked delayed renewal process with three active components and holding also for a single active component, in this case U. A component in a multivariate marked process is deemed active if it is supposed to cross some critical threshold.Any other component that has no threshold to cross is referred to as passive.If a multivariate marked point process carries only one active component, say, U, all other passive components assume their respective values on U's crossing.For example, if another passive component is a time component, then it registers the time when U crosses that threshold.All other passive components assume their values accordingly at the time of U's crossing.The process no longer evolves after this event, or the rest of its future is of no further interest.
If a multivariate marked process has more than one active component, there is a competition (or a game) between them, in which one of the active components hits their associated threshold first.When it occurs, the rest of active as well as all passive components assume their respective values, and the process stops.We dealt with this situation in Theorem 1, established specifically for a wide class of reliability models with competing failure processes.Now, of the two components U and T 1 = τ η in Γ(1, z, θ) = Ez U e −θτ η , τ η is passive and it assumes its time value when U turns 1 or N for the first time after T 0 .To apply the key fluctuation theorem, we first turn to Section 2 concerning the functional γ(y, z, θ) = Ey X z Y e −θ∆ , although we focus on the two last components, Y and ∆.Recall that Y took values 0, 1, N, but from the above setting we are interested in the binary version of Y when Y is either 0 or greater than 0.
Recall that in Section 2, the sequence {B n } of partial sums B n = ∑ n k=1 Y k was associated with index ν =min{n : B n ≥ N}, which would have been a ruin index in the absence of δ-shocks.This was because the system (with no δ-shocks) was harassed exclusively by harmless, critical, and extreme shocks, and because the system could endure some number of critical shocks and one extreme shock to land at the total of more than N shocks altogether upon its failure.In our present setting, we deal with a special case when the process of shocks is "suspended" or, rather, observed at τ η when the first harmful shock lands, which can be either critical or extreme and thus valued 1 or N, respectively.
To make use of the key fluctuation theorem, we temporarily dismiss the initial critical shock at τ 0 and set ν =min{n ∈ N : B n ≥ 1}.The suspension of the initial critical shock makes us assume that Y 0 = B 0 = 0. Correspondingly, if B n turns ≥ 1 for some n > 0 at the first time, it means that B n−1 = 0 and so are all other Bs with lower indices, but B n ∈ {1, N}.In the event the more general version of ν =min{n : B n ≥ N} is of interest, we would use the formula as per Dshalalow [35] (or even Dshalalow's earlier results pertaining to this basic case).
Operator D is the same as the one in (Di) of Section 3. In our present case, as argued, we need the version of (91) precisely for N = 1, namely (92), which instantly reduces the right-hand side of (91) as per (Di). Note that with all the simplicity of (92), the formula would be difficult to deduce by direct probabilistic means. Now recall from (15) that, under the assumed independence of the r.v.s Y (an integer-valued identifier of W) and ∆, we have the ingredients (93)-(95) needed in (92). Substituting (93)-(95) into (92) yields (96), where the marginal PGF a_0(z) is given by (97).
Remark 6. Formulas (96) and (97) embellish the marginal distribution (which is type 1 geometric, with interrenewal times included in the classic geometric experiment of a series of independent Bernoulli trials) that alone could be readily obtained by the double expectation formula without the use of fluctuation calculus. However, the joint distribution Γ(1, z, θ) is more difficult to justify using straightforward probability arguments. Furthermore, the factor a_0(z) in (96) and (97) points to a rather surprising outcome: the r.v.s U and τ_η are independent, which would not be obvious when using other means. Furthermore, Formula (96) identifies the distribution of the r.v. U, which looks conditioned on the set Ω_0. In a nutshell, fluctuation calculus turns out to be a straightforward method that gives a fully secure result, circumventing common ambiguities of the double expectation (in some difficult cases) and other, less conventional, tools.
Thus, {T_n} is an embedded point process with the marginal LST of the interrenewal times satisfying Formula (98). In particular, if ∆ ∈ [Exp(γ)], that is, when γ(θ) = γ/(γ + θ), the transform takes a fully explicit form. In conclusion, we consider a modified system with shocks landing at T_0 = 0, T_1, T_2, . . ., of magnitudes W_0, W_1, W_2, . . . such that H_1 < W_0 ≤ H_2; for the other Ws, when H_1 < W_i ≤ H_2, the shock at T_i is critical, and when W_i > H_2, the shock is extreme and thus fatal. The system fails if a single extreme shock hits the system at some time T_i or if a shock at T_i is Nth critical, counting from that at T_0. To avoid triviality, we thus assume that N > 1. The δ-policy has not been introduced yet.
The former δ-shock policy, applied to any types or mixes of shocks, is altered in the following way. It is now restricted entirely to critical or extreme shocks (harmful shocks). More specifically, if a shock that landed at time T_i is such that |(T_{i−1}, T_i]| < δ, this shock is referred to as a δ-shock, provided that the shock at T_{i−1} is critical but not Nth critical. That said, the shock at T_i can be (a) critical, (b) Nth critical, or (c) extreme. Now we are back to the formalism of the functional Γ(y, z, θ) = Ey^V z^U e^{−θτ_η}, where V = 1_{{τ_η<δ}}. This functional was not a part of the key fluctuation formula, because, combined with V, the underlying trivariate process did not meet the conditions in the associated theorem of [35]. However, with the newly established U that turned out independent from τ_η, we can use the same argument as in the formation of γ(y, z, θ), regarding U and V, τ_η as independent. Thus, because V is binary with the distribution (α_0, β_0), we define G(y, θ) as the marginal of Γ(y, z, θ) in the form (101), with α_0(θ) = Ee^{−θτ_η}1_{{τ_η<δ}} and β_0(θ) = Ee^{−θτ_η}1_{{τ_η≥δ}}.
Note that α_0(θ) and β_0(θ) in (101) are implicit unless we specify them, as in our forthcoming discussion in Example 3. Finally, (α_0, β_0) is the marginal distribution of V, with the corresponding PGF. In summary, we note the following:
Proposition 1. In Model 2, where δ-shocks are formed through pairs of consecutive harmful shocks with time lags less than δ, the associated marked point process of harmful shocks (embedded in the process of all shocks of Model 1) is a marked delayed renewal process whose interrenewal times, jointly with their marks Us and Vs, are distributed in accordance with the functional Γ(y, z, θ), satisfying Formula (102) and exhibiting independence of U and V, τ_η, with the respective marginal transforms in (100)-(104).
The distribution of the delay is unspecified and so far is arbitrary.Note that we have not restricted δ-shocks as to how they turn fatal (which we do in the forthcoming sections), nor did we specify exactly how the system fails, except for some allusions and loose preliminaries.
Remark 7 (An informal discussion). Assume we have a process of shocks reduced to harmful shocks only, thus with one threshold H_2. Any shock with a magnitude below H_2 is critical and above H_2 is extreme. Suppose the associated marked random measure is delayed renewal with assumed position-independent marking. The above specifications of U, V, and T (= τ_η) apply, but with the distribution of T being arbitrary. The conditions are the same as in Proposition 1, except that the position independence is now assumed rather than proved. Furthermore, Proposition 1 yields the special case Γ(1, 1, θ) = pγ(θ)/(1 − (1 − p)γ(θ)) of the marginal functional Γ(y, z, θ), rather than making no assumption on Γ(·). Furthermore, Proposition 1 suggests that T_1, T_2, . . . are the successive epochs of harmful shocks and thus with independent and identically distributed interarrival times, following the principles of a "geometric process" of some arrivals at random epochs of time until the first success, with Γ(1, 1, θ) = pγ(θ)/(1 − (1 − p)γ(θ)) obtained using the double expectation formula. Then we used the key fluctuation theorem to arrive at Γ(1, z, θ) = a_0(z) pγ(θ)/(1 − (1 − p)γ(θ)), where a_0(z) is the new marginal of the shocks' binary identifiers conditioned on Ω_0, that they are exclusively harmful. The consequence altogether is that, under the above actions, we are now on the new traced probability space (Ω_0, F ∩ Ω_0, P_0), P_0 = P(· ∩ Ω_0)/P(Ω_0), where there is no place for harmless shocks anymore. See more in Section 8.
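As a quick check of the reduction used in this remark (a short verification of ours, with p = a + b the probability that a shock is harmful):

```latex
% Substituting \gamma(\theta) = \frac{\gamma}{\gamma+\theta} into the quoted expression:
\Gamma(1,1,\theta)
  = \frac{p\,\gamma(\theta)}{1-(1-p)\,\gamma(\theta)}
  = \frac{p\,\frac{\gamma}{\gamma+\theta}}{1-(1-p)\frac{\gamma}{\gamma+\theta}}
  = \frac{p\gamma}{\gamma+\theta-(1-p)\gamma}
  = \frac{p\gamma}{p\gamma+\theta},
% which is the Laplace--Stieltjes transform of an Exp(\gamma p) random variable,
% in agreement with \tau_\eta \in [\mathrm{Exp}(\gamma p)] used in Example 3 below.
```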
Further Formalism of Model 2

Remark 8.
Reiterating what was said in Remark 7, we note that while in Section 2, Y ∈ {N, 1, 0} with the respective distribution {a, b, c}, the associated identifier U is valued in {N, 1} under the distribution {a_0, b_0}, as per (97). With the above, the marginal PGF a_0(z) of U in (97) can also be justified using the conditional expectation E_0, which is the associated expectation relative to the traced probability space (Ω_0, F ∩ Ω_0, P_0). Here, P_0 is the conditional probability measure E[(·)1_{Ω_0}]/P(Ω_0). We will, however, relax the measure-theoretical contents of our forthcoming calculus.
For notational convenience, we will use P for the conditional probability measure (P 0 ) and the associated conditional expectation as E (in place of E 0 ), bearing in mind, however, that we deal with the system on the traced space, in which the harmless shocks play no role beyond the determination of the joint distribution of the times between consecutive harmful (critical or extreme) shocks and the associated shocks identifiers.
Consequently, the embedded process of shocks can be seen upon T_1, T_2, . . . through U_1, U_2, . . ., where U_k ∈ [U], k = 1, 2, . . . Of course, the Us are shock identifiers and they represent the respective magnitudes of shocks W_{η_1}, W_{η_2}, . . . at T_1, T_2, . . ., which can now only be critical or extreme. Hence, the associated embedded marked point process of times and shock magnitudes ∑_{k=1}^∞ W_{η_k} ε_{T_k} can be replaced with a cruder but sufficiently descriptive variant ∑_{k=1}^∞ U_k ε_{T_k} that will be better suited for the associated random walk analysis, which proceeds under the same course as in Sections 2 and 3, starting with forming the sequence of partial sums of {U_k}. The δ-shocks are included in (89) via the sequence of i.i.d. Bernoulli r.v.s V_k. Acting alone, the sequence would continue until V_k = 1 a.s. However, the sequence, as well as the whole process (V, U, T) = ∑_{k=0}^∞ (V_k, U_k) ε_{T_k}, can be interrupted by an earlier occurrence of an extreme or Nth critical shock. Now we define the ruin index χ, tentatively assuming that M δ-shocks occurring in any order will ruin the system at time T_χ unless other harmful shocks cause an earlier failure. We will again deal only with the special case M = 1, although Theorem 1 is formulated for the general value of M (which we plan to explore in our forthcoming paper). The cumulative ruin index is then ρ = ζ ∧ χ (while it would be more proper to use some different character for ζ ∧ χ than ρ, to tell it from the ρ of Sections 2-6, it would then be harder to associate it with the common ρ in Theorem 1). Consequently, T_ρ is the time-to-failure of this system. Under this formalism of T_ρ, we can revisit Figure 14 and the preceding interpretation, which now makes more sense.
Analogous to Section 2, denote by τ_η the equivalence class of all r.v.s having the same distribution as T_i − T_{i−1}, i = 1, 2, . . . Then, the failure time of the system occurs at T_ρ, with the total count of critical and extreme shocks U_ρ and the δ-shock count A_ρ on the system's failure. Consequently, the marked process (V, U, T) is truncated accordingly at T_ρ.
Example 3. We revisit Example 1 in a similar context. Recall that back then, we set the ∆-marginal distribution of (X, ∆) exponential with parameter γ, that is, ∆ ∈ [Exp(γ)]. This assumption, as we pointed out in Section 7, implied that τ_η ∈ [Exp(γp)], where p = a + b. Now it takes very little to adjust all computations in Example 1, replacing γ with γp. Yet we proceed with details under the new notation, and further from (114) we obtain the remaining transforms. Therefore, from the last expression, (112), and the fact that Eτ_η = 1/(γp), the needed quantities follow.
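For concreteness, a sketch (ours) of the explicit transforms this exponential case produces; it uses only τ_η ∈ [Exp(γp)] and the definitions of α_0(θ), β_0(θ) given after (101), so the resulting expressions should agree with (110)-(115) up to notation:

```latex
% Assuming \tau_\eta \in [\mathrm{Exp}(\gamma p)], direct integration gives
\alpha_0(\theta) = \mathbb{E}\!\left[e^{-\theta\tau_\eta}\,\mathbf{1}_{\{\tau_\eta<\delta\}}\right]
  = \int_0^{\delta}\gamma p\, e^{-(\gamma p+\theta)t}\,dt
  = \frac{\gamma p}{\gamma p+\theta}\Bigl(1-e^{-(\gamma p+\theta)\delta}\Bigr),
\qquad
\beta_0(\theta) = \frac{\gamma p}{\gamma p+\theta}\,e^{-(\gamma p+\theta)\delta},
% and, in particular,
\alpha_0 = \alpha_0(0) = 1-e^{-\gamma p\delta},\qquad
\beta_0 = \beta_0(0) = e^{-\gamma p\delta},\qquad
\mathbb{E}\tau_\eta = \tfrac{1}{\gamma p}.
```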
Competing Processes
Since the new system is similar to that treated in Sections 2-6, we abridge our reasoning and computations, making only some necessary adjustments. The formula analogous to (52) reads as (117) below. Here, M ≥ 1, but it will be reduced to M = 1, while now N ≥ 2, because we assumed that the system started with one critical shock that landed at T_0 = 0. Thus, Γ_0(y, z, θ) = Ey^{A_0} z^{U_0} e^{−θT_0} = z, because A_0 = V_0 = T_0 = 0, while U_0 = 1 a.s., as previously defined. With no restriction on N other than N ≥ 2, we now set M = 1, implying that the expression 1/(1 − sΓ(0, zw, θ)) enters (117).
(117)
Remark 9. In particular, the marginal transform of T_ρ follows, as does [readily from (117)] the mean Eρ. The latter is the mean value of the total count ρ (if we are still dealing with shocks, although the above Formula (117) is for an unspecified process) of all harmful shocks until failure. Indeed, the ρ-marginal PGF follows from (117). In conclusion, the resulting formula holds without any special assumptions or specifications rendered in Sections 7 and 8. It is even invariant of any interpretation imposed on the process dealt with in (117).
Returning to the special case pertaining to Model 2 specified in Section 7, from Γ(y, z, θ) = a_0(z)G(y, θ) = (a_0 z^N + b_0 z)(α_0(θ)y + β_0(θ)), we write down the expression 1/(1 − sΓ(0, zw, θ)) in (117) in its explicit form as F(s, zw, θ), where A_0 = s a_0 β_0(θ) z^N and B_0 = s b_0 β_0(θ) z. F looks simpler than its counterpart in (57). This is because the polynomial in its denominator does not carry a constant term, which (57) has. Expanding F(s, zw, θ) in a series of powers of A_0 w^N + B_0 w gives the series S, which converges to F in a vicinity of w = 0. Then we apply the D_w operator. Lemma 2 is almost identical to Lemma 1, with the same outcome but still slightly different from Lemma 1. Furthermore, and in particular, we close in on the final expression after the use of the operator D_w^{N−2}, and summarize it as Theorem 2.
Theorem 2. In the reliability system (originally set up with four types of shocks: harmless, critical, extreme, and δ-shocks), in which δ-shocks can only be among harmful shocks under the specifications in Sections 7-9 and formalized on the traced probability space (Ω_0, F ∩ Ω_0, P_0), the functional ψ_ρ(s, y, z, θ; N) = Es^ρ y^{A_ρ} z^{B_ρ} e^{−θT_ρ} of the joint transforms of the lifetime T_ρ, the total shock count ρ at T_ρ, the number A_ρ of δ-shocks at T_ρ, and the sum B_ρ of all other shock identifiers at T_ρ, satisfies Formulas (118) and (120).
Note that ρ ≠ A_ρ + B_ρ, as might be assumed, because B_ρ gives the sum of the shock identifiers Us, which assume the values 1 (for a critical shock) and N (for an extreme shock). However, for each ω ∈ Ω_0, B_ρ(ω) identifies how many critical and extreme shocks landed by T_ρ(ω). For example, if B_ρ(ω) < N, we figure that the number of critical shocks was exactly B_ρ(ω), with no extreme shock included and with one δ-shock at T_ρ(ω), which turns out to be the only fatal shock. With B_ρ(ω) = N, the total harmful shock count is N. Again, we know that no extreme shock hit the system at T_ρ(ω), because otherwise B_ρ(ω) would have been 2N − 1 and not N. We just do not know from B_ρ(ω) alone whether the Nth shock was also δ. Finally, with B_ρ > N, we know that the fatal shock at T_ρ(ω) was extreme, or extreme and δ combined.
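The case analysis above can be restated compactly; the small helper below (our naming, for illustration only) classifies the failure type from a realized value of B_ρ exactly as described:

```python
def failure_type_from_B(B: int, N: int) -> str:
    """Classify how the Model 2 system failed, given only the realized B_rho.
    Mirrors the case analysis in the text; the N-th critical and extreme cases
    may additionally coincide with a delta-shock."""
    if B < N:
        # exactly B critical shocks landed, no extreme shock;
        # the last of them, at T_rho, was a (fatal) delta-shock
        return f"delta-shock failure after {B} critical shocks"
    if B == N:
        # N critical shocks accumulated; the N-th was fatal (possibly also delta)
        return "N-th critical shock (possibly combined with a delta-shock)"
    # B > N: an extreme shock landed at T_rho (possibly also delta)
    return "extreme shock (possibly combined with a delta-shock)"
```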
Remark 10. While it is obvious that ψ_ρ is given in its closed form through Equations (118) and (120), we conclude our claim of analytical tractability by calling on the special case of Example 3, through a single insertion of the Example 3 transforms into (118) and (120), as per Formulas (110) and (111).
Here, the system fails regardless of whatever shock (critical, extreme, or δ) strikes it.
δ-Shocks Count
From the above, with s = z = 1 and θ = 0, we find that A_ρ is a Bernoulli r.v. with parameter α_0 Π(1, 1, 0). In a nutshell, the mean δ-shock count lies in [α_0, α_0/(1 − b_0 β_0)). Thus, when N = 2, the second shock is a δ-shock with mean value α_0. Consequently, with N = 2, the above takes an explicit form. Since the system is observed at T_0 with a prior critical shock, β_0 is the probability that a shock at T_1 is not δ, and with N = 2, a non-δ-shock at T_1 is Nth critical or extreme.

10.3. Impact B_ρ of Critical/Extreme Shocks

Recall that B_ρ = ∑_{k=0}^{ρ} U_k is the sum of all shock identifiers collected by the time-to-failure T_ρ. It thus is an integer with 2 ≤ B_ρ ≤ 2N − 1. It is not equal to the shock count, because an extreme shock counts as N, which is the largest quantity of critical shocks. It nevertheless allows us to identify the number of critical and extreme shocks by T_ρ, as noted at the end of Section 9. Formally, if B_ρ > N, then the system fails due to an extreme shock alone or on the count of an extreme and δ-shock occurring at the same time. If B_ρ = N, then the system accumulated exactly N critical shocks by T_ρ, and it failed on the Nth critical shock that turns fatal or on the count of an Nth critical and δ-shock combined.
If B_ρ < N, then B_ρ gives the exact number of all critical shocks landing in the system by T_ρ, when the system fails, and the last of these shocks, at T_ρ, is δ. One needs to be reminded that for various ω ∈ Ω_0, B_ρ(ω) can assume any of those named values, and more accurate information comes from the distribution of B_ρ, obtainable from the marginal PGF Ez^{B_ρ}.

10.4. Total Count of Harmful Shocks (Critical/Extreme/δ-Shocks) until Failure

This applies to the r.v. ρ and its marginal PGF, with the expected number of harmful shocks and δ-shocks combined derived accordingly. We notice τ_ρ, ρ, and A_ρ have broadly the same pattern: an increase to the critical shock probability b_0 results in larger means. Sample means here are based on 100,000 simulated paths for every pair (γ, N). Figure 19 below shows the predicted and estimated means of the number of shocks ρ, δ-shocks A_ρ upon failure, (N × Extreme + Critical) shocks B_ρ, and failure time τ_ρ. As is seen, the dots (empirical) align precisely with the means derived above and run on a much denser mesh of γ values to form smooth curves, providing additional validation. We see a good agreement between the true means predicted as the curves and the Monte Carlo simulations as the dots.
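For reference, a minimal sketch (ours) of how one such Model 2 path can be simulated under the Example 3 assumptions (base interarrival times Exp(γ), p = a + b, M = 1, and one critical shock at T_0 = 0); names and structure are ours, not the authors':

```python
import random

def simulate_model2(gamma=1.0, delta=1.0, a=0.5, b=0.4, N=3, rng=random):
    """One path of Model 2 (M = 1), where delta-shocks are formed only by
    consecutive harmful shocks. Returns (T_rho, rho, A_rho, B_rho)."""
    p = a + b                 # probability that a base shock is harmful
    a0 = a / p                # P{a harmful shock is extreme} (= a/(a+b))
    t = 0.0                   # T_0 = 0; one critical shock lands here (U_0 = 1)
    B = 1                     # sum of identifiers, starting with the initial critical shock
    A = 0                     # delta-shock count
    n = 0                     # harmful shocks counted after T_0
    while True:
        gap = rng.expovariate(gamma * p)   # time between harmful shocks ~ Exp(gamma*p)
        t += gap
        n += 1
        # The previous harmful shock is critical but not N-th critical (otherwise the
        # system would already have failed), so a short gap makes this a delta-shock.
        if gap < delta:
            A += 1
        if rng.random() < a0:
            B += N            # extreme shock
        else:
            B += 1            # critical shock
        if A >= 1 or B >= N:  # failure: delta-shock (M = 1), N-th critical, or extreme
            return t, n, A, B
```

Averaging many such paths for each pair (γ, N) reproduces the sample means plotted against the predictions.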
Summary
In this paper, we studied a reliability system subject to random shocks causing different degrees of damage. The shocks enter the system according to a delayed renewal process {τ_n} with respective magnitudes {W_n}, and they are categorized as harmless, critical, and extreme depending on their strengths relative to two thresholds 0 < H_1 < H_2. We assume that the associated marked renewal process ∑_{n=0}^∞ W_n ε_{τ_n} has position-independent marking; in particular, the W_n are i.i.d. random variables picked from an equivalence class [W]. Correspondingly, a shock of magnitude W is harmless if W ≤ H_1, critical if H_1 < W ≤ H_2, and extreme if W > H_2.
One of the three events can ruin the system: if there is a single extreme shock; if the system accumulates a total of N critical shocks; or if there is a single δ-shock, which is fatal.A δ-shock is defined as an occurrence of two consecutive shocks with a time lag less than some δ.Thus, a pair of two, even harmless, shocks can ruin the system if the time lag between the two is small.Using a common terminology in reliability, there are three competing processes, and the winner is the one that ruins the system first.Understandably, an extreme or Nth critical shock can occur at the same time as a δ-shock, and thus, two processes may end up sharing the reward.
The objective of this paper was to predict the time-of-failure, as well as the shocks count, including δ-shocks, upon failure.Therefore, of interest was to find the joint distribution of the system's lifetime and damages incurred upon its ruin.At the same time, we targeted a closed-form functional of such a distribution that was given as an explicit formula in a symbolic form that is reducible to a totally explicit expression once involved input parameters (such as interarrival shock times and their magnitudes in the form of the joint distribution) are specified.We have less interest in working on asymptotic formulas or algorithms.
The results were based on the stand-alone Theorem 1, which fitted our system's settings, although we obtained far more than we needed. More specifically, we considered a marked point process (A, B, τ) = ∑_{k=1}^∞ (X_k, Y_k) ε_{τ_k} describing the evolution of shocks with their magnitudes and respective time lags. Because that process was terminated at τ_ρ (the time-to-failure), (A, B, τ) was truncated to (A_ρ, B_ρ, τ_ρ) = ∑_{k=1}^{ρ} (X_k, Y_k) ε_{τ_k}. Theorem 1 established a closed-form expression for the functional Φ_ρ(s, y, z, θ, u, v, ϑ; M, N) = Es^ρ y^{A_ρ} z^{B_ρ} u^{A_{ρ−1}} v^{B_{ρ−1}} e^{−θτ_ρ−ϑτ_{ρ−1}} and other variants. The formula even allowed us to handle M δ-shocks in any order, which we, however, reduced to one (postponing the more general case). A further reduction of Φ_ρ led to the functional φ_ρ(s, y, z, θ; N) = Es^ρ y^{A_ρ} z^{B_ρ} e^{−θτ_ρ} (Corollary 1, under M = 1). The closed-form claim was fully supported by Example 2.
In Section 5, we discussed marginal distributions and means of the time-to-failure τ_ρ, the δ-shock count, the sum of all critical/extreme shock identifiers, and the total shock count by τ_ρ, followed by Monte Carlo simulation of the above process and validation of the results.
Section 6 was dedicated to the functional Φ_{µ>ν}(s, y, z, θ, u, v, ϑ; M, N) = Es^ρ y^{A_ρ} z^{B_ρ} u^{A_{ρ−1}} v^{B_{ρ−1}} e^{−θτ_ρ−ϑτ_{ρ−1}} 1_{µ>ν} of the general process under the assumption that the system will be ruined by extreme or critical shocks prior to its failure due to δ-shocks (also established in Theorem 1), and to its special cases. Of particular interest was the probability P{ν < µ} that the failure is due to extreme/critical shocks alone.
In part II of the paper (Sections 7-10), we introduced a variation of the above model, called Model 2. Namely, in Model 1, we assumed that any two consecutive shocks with a time lag less than δ were deemed fatal regardless of what kind of shocks were involved.This protocol did not exclude incidents with pairs of harmless shocks.In some real-world systems, such an approach seems unwarranted, even though it is universally applied to situations where the magnitudes of shocks are hard or impossible to observe.Yet we decided to offer an alternative model in the event this rule ends up being too rigid.In Model 2, we restricted the use of δ-policy to harmful shocks only.That being said, we bypass harmless shocks and include only critical and extreme shocks with short lags that are now deemed as δ-shocks.
In a nutshell, we singled out those critical or extreme shocks which are consecutive and have time lags smaller than δ. Precisely, they must appear in pairs, and if a shock at some τ_j is critical, we bypassed all harmless shocks at τ_{j+1}, τ_{j+2}, . . . until the next critical or extreme shock, say at time τ_{j+k}, comes in close time proximity to τ_j. To proceed further, we singled out the successive epochs T_1, T_2, . . . from the sequence τ_j of critical and extreme shocks. The interarrival times T_1 − T_0, T_2 − T_1, . . . between harmful shocks were easy to find. More challenging was to determine the joint distribution of those times and the identifiers U_1, U_2, . . . of shocks' magnitudes with the identifiers V_1, V_2, . . . of δ-shocks (all binary). We had to use a key fluctuation theorem, previously established. The harmless shocks process was essential in the determination of this joint functional.
Figure 2. Failure due to Mth δ-shock that occurs prior to an extreme or Nth critical shock.
Figure 4. Eτ_ρ as a function of 1/γ in the interval (0.1, 10) for four different fixed Ns: 5, 10, 20, 30. Recall that 1/γ is the mean time between any two consecutive shocks. The rest of the parameters are fixed. We see that Eτ_ρ(1/γ) is monotone-decreasing.
Figure 8. Predicted/estimated mean and absolute error for the failure time τ_ρ.
Figure 9. Predicted/estimated mean and absolute error for the shocks ρ.
Figure 10. Predicted/estimated mean and absolute error for the δ-shocks A_ρ.
Figure 11. Predicted/estimated mean and absolute error for the (N × Extreme + Critical) shocks B_ρ.
Figure 14. A system where δ-shocks can only be among harmful shocks (i.e., critical or extreme).
Figure 17. Predicted/estimated mean and absolute error for the δ-shocks A_ρ.
Pair production due to an electric field in 1+1 dimensions and the validity of the semiclassical approximation
Solutions to the backreaction equation in 1+1-dimensional semiclassical electrodynamics are obtained and analyzed when considering a time-varying homogeneous electric field initially generated by a classical electric current, coupled to either a quantized scalar field or a quantized spin-$\frac{1}{2}$ field. Particle production by way of the Schwinger effect leads to backreaction effects that modulate the electric field strength. Details of the particle production process are investigated along with the transfer of energy between the electric field and the particles. The validity of the semiclassical approximation is also investigated using a criterion previously implemented for chaotic inflation and, in an earlier form, semiclassical gravity. The criterion states that the semiclassical approximation will break down if any linearized gauge-invariant quantity constructed from solutions to the linear response equation, with finite nonsingular data, grows rapidly for some period of time. Approximations to homogeneous solutions of the linear response equation are computed and it is found that the criterion is violated when the maximum value, $E_{\rm max}$, obtained by the electric field is of the order of the critical scale for the Schwinger effect, $E_{\rm max} \sim E_{\rm crit}\equiv m^2/q$, where $m$ is the mass of the quantized field and $q$ is its electric charge. For these approximate solutions the criterion appears to be satisfied in the extreme limits $\frac{qE_{\rm max}}{m^2} \ll 1$ and $\frac{qE_{\rm max}}{m^2} \gg 1$.
I. INTRODUCTION
The semiclassical approximation has been commonly used among a wide variety of physical scenarios where a quantized field on a classical background is investigated, with interesting phenomena emerging from such considerations including the decay of an electric field by the Schwinger effect [1], particle creation in an expanding universe [2], and black hole evaporation via the Hawking effect [3] (see also Refs. [4,5] and references therein). Consider for instance quantum electrodynamics, described in terms of an electromagnetic potential $A_\mu$ and a Dirac field $\psi$, with classical action $S[A_\mu, \bar\psi, \psi]$. The semiclassical theory can be formally described using the concept of the effective action $\Gamma[A_\mu]$, obtained by functional integration of the matter degrees of freedom [6]
$$\exp\{i\Gamma[A_\mu]\} = \int D\bar\psi\, D\psi\, \exp\{ iS[A_\mu, \bar\psi, \psi]\}. \quad (1.1)$$
Within this framework the (semiclassical) Maxwell field equations take the form
$$\partial_\mu F^{\mu\nu} = q\langle \bar\psi \gamma^\nu \psi \rangle \quad (1.2)$$
and replace the proper Maxwell equations of the full quantized theory in the Schwinger-Dyson form $\partial_\mu F^{\mu\nu} = q\bar\psi\gamma^\nu\psi$. In Eqs. (1.1) and (1.2) the electromagnetic field is treated as a purely classical entity. Moreover, the right-hand side of Eq. (1.2) is implicitly a function of $A_\mu$ in the sense that the assumed vacuum depends on $A_\mu$. This is so because the modes of the charged Dirac field, defining the appropriate vacuum $|0_A\rangle$, satisfy equations involving the background field $A_\mu$.
This semiclassical approach is usually regarded as a truncated and effective version of the fully quantized theory, with a limited range of validity.
One advantage of the semiclassical viewpoint is that it provides a clear description of the spontaneous particle creation phenomenon. The nonzero imaginary part of the effective action $\Gamma[A_\mu]$ indicates the quantum instability of the vacuum $|0_A\rangle$ and the corresponding pair creation process [1]. This phenomenon can be better understood in the canonical language: a positive-frequency solution of the Dirac equation $(i\gamma^\mu D_\mu - m)\psi = 0$ at early times will evolve into a superposition of positive- and negative-frequency solutions at late times (this was first described for a gravitational background [2]). The semiclassical approach encapsulates this very important effect in a clear way.
The original calculation by Schwinger [1] involved a background field calculation in which the electric field E is constant in both space and time. A particle production rate was obtained. The dependence on the coupling constant q displayed an essential singularity, $e^{-m^2/qE}$, showing the nonperturbative nature of the Schwinger effect. The damping of the electric field can be deduced from this particle production rate. The real part of the (Heisenberg-Euler) effective action can also account for perturbative effects, such as light-by-light scattering, in agreement with the exact one-loop calculation in the limit of low-frequency light, or the running of the effective coupling constant.
Subsequently, the semiclassical backreaction equation was solved for an electromagnetic field coupled to a massive scalar field or a massive spin-$\frac{1}{2}$ field in 1+1 dimensions (D) [7-9] and in 3+1D [9-11]. The electric field was assumed to be homogeneous in space, but was allowed to vary in time in response to the electric current that occurs when the produced particles are accelerated by the electric field. It was found that the counter-electric field produced by this current initially starts to negate the original background electric field. Eventually the background field is completely canceled, but by this time there is a significant electric current due to particle production, and the result is that the particles keep moving, which generates an electric field in the opposite direction.
The process continues and the particles end up undergoing plasma oscillations, with an overall electric field oscillation in time. Similar studies have also been done by solving the Vlasov equation with a source term to account for particle production [7-9,12], using lattice simulations [13,14], and using classical statistical field theory techniques [11].
In this paper we obtain and further study solutions to the semiclassical backreaction equation in 1+1D for both scalar and spin- 1 2 fields coupled to an electromagnetic field initially generated by a homogeneous, classical current. We have two primary goals. The first is to study the details of the particle production process when backreaction effects have been taken into account, including also the transfer of energy between the electric field and the created particles. The second goal is to estimate the importance of certain types of quantum fluctuations and use the results to assess the validity of the semiclassical approximation.
We study three classical current profiles which generate an electric field that is initially zero.
The first is similar to the previous cases in that the current is proportional to a delta function potential and the electric field goes from zero to a constant value instantaneously. A second profile involves a sudden turn on of the classical current but a gradual turn on of the electric field. The third profile is that of the Sauter pulse [15] in which the current is in the form of a smooth pulse that has a significant value only for a finite period of time. For the Sauter pulse the turn on and, if quantum effects are ignored, the turn off of both the current and the electric field are very gradual.
In all three cases there is a well-defined vacuum state for the quantum fields since the electric field is initially zero. The semiclassical backreaction equation is solved numerically both in the case of semiclassical scalar and spinor electrodynamics. To our knowledge the semiclassical backreaction equations have not been generically studied for the second and third classical current profiles. The first one has been considered in Refs. [10][11][12][13][14].
The particle production process for individual modes of the quantum field has previously been studied in background field calculations where the electric field is either constant [16,17] or is gradually turned on and then off [18]. It was found that a single particle creation event occurs for many modes when the electric field is either constant or approximately constant. Here, we consider particle production when backreaction effects are taken into account. Because of the plasma oscillations, there is a richer evolution for some modes that involves multiple particle creation events and can also involve particle destruction events. We do this for individual modes for the delta function classical current profile.
For completeness, and to give better insight into the particle creation process, we also compute the total number of particles produced for all three profiles and the energy density of the produced particles for the delta function current profile. The energy density of the particles is compared with the energy density of the electric field. Similar calculations have been done previously in 1 + 1D using lattice simulations [13] and in 3 + 1D using canonical quantization [10] and classical statistical field theory techniques [11].
We compute the energy density of the quantum field using the continuous adiabatic regularization prescription, obtaining compatible results. The agreement between both approaches for Dirac massless fermions can be easily understood since the full QED 2 model is integrable [19] and particle production can be well described within the semiclassical framework. The presence of a nonzero mass breaks integrability and hence one could expect it to also break the accuracy of the semiclassical picture.
The validity of the semiclassical approximation is studied here by estimating the importance of some of the quantum fluctuations. The semiclassical approximation breaks down if quantum fluctuations are too large. We use a criterion for the validity of the semiclassical approximation that has been previously applied to the process of preheating in models of chaotic inflation [20].
An earlier version of the criterion has also been used to study the validity of the semiclassical approximation for free scalar fields in flat space when the fields are in the Minkowski vacuum state [21] and for the conformally invariant scalar field in the Bunch-Davies state in de Sitter space in the usual spatially flat cosmological coordinates [22]. To our knowledge no similar study of the validity of the semiclassical approximation has been done previously for scalar electrodynamics or quantum electrodynamics when particle creation occurs due to the presence of a strong electric field.
The method we use to study the validity of the semiclassical approximation involves an analysis of solutions to the linear response equation which can be obtained by perturbing the semiclassical backreaction equation. In general, the linear response equation obtained in this way is an integrodifferential equation which involves an integral over the retarded two-point correlation function for the source term in the semiclassical backreaction equation. In this case, that is the two-point correlation function for the electric current. While the general form is known, the specific forms for the case of a homogeneous electric field in 1+1D coupled to either a massive scalar field or a spin-$\frac{1}{2}$ field have not previously been derived. We do so in the Appendix for both of these cases. Although the linear response equation can be solved directly, there is a simpler method which can be used to obtain an approximate solution which should be valid at early times if the exact solution is relatively small. The method involves computing the difference ∆E between two solutions to the semiclassical backreaction equation which have similar starting values at a given time. This method was used to investigate the validity of the semiclassical approximation during the preheating phase of chaotic inflation in Ref. [20]. It works for the homogeneous solutions to the linear response equation that we consider here.
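For readers who want to experiment, a schematic sketch (ours) of the ∆E procedure described above; `backreaction_rhs` is a hypothetical placeholder for a user-supplied right-hand side of the semiclassical backreaction equations (in practice it must evolve the electric field together with the discretized mode functions and compute the renormalized current), so this is only an outline of the differencing step, not the paper's implementation:

```python
import numpy as np
from scipy.integrate import solve_ivp

def delta_E(backreaction_rhs, state0, eps, t_span, n_out=400):
    """Approximate a homogeneous linear-response solution by differencing two
    nearby solutions of the semiclassical backreaction equations.

    backreaction_rhs(t, state) -> d(state)/dt  (hypothetical placeholder)
    state0 : initial state vector; state0[0] is taken to be E(t0)
    eps    : small perturbation applied to E(t0) only
    """
    t_eval = np.linspace(t_span[0], t_span[1], n_out)
    state0 = np.asarray(state0, dtype=float)
    state0_pert = state0.copy()
    state0_pert[0] += eps                       # perturb only the initial electric field
    sol_a = solve_ivp(backreaction_rhs, t_span, state0, t_eval=t_eval, rtol=1e-8, atol=1e-10)
    sol_b = solve_ivp(backreaction_rhs, t_span, state0_pert, t_eval=t_eval, rtol=1e-8, atol=1e-10)
    # The difference of the electric-field components approximates a solution of the
    # linear response equation as long as it remains small compared with the background.
    return t_eval, sol_b.y[0] - sol_a.y[0]
```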
The paper is organized as follows. In Sec. II brief reviews are given of the quantization of complex charged scalar and spin-1 2 fields in electrodynamics. The semiclassical backreaction equations are also discussed along with the renormalization techniques used. In Sec. III the details of the particle production process are investigated for the case of a classical current profile proportional to a delta function. Also discussed is the transfer of energy between the electric field and the created particles.
The criterion for the validity of the semiclassical approximation that we use is discussed in Sec. IV where both the general form and the specific form of the linear response equation are displayed for the separate cases of a scalar field and spin-1 2 field coupled to the electromagnetic field. In Sec. V some of the results of numerical calculations we have made related to the validity of the semiclassical approximation are presented and discussed. A summary of our results and some conclusions are given in Sec. VI. The Appendix contains derivations of the specific contributions to the linear response equations from the current-current commutators when scalar fields and spin 1 2 fields are coupled to the electromagnetic field.
SPIN-1/2 FIELDS
In this section we will briefly describe the models under consideration: a quantized complex scalar field and a quantized Dirac field, both interacting with a background electromagnetic field generated by a prescribed classical source. For the two systems under investigation, we restrict our analysis to a 1+1D Minkowski space and assume that the background electric field is spatially homogeneous, so that E = E(t) in a given reference frame. We use units such that $\hbar = c = 1$ and our convention for the metric signature is (−, +).
A. Scalar field
The classical action (2.1) represents a scalar field φ(t, x) coupled to a background electromagnetic field, where $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$ is the electromagnetic field-strength tensor, the mass of scalar field excitations is given by m, and $D_\mu = \partial_\mu - iqA_\mu$ is the gauge-covariant derivative required to make the action gauge invariant. $J^\mu_C$ is a classical and conserved external source. Variation of Eq. (2.1) with respect to the vector potential yields the classical Maxwell equations (2.2), in which the source term $J^\mu_Q$ induced by the scalar field is bilinear in φ. We fix the vector potential in the convenient form (2.4), with a single nonvanishing spatial component A(t). Quantizing the scalar field and expanding it in terms of modes yields an expansion in which $a_k$, $a^\dagger_k$, $b_k$, and $b^\dagger_k$ are the usual creation and annihilation operators obeying the standard commutation relations. Due to spatial homogeneity we can write the modes in a factorized form in which $f_k(t)$ satisfies an ordinary differential equation and is normalized using the Wronskian condition. This allows us to recast the scalar field mode decomposition as in Eq. (2.9).
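For orientation, the mode equation and Wronskian condition referred to above take the standard form for a homogeneous electric field in this gauge (our reconstruction of the omitted displays; signs and conventions should be checked against the original):

```latex
% Standard scalar-QED mode equation in 1+1D with a homogeneous potential A(t):
\ddot f_k(t) + \Bigl[\bigl(k - qA(t)\bigr)^2 + m^2\Bigr] f_k(t) = 0,
% together with the Wronskian (normalization) condition
f_k\,\dot f_k^{\,*} - f_k^{*}\,\dot f_k = i .
```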
Quantizing the spin-1/2 field and expanding it in terms of modes yields Eq. (2.13), where B_k, B†_k, D_k, and D†_k are the usual creation and annihilation operators obeying the standard anticommutation relations. Using the formalism introduced in Refs. [23,24], we can construct two independent spinor solutions, Eq. (2.14). Utilizing the Weyl representation of the Dirac matrices γ^µ, the Dirac equation reduces to two coupled first-order equations for the mode functions h^I_k(t) and h^II_k(t), Eqs. (2.16a) and (2.16b). The normalization condition |h^I_k|² + |h^II_k|² = 1 ensures that the standard anticommutation relations between the creation and annihilation operators are satisfied.
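To make the structure of the mode evolution concrete, the short sketch below integrates the scalar mode equation (2.7) for a single mode in a prescribed, hand-picked vector potential and checks that the Wronskian condition (2.8) is preserved. It is only an illustration: the toy profile A(t), the parameter values, and the initial data are assumptions made for the example, not choices taken from the paper.

# Minimal sketch: integrate the scalar mode equation
#   f_k'' + [(k - q*A(t))^2 + m^2] f_k = 0
# for a prescribed, purely illustrative A(t).
import numpy as np
from scipy.integrate import solve_ivp

q, m, k = 1.0, 1.0, 0.5              # illustrative parameters (assumed)
A = lambda t: 10.0 * t               # a toy, linearly growing vector potential

def rhs(t, y):
    f, fdot = y[0] + 1j * y[1], y[2] + 1j * y[3]
    omega2 = (k - q * A(t))**2 + m**2
    fddot = -omega2 * f
    return [fdot.real, fdot.imag, fddot.real, fddot.imag]

# positive-frequency (adiabatic vacuum) initial data at t = 0
w0 = np.sqrt((k - q * A(0.0))**2 + m**2)
f0 = 1.0 / np.sqrt(2.0 * w0)
fdot0 = -1j * w0 * f0
sol = solve_ivp(rhs, (0.0, 5.0), [f0.real, f0.imag, fdot0.real, fdot0.imag],
                max_step=1e-3, rtol=1e-8, atol=1e-10)

f = sol.y[0] + 1j * sol.y[1]
fdot = sol.y[2] + 1j * sol.y[3]
# The Wronskian f * conj(f') - conj(f) * f' should stay equal to i
print("Wronskian drift:", np.max(np.abs(f * np.conj(fdot) - np.conj(f) * fdot - 1j)))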
C. Semiclassical backreaction equation and renormalization
A simple way to obtain the semiclassical backreaction equation is to replace J^µ_Q in Eq. (2.2) with ⟨J^µ_Q⟩ and then use Eq. (2.4) and either Eq. (2.9) or Eq. (2.13), with the result
−Ė(t) = J_C(t) + ⟨J_Q⟩(t) .    (2.17)
Here we have simplified the notation by omitting the superscript x on J_C and ⟨J_Q⟩ since in this case the t component of these vectors vanishes. When particle production occurs the background electric field accelerates the produced particles, creating a current which then reacts back on this electric field. In the semiclassical approximation this current is ⟨J_Q⟩. The net electric field E(t) is then generated by both the classical current J_C and the current from the created particles ⟨J_Q⟩.
We now obtain the generic forms of the finite, physical expression of ⟨J_Q⟩ for both scalar and fermion fields. This is nontrivial since the formal expressions for the current are quadratic in the quantized fields. Here we will explain how the ultraviolet divergences can be tamed by using the so-called adiabatic regularization method. The method was originally proposed to obtain finite expectation values for the stress-energy tensors of scalar fields in expanding universes [25][26][27] (see also Refs. [4,5] for scalar fields and Refs. [28][29][30][31][32][33] for fermion fields). The adiabatic method has been adapted to treat spatially homogeneous electric backgrounds in Refs. [8,34,35], and it has been improved to make it consistent with gravity in Refs. [23,36,37] and connected to the DeWitt-Schwinger proper-time expansion in Ref. [38]. Here we follow the procedure proposed in Refs. [23,36,37].
Scalar field
It is useful to symmetrize the current operator for the scalar field. Taking the expectation value of the symmetrized current in the vacuum state then leads to a mode integral, Eq. (2.19). Note that the µ = 0 component of the current is identically zero, meaning that no net charge is created. The integral (2.19) contains ultraviolet divergences and hence must be renormalized.
Since the external electric field is assumed to be spatially homogeneous, it is especially convenient to use an extension of the adiabatic regularization method. For scalar fields the procedure is based on the standard WKB-type expansion of the field modes. In our case one writes the ansatz
f_k(t) = (2Ω_k(t))^{-1/2} exp[ −i ∫^t Ω_k(t′) dt′ ] ,
where Ω_k is expanded in powers of derivatives of A(t) as Ω_k = ω^(0) + ω^(1) + ω^(2) + · · · . The leading term ω^(0) is assumed to be of zeroth adiabatic order, while ω^(1) is of adiabatic order one, etc. The choice of the leading-order term ω^(0) determines uniquely the subsequent orders. A natural possibility [35] is ω^(0) = √((k − qA)² + m²), which assumes that A(t) should be considered as a variable of adiabatic order zero, Ȧ of adiabatic order one, etc.
However, A(t) is intrinsically a dimensionful quantity and this suggests an alternative possibility.
As proposed in Refs. [23,36], one can also choose ω^(0) ≡ ω = √(k² + m²). This choice is attached to the adiabatic assignment of one for A(t), while Ȧ is considered to be of adiabatic order two, etc. This second possibility is actually the only consistent possibility in the presence of both electromagnetic and gravitational backgrounds. We then obtain the renormalized current ⟨J_Q⟩_ren given in Eq. (2.21). Similarly, one can also determine the renormalized energy density ⟨T_00⟩ = ρ induced by the quantized field, Eq. (2.22).
Spin-1/2 field
For the spin-1/2 field the appropriate antisymmetrized current is the one given in Eq. (2.23) [4]. The expression for µ = 0 corresponds to the induced electric charge and, as expected, ⟨J^0_Q⟩ is identically zero, i.e. no net charge is created. The renormalized expression for the spatial component of the spin-1/2 current evaluated in the vacuum state, Eq. (2.24), was obtained in Refs. [23,37]. It is particularly interesting to consider the massless case, where the first two terms in the integral in Eq. (2.24) cancel and the expression for the current becomes
⟨J_Q⟩_ren = −(q²/π) A(t) .
This result is consistent with the two-dimensional axial anomaly, where J^µ_5 = ψ̄ γ^µ γ^5 ψ and J^µ_Q = −q ε^{µν} J_{ν5}. Furthermore, the renormalized energy density is given in Eq. (2.27).
III. SEMICLASSICAL FRAMEWORK
In this section we study both the details of the particle production process and the transfer of energy between the electric field and the produced particles for some solutions to the semiclassical backreaction equation for the delta function current profile mentioned in the Introduction.
The vacuum instability due to pair production was first realized by Heisenberg and Euler [39], who predicted, on the basis of an effective action for a constant and homogeneous electromagnetic background, a pair production rate in an electric field of order ∼ q²E² e^{−πm²/(qE)}. Schwinger, using the modern language of QED, computed the imaginary part of the one-loop effective action, also for a homogeneous and constant electric field, to evaluate the vacuum persistence amplitude (for a historical perspective, see Ref. [40]). From the exponential factor one notes immediately that the order of the critical scale for pair production can be defined to be
E_crit ∼ m²/q .    (3.1)
For some of the numerical work described in the following sections we compare the classical electric field to E_crit, and for those comparisons we take E_crit to be equal to m²/q, as is customary in the literature on the Schwinger effect.
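As a quick illustration of how sharply pair production turns on near the critical scale, the suppression factor exp(−πm²/(qE)) can be tabulated for a few field strengths measured in units of E_crit = m²/q; the snippet below (illustrative values only) does just that.

# Illustrative only: Schwinger suppression factor exp(-pi m^2 / (q E))
# for fields quoted in units of E_crit = m^2/q.
import numpy as np

for ratio in [0.1, 0.5, 1.0, 5.0]:          # E / E_crit
    suppression = np.exp(-np.pi / ratio)    # since m^2/(qE) = 1/ratio
    print(f"E = {ratio:>4} E_crit  ->  exp(-pi m^2/(qE)) = {suppression:.3e}")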
While particle production in quantum field theory is a nonlocal process, for free quantum fields such as the ones we are considering, it is possible to define a time-dependent particle number that is based on the WKB approximation for the modes of the quantum field. This has been done previously in the electric field case in Refs. [16][17][18] where background electric fields were considered.
While there was some variation in the details depending on the order of the WKB approximation used, it was found for a constant electric field that when a given mode starts out in an adiabatic vacuum state, as the vector potential A(t) increases in time, there is a particle creation event that occurs when |k − qA| ∼ m and lasts for a relatively short period of time, after which the particle number for that mode approaches a constant value. Here we use the zeroth-order WKB approximation of Refs. [16][17][18]: writing the exact mode functions and their derivatives as [Eqs. (3.3a) and (3.3b)]
f_k(t) = (2ω)^{-1/2} [ α_k(t) e^{−iΘ_k(t)} + β_k(t) e^{+iΘ_k(t)} ] ,    ḟ_k(t) = −iω (2ω)^{-1/2} [ α_k(t) e^{−iΘ_k(t)} − β_k(t) e^{+iΘ_k(t)} ] ,
with Θ_k(t) = ∫^t ω dt′ and ω ≡ √((k − qA)² + m²), and substituting these expressions into Eq. (2.7) converts the mode equation into two first-order coupled differential equations for α_k(t) and β_k(t). Substitution into the Wronskian condition (2.8) gives the condition |α_k(t)|² − |β_k(t)|² = 1. Note that if the vector potential stops varying in time then the zeroth-order WKB approximation becomes exact and α_k and β_k become Bogoliubov coefficients which relate the in vacuum state to the out vacuum state. With this motivation one can define the time-dependent particle number for a given mode k to be N_k(t) = |β_k(t)|², with the total number of created particles at time t given by an integral of N_k(t) over k. Inverting Eqs. (3.3a) and (3.3b) gives α_k(t) and β_k(t) explicitly in terms of f_k(t) and ḟ_k(t). A similar analysis can be done for spin-1/2 particles: time-dependent Bogoliubov coefficients can be obtained by first defining an analogous decomposition for the spinor mode functions and then imposing the corresponding normalization relations, with the result that the particle number per mode is again given by |β_k(t)|².
A classical current adds energy to the electric field and, if particle production occurs, then some of the electric field's energy is used for this process. If the classical current shuts off at some point then, since the calculations are being done in flat space, energy is conserved but can still be transferred between the electric field and the produced particles. To see this, note that the energy density of the electric field is ρ_elec = E²/2. A formula for the energy density of a scalar field in the case of a homogeneous electric field in 1+1D is given in Eq. (2.22), and one for the energy density of a spin-1/2 field is given in Eq. (2.27). With these definitions it is easy to check that, once the classical current vanishes, the total energy density ρ_elec + ρ_ren is conserved, the quantity whose vanishing guarantees this being precisely the semiclassical Maxwell equation for the electric field (2.17). Thus one can investigate the time dependence of the transfer of energy between the electric field and the particles by simply plotting ρ_elec and ρ_ren. We note that in our approach energy conservation is a rigorous consequence of the adiabatic renormalization prescription.
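The projection used to extract the time-dependent particle number can be written compactly. The sketch below implements the zeroth-order WKB projection |β_k|² = (ω/2) |f_k − iḟ_k/ω|², which follows from the decomposition written above; the function name and the normalization of the total number (left unspecified) are ours, and the snippet is an illustration of the idea rather than a reproduction of the paper's code.

# Sketch: time-dependent particle number |beta_k(t)|^2 from a mode function f_k(t)
# via the zeroth-order WKB projection (assumed form; see Eqs. (3.3)).
import numpy as np

def beta_squared(f, fdot, k, A_t, q=1.0, m=1.0):
    """|beta_k|^2 for given f_k(t), f_k'(t) and vector potential value A(t)."""
    omega = np.sqrt((k - q * A_t)**2 + m**2)
    # Projection onto the negative-frequency WKB branch:
    #   beta_k e^{+i Theta} = sqrt(omega/2) * (f_k - i f_k'/omega)
    return 0.5 * omega * np.abs(f - 1j * fdot / omega)**2

# sanity check: an exact adiabatic-vacuum mode has |beta|^2 = 0
k, q, m, A_t = 0.3, 1.0, 1.0, 0.0
omega = np.sqrt((k - q * A_t)**2 + m**2)
f = 1.0 / np.sqrt(2 * omega)        # vacuum mode function at t = 0
fdot = -1j * omega * f
print(beta_squared(f, fdot, k, A_t))   # ~ 0

# The total number N(t) is then an integral of |beta_k(t)|^2 over k
# (with a normalization factor that we leave unspecified here).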
To study the effects of both particle production and the transfer of energy we consider models in which the electric field is initially generated by a classical current of the form
J_C(t) = −E_0 δ(t) .    (3.11)
Since the electric field is zero for t < 0, there is a natural initial vacuum state, which for a scalar field corresponds to the positive frequency mode functions f_k ∝ e^{−iωt} with ω = √(k² + m²); for a spin-1/2 field the initial vacuum state is defined by the analogous choice for h^I_k and h^II_k. Since the classical current is zero for t > 0, the total energy density of the system is constant for both the scalar and spin-1/2 cases. To solve the semiclassical backreaction equations numerically we have used dimensionless variables and parameters. We have scaled the mode equations, (2.7) for scalars and (2.16a), (2.16b) for spin-1/2 fields, and also the semiclassical Maxwell equation (2.17) in terms of the electric charge q. The resulting scaled parameters include the dimensionless electric field Ẽ = E/q, the amplitude Ẽ_0 = E_0/q, and the mass parameter m²/q², along with corresponding rescalings of the mode functions for the scalar field. We also use definitions involving E_crit, the critical scale for pair production defined in Eq. (3.1).
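A heavily simplified sketch of the resulting numerical problem is given below: a grid of scalar modes is evolved together with the electric field, with the induced current approximated by a crude mode sum in which only the instantaneous vacuum contribution is subtracted and a momentum cutoff is imposed by hand. This is not the adiabatic regularization used in the paper, the sign conventions (E = −Ȧ and Ė = −⟨J_Q⟩ once the classical current has shut off) are assumptions of the sketch, and all parameter values are illustrative; the code is meant only to convey the overall structure of a backreaction calculation.

# Schematic backreaction loop for a charged scalar in 1+1D (illustrative only).
# The "renormalization" below is an ad hoc per-mode vacuum subtraction with a
# momentum cutoff, NOT the full adiabatic scheme described in the text.
import numpy as np
from scipy.integrate import solve_ivp

q, m, E0 = 1.0, 1.0, 10.0                 # illustrative parameters
ks = np.linspace(-60.0, 60.0, 2401)       # crude momentum grid / cutoff (assumed)
dk = ks[1] - ks[0]
nk = ks.size

def unpack(y):
    A, E = y[0], y[1]
    f    = y[2:2+nk]        + 1j * y[2+nk:2+2*nk]
    fdot = y[2+2*nk:2+3*nk] + 1j * y[2+3*nk:2+4*nk]
    return A, E, f, fdot

def current(A, f):
    """Crude vacuum-subtracted mode sum standing in for <J_Q> (schematic)."""
    kin = ks - q * A
    omega = np.sqrt(kin**2 + m**2)
    return 2.0 * q * np.sum(kin * (np.abs(f)**2 - 0.5 / omega)) * dk / (2.0 * np.pi)

def rhs(t, y):
    A, E, f, fdot = unpack(y)
    omega2 = (ks - q * A)**2 + m**2
    fddot = -omega2 * f
    dA, dE = -E, -current(A, f)           # assumed conventions: E = -A', E' = -<J_Q>
    return np.concatenate(([dA, dE], fdot.real, fdot.imag, fddot.real, fddot.imag))

# initial data: vacuum modes, field already kicked to E0 by the delta-function current
omega0 = np.sqrt(ks**2 + m**2)
f0 = 1.0 / np.sqrt(2.0 * omega0)
fdot0 = -1j * omega0 * f0
y0 = np.concatenate(([0.0, E0], f0.real, f0.imag, fdot0.real, fdot0.imag))

sol = solve_ivp(rhs, (0.0, 2.0), y0, max_step=5e-3, rtol=1e-6)
print("E(t_final) =", sol.y[1, -1])       # the field should be modified by the induced current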
A. Particle production and energy transfer
Here we investigate some of the details of the particle production process, including the transfer of energy between the electric field and the particles, for solutions to the semiclassical backreaction equation when either a scalar field or a spin-1/2 field is coupled to the electric field and the classical current is given by Eq. (3.11). The specific solutions considered have E_crit = m²/q = 10 and either E_0 = E_crit or E_0 = 5E_crit.
In Fig. 1, some of our results for a scalar field coupled to the electric field are shown for E_0 = E_crit in the top panels and E_0 = 5E_crit in the bottom ones. It is apparent that as soon as particle production starts to occur, the initial electric field decays and the electric current increases as a consequence of the created particles. When the electric field has been reduced significantly the current reaches a plateau and the particle creation saturates. Furthermore, when the electric field changes sign and its magnitude again becomes large, the particle creation rate is enhanced while the current is slowed and then reversed. This results in plasma oscillations. Note also that the duration of the initial growth of the electric current ⟨J_Q⟩ is of the same order as the duration of the initial growth in the particle number Ñ.
In Fig. 2, some of our results for a spin-1/2 field coupled to the electric field are shown for E_0 = E_crit in the top panels and E_0 = 5E_crit in the bottom ones. Comparing Fig. 2 with Fig. 1, one finds that for the smaller value of the initial electric field, E_0 = E_crit, all of the details are very similar to the scalar field case. For the larger initial value of the electric field many of the general features are also similar, including the initial damping of the electric field and subsequent plasma oscillations. However, some of the details differ significantly. Due to Pauli blocking the particle production for the spin-1/2 field effectively shuts off fairly early in the process. One result is that there is less energy permanently transferred to the particles than in the scalar field case.
There are some differences in both the scalar field and spin-1/2 cases between the solution for which the electric field is at the critical value initially and the solution for which it is initially much larger. As would be expected there is significantly more particle production and a significantly faster initial damping for the larger field. Once the plasma oscillations begin there also appears to be a much faster approach of the amplitude of the electric field and the total number of particles to their asymptotic values when the initial electric field is larger. Further, examination of the energy density shows that a significant amount of the initial energy of the larger electric field is permanently transferred to the particles during the first damping phase and this increases during the plasma oscillation phase. For the smaller field less energy is transferred initially to the particles during the first damping phase and the permanent transfer of energy to the particles upon each plasma oscillation is smaller.
For both the scalar and spin-1/2 fields, a clear correlation is found between the maxima of the energy density of the created particles and the maxima and minima of the current due to the created particles. For cases in which the total number of particles continues to increase significantly after the first burst of particle production, the maxima in the energy density of the created particles correlate with the middles of the time periods when the total number of particles is approximately constant. As expected, the minima of the energy densities of the created particles correspond to times when a new round of significant particle production is just beginning, in cases where there is significant particle production after the first burst. In general the periods of significant particle production correspond to periods when energy is being transferred to the particles. It is interesting to note that the above results, obtained within the adiabatic renormalization prescription in the continuous limit, are compatible with the results obtained using a similar method in 3+1D [10], as well as those obtained in 1+1D and/or 3+1D using lattice simulations [13,14] and classical statistical field theory techniques [11].
[Displaced caption for Figs. 1 and 2: The mass parameter is m²/q² = 10, and thus E_0/q = 10 and 50 respectively. In the left panels the electric field Ẽ and the electric current ⟨J_Q⟩_ren are plotted. For each of the middle panels the blue dashed curve corresponds to the energy density of the electric field ρ_elec, the orange solid curve represents the energy density of the created particles ρ_ren, and the straight yellow line is the total energy density of the system. The total particle number Ñ is plotted in the right panels.]
It was shown in Refs. [16][17][18] that a single particle creation event occurs for an individual mode if the background electric field is either constant or approximately constant. What is different here is that the backreaction of the produced particles produces plasma oscillations. The resulting oscillations of the electric field lead to some modes undergoing multiple particle creation events and sometimes also particle destruction events. This can be seen in Fig. 3, where the time evolution of the function |β_k|² for Ẽ_0 = 1 is shown for both the scalar field and spin-1/2 field cases. Comparison with the plot of the vector potential A(t) shows that the creation, or destruction, process for an individual mode k happens when |k − qA(t)| ≈ m. For completeness we extend our analysis to the massless limit for the spin-1/2 field. In this case, the mode equations (2.16a) and (2.16b) decouple and, with the initial conditions corresponding to the initial vacuum state, they can be solved analytically; the solutions are given in Eq. (3.17). (For a detailed analysis of the adiabatic invariance of the particle number see Ref. [41].) As in the general case, the total energy of the system is conserved. We note the exact analytic solubility of the case m = 0 is due entirely to the axial anomaly in 1+1D. In fact, the constant |q|/√π is the mass of the "photon" in the Schwinger model generated by radiative corrections [19]. In the massless case the (nonlocal) effective action Γ[A_µ, J_C] can be obtained exactly and it describes a gauge-invariant vector field with mass |q|/√π (see, for instance, Ref. [42]). In the massive case the effective action does not describe an integrable model [43,44] and the semiclassical picture is expected to break down at some point. The validity of the semiclassical approximation for massless and massive spin-1/2 fields for the asymptotically constant classical profile is addressed in Sec. V A.
IV. VALIDITY CRITERION FOR THE SEMICLASSICAL APPROXIMATION
The semiclassical backreaction equation can be derived from Eq. (1.1) via a loop expansion [6].
In this case when solving the semiclassical backreaction equation, the semiclassical approximation breaks down if contributions from the quantum terms to the equations become comparable to those of the classical background field and any other classical fields. The reason is that one expects higher-order terms in the loop expansion to be important in that limit. However, there is a different way to derive the semiclassical backreaction equation called the large-N expansion. In this expansion one considers N identical quantum fields coupled to the background field, which to leading order is treated as a classical field. At next-to-leading order in the large-N expansion, quantum effects due to the background field first appear [45,46]. Thus in this expansion it is consistent to consider solutions to the semiclassical backreaction equation for which the quantum fields have a significant effect on the classical background field. Here we will take N = 1 and consider a wide range of situations, ranging from those where the background electric field is small compared with the (Schwinger) critical scale E_crit ≡ m²/q and quantum effects are correspondingly small, to those where the background electric field is large compared to the critical value and quantum effects are correspondingly large. The critical value is the threshold at which a significant amount of particle production is expected to occur.
The large-N expansion provides a formal framework for the semiclassical backreaction equation when quantum effects are significant. However, it does not guarantee that the semiclassical approximation is valid. There are three reasons. The first is that interactions of the quantum fields which are coupled to the classical background field are ignored in most cases, including those considered here. This works if the interactions are small over the time scales relevant to the problem. The second is that even if the next-to-leading-order terms in the large-N expansion are initially small in size, it has been shown in certain quantum mechanics calculations that they undergo secular growth [47], and there is evidence that secular growth also occurs for such terms in quantum field theory [48]. However, there is also evidence that partial resummations of certain classes of Feynman diagrams eliminate this problem [49,50]. The third is that the semiclassical backreaction equation involves an expectation value of some quantity, such as the electric current or stress-energy tensor, that is constructed from the quantum fields. For an expectation value to be a good approximation to what one would measure in quantum theory, it is necessary that quantum fluctuations are small. There are problems associated with some of the quantities that can be used to characterize such fluctuations, as described in Refs. [21,51,52]. For example, it has been shown for the symmetric part of the stress-energy tensor two-point correlation function that there can be state-dependent divergences in the limit that the points come together [51]. A related issue is that it has been shown, in at least one case, that in the limit that the points come together different renormalization schemes can give different results for a particular quantity made from one component of the stress-energy tensor two-point correlation function [52]. There can also be covariance issues with some of the quantities made from the stress-energy tensor two-point correlation function [21].
There is a correlation function that is free of these problems and which emerges naturally from the semiclassical theory itself, and that is ⟨[J(t, x), J(t′, x′)]⟩. By perturbing the semiclassical backreaction equation one is led to the so-called linear response equation, which contains this correlation function and which describes the time evolution of perturbations about a given semiclassical solution. A criterion was developed in Ref. [21] for the validity of the semiclassical approximation in gravity which states that a necessary condition for the semiclassical approximation to be valid is that any linearized, gauge-invariant scalar quantity constructed from solutions to the linear response equations with finite nonsingular initial data should not grow without bound. It is important to emphasize that this is not a sufficient condition for the validity of the semiclassical approximation. The criterion was adapted to cover preheating during chaotic inflation [20], where a significant amount of particle production occurs and quantum effects are large. If the criterion is applied to semiclassical quantum electrodynamics then it would state that the semiclassical approximation breaks down if any linearized gauge-invariant quantity constructed from solutions to the linear response equation with finite nonsingular initial data grows rapidly for some period of time.
A. Linear response equation
The linear response equation, whose general form is given in Eq. (4.1), is obtained by perturbing the semiclassical backreaction equation about a given solution. To analyze the behaviors of solutions to this equation, particularly at early times, it is useful to break the solutions to the semiclassical backreaction equation into two parts,
E(t) = E_C(t) + E_Q(t) ,
with E_C the solution to the classical Maxwell equation with the same classical current [Eq. (4.2b)] and E_Q the part due to quantum effects [Eq. (4.2a)]. From the structure of the linear response equation it is clear that its solutions δE can be broken up in exactly the same way. Then, the criterion for the validity of the semiclassical approximation can be modified to state that if the quantity δE_Q grows significantly during some period of time then the semiclassical approximation is invalid. It is worth noting that because ⟨J_Q⟩ and δ⟨J_Q⟩ are constructed from solutions to the mode equation, which depend on the vector potential A and therefore indirectly on E, E_Q depends on E_C and δE_Q depends on δE_C.
In Appendix A it is shown for both the scalar and spin-1/2 coupled systems that, for homogeneous perturbations, δ⟨J_Q⟩ depends upon the two-point correlation function for the current. A more general derivation is given in Ref. [53]. For scalar fields the result is an expression involving two integrals over the mode functions [see Eq. (4.5) and the equations preceding it]. It can be shown, using the point-splitting technique, that the divergence structure in the first integral is conveniently compensated for by the divergence structure that is inherent in the second integral.¹ Therefore, δ⟨J_Q⟩ is finite and the overall equation is well defined.
For spin-1/2 fields the renormalized perturbation of the quantum current in (4.1) is given by an analogous expression in terms of the spinor mode functions. Recall that in the massless limit we find that the mode equations decouple and the solutions are given in Eq. (3.17). Thus, for a given value of k either h^I_k or h^II_k is zero, and hence h^I_k h^II_k = 0 for any value of k. Therefore in the massless limit the current-current commutator vanishes.
B. Comparing solutions
To quantify how close two solutions are, the absolute difference between them is straightforward to use, but it is more problematic when one considers the relative difference because the denominator will vanish at certain points. For this reason we introduce a modified version of the relative difference which is guaranteed to be no smaller than zero and no larger than one. Consider two solutions to either the classical or semiclassical backreaction equation in 1+1D, E_1 = E_{1x} and E_2 = E_{2x} (or just E_1 and E_2 since we are only considering one spatial dimension). Then the absolute and relative differences are, respectively,
∆E ≡ E_1 − E_2 ,        R ≡ |E_1 − E_2| / (|E_1| + |E_2|) .
We note that R can be easily reexpressed as a Lorentz-invariant quantity.
¹ It is not obvious that there is a divergence in the second integral because the commutator vanishes in the limit that the points come together. However, a careful analysis shows it to be there.
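These definitions translate directly into code; the helper below (function names are ours) evaluates the bounded relative difference for two field histories sampled on a common grid, returning zero where both fields vanish.

# The absolute and bounded relative differences between two field histories.
import numpy as np

def abs_diff(E1, E2):
    return np.asarray(E1, dtype=float) - np.asarray(E2, dtype=float)

def rel_diff(E1, E2):
    """R = |E1 - E2| / (|E1| + |E2|), bounded between 0 and 1."""
    E1, E2 = np.asarray(E1, dtype=float), np.asarray(E2, dtype=float)
    denom = np.abs(E1) + np.abs(E2)
    out = np.zeros_like(denom)
    np.divide(np.abs(E1 - E2), denom, out=out, where=denom > 0.0)
    return out

print(rel_diff([1.0, -2.0, 0.5], [1.1, -1.9, 0.4]))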
It is useful to apply the relative difference R to two solutions to the classical backreaction equation, which, as can be seen in Eq. (4.2b), are simply integrals over the classical current J_C.
Consider a classical current of the form
J_C(t) = −E_0 ġ(t) .    (4.9)
Here ġ(t) is the time derivative of some well-behaved, dimensionless function g(t), and the solution to the classical Maxwell equation is E_C = E_0 g(t). In the following sections we will consider the cases g(t) = qt/(1 + qt) and g(t) = sech²(qt), with the latter being the Sauter pulse. The solutions are parametrized by the constant E_0. For two solutions to Eq. (4.2b) with (4.9), E_C1 and E_C2, with E_0 = E_01 and E_0 = E_02 respectively, we have for the absolute and relative difference
∆E_C = (E_01 − E_02) g(t) ,    (4.10a)
R_C = |E_01 − E_02| / (E_01 + E_02) ,    (4.10b)
so that R_C does not depend on time. Next, consider two solutions to the semiclassical backreaction equation. Since we are considering classical currents which are zero initially, and an electric field that is zero initially, there is no ambiguity in the choice of vacuum state. Therefore these solutions are also parametrized by the value of E_0 for a given function g(t). Using the subscripts 1 and 2 to denote quantities computed for these solutions, it is clear that the difference ∆E is an exact solution to the equation
−∆Ė(t) = ∆J_C(t) + ∆⟨J_Q⟩(t) ,    (4.11)
obtained by subtracting the two backreaction equations. Suppose at some early time t_1, when E_C is still very small with no significant amount of particle production, that R_C(t_1) ≪ 1. One can then arrange the initial conditions for the perturbation δE such that δE(t_1) = ∆E(t_1). It is also obvious that one can set δJ_C(t) = ∆J_C(t) for all times. Then Eq. (4.11) is approximately equivalent to the linear response equation (4.1) so long as ∆⟨J_Q⟩ ≈ δ⟨J_Q⟩, which one would certainly expect to be the case at times near t_1.
As discussed in the previous subsection [see Eq. (4.2a)], it is more useful at early times to consider the quantity ∆E_Q ≈ δE_Q. To measure the relative growth of ∆E_Q we compute the relative difference
R_Q(t) ≡ |E_Q1 − E_Q2| / (|E_Q1| + |E_Q2|) .    (4.12)
This difference can then be compared to the relative difference between the corresponding classical solutions, R_C in Eq. (4.10b), which does not change in time.
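Putting the pieces together, the comparison described here can be organized as in the sketch below, where solve_backreaction and classical_field are hypothetical stand-ins for the numerical solvers: solve the backreaction equation for two nearby amplitudes, subtract the corresponding classical fields to obtain E_Q for each, form R_Q(t), and compare it with the constant classical value R_C.

# Sketch of the comparison procedure; solve_backreaction(E0) -> (t, E(t)) and
# classical_field(E0, t) -> E_C(t) are hypothetical stand-ins for the solvers.
import numpy as np

def compare_solutions(solve_backreaction, classical_field, E01, E02):
    t, E1 = solve_backreaction(E01)
    _, E2 = solve_backreaction(E02)            # assumed to use the same time grid
    EQ1 = E1 - classical_field(E01, t)         # quantum parts E_Q = E - E_C
    EQ2 = E2 - classical_field(E02, t)
    denom = np.abs(EQ1) + np.abs(EQ2)
    RQ = np.zeros_like(denom)
    np.divide(np.abs(EQ1 - EQ2), denom, out=RQ, where=denom > 0.0)
    RC = abs(E01 - E02) / (E01 + E02)          # classical benchmark, constant in time
    return t, RQ, RC

# Per the criterion discussed below: if RQ stays of order RC the approximate
# solution is under control; if RQ >> RC at early times, the semiclassical
# approximation is suspect.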
Consider two times t 2 > t 1 where t 1 is the initial time discussed above when one imagines fixing the starting values for the linear response equation and t 2 is a relatively early time after that.
Then the possibilities are as follows. (i) If R_Q(t) ≲ R_C then the criterion for the validity of the semiclassical approximation will be satisfied by the approximate homogeneous solutions that we consider up to the time t_2. (ii) If for any times between t_1 and t_2, R_Q(t) ≫ R_C, then the solution to the linear response equation, δE, grows rapidly during at least some part of the period t_1 ≤ t ≤ t_2 and the criterion for validity of the semiclassical approximation is not satisfied. Note that once the semiclassical approximation has broken down, one can no longer trust its solutions even if for later times R_Q ≲ R_C. (iii) Finally, the intermediate case when R_Q is larger than R_C but still of the same order of magnitude is ambiguous. Perhaps the best that can be said is that in this case quantum fluctuations are increasing and so the accuracy of the semiclassical approximation is decreasing in proportion to this increase.
V. NUMERICAL RESULTS
In this section we implement a numerical analysis to study the validity of the semiclassical approximation for two different classical source profiles. To do so, we use the method described in the previous section to compare the numerical solutions of the semiclassical backreaction equation for two distinct, but very close, values of the external source amplitude E_0. The first profile considered has a classical source current given by
J_C(t) = −E_0 q/(1 + qt)²    (5.1)
for t ≥ 0 and J_C = 0 for t < 0. The classical solution of the Maxwell equation (−Ė_C = J_C) gives rise to the asymptotically constant electric field profile
E_C(t) = E_0 qt/(1 + qt)
for t ≥ 0. The second profile considered is the Sauter pulse, with source current given by Eq. (5.3), corresponding to g(t) = sech²(qt) and thus to the electric field profile E_C(t) = E_0 sech²(qt) [Eq. (5.4)]. As discussed in Sec. IV B, it is useful, particularly at early times, to work with the quantity E_Q in Eq. (4.2a), which is the difference between the net electric field and the electric field E_C that would be present if there were no quantum effects. Therefore the natural quantity to consider is the relative difference R_Q in Eq. (4.12). In what follows, numerical results will be shown for calculations of R_Q and other quantities such as E(t), ⟨J_Q⟩, and N for scalar and spin-1/2 semiclassical electrodynamics, first for the asymptotically constant classical profile and then for the Sauter pulse classical profile. As stressed before, we mainly focus on the early-time behavior. In both cases it is assumed that the electric field and vector potential are initially zero. As a result, for scalar fields the initial conditions for the mode functions are those of the initial vacuum state, and for spin-1/2 fields the initial conditions are the analogous ones for h^I_k and h^II_k. First, we discuss the mass dependence of the function R_Q and its relation to the validity of the semiclassical approximation, with a focus on the asymptotically constant profile. Then, we show the results of our analysis for the most relevant case E_0 ∼ E_crit = m²/q for both the asymptotically constant profile and the Sauter pulse. As in Sec. III, for the numerical computations we use the dimensionless parameters described therein. However, in this section the electric field and the electric current are given in terms of E/q and J/q² respectively.
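For reference, both classical profiles and the corresponding classical fields follow directly from g(t); the snippet below simply evaluates E_C = E_0 g(t) and J_C = −E_0 ġ(t) for the two choices of g(t) used here, with illustrative parameter values.

# The two classical profiles used in this section: g(t) = qt/(1+qt) and
# g(t) = sech^2(qt), with J_C = -E0 * dg/dt and E_C = E0 * g(t).
import numpy as np

q, E0 = 1.0, 10.0                               # illustrative values

def asympt_const(t):
    g = q * t / (1.0 + q * t)
    gdot = q / (1.0 + q * t)**2
    return E0 * g, -E0 * gdot                   # (E_C, J_C)

def sauter(t):
    s = 1.0 / np.cosh(q * t)
    g = s**2
    gdot = -2.0 * q * s**2 * np.tanh(q * t)
    return E0 * g, -E0 * gdot                   # (E_C, J_C)

t = np.linspace(0.0, 5.0, 6)
print([asympt_const(ti)[0] for ti in t])        # E_C rising toward E0
print([sauter(ti)[0] for ti in t])              # E_C pulse decaying from E0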
Since we are considering multiple cases and subcases, a summary of all of the relevant information, with figure references, can be found in Table I.
A. Asymptotically constant classical profile
Massless spin-1/2 field
For a massless spin-1/2 field the renormalized current is fixed by the two-dimensional axial anomaly, ⟨J_Q⟩_ren = −(q²/π) A(t), so the semiclassical backreaction equation reduces to
Ä + (q²/π) A = J_C ,
which is the equation for a simple harmonic oscillator with frequency |q|/√π and external source J_C. In this case, the linear response equation is just
δÄ + (q²/π) δA = δJ_C .    (5.8)
Note that δ⟨J_Q⟩_ren = −(q²/π) δA and also that the initial conditions for δJ_C can be arranged so that δJ_C ≡ ∆J_C.
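Because the massless spin-1/2 case reduces to a driven harmonic oscillator, the boundedness of the response is easy to check numerically. The sketch below integrates δÄ + (q²/π) δA = δJ_C for a small perturbation of the asymptotically constant source; the perturbation amplitude and the relation δE = −δȦ are assumptions of the example.

# Massless spin-1/2 case: the linear response equation (5.8) is a driven
# harmonic oscillator,  deltaA'' + (q^2/pi) deltaA = deltaJ_C.  Illustrative check.
import numpy as np
from scipy.integrate import solve_ivp

q = 1.0
dE0 = 1e-3                                          # small change in the source amplitude (assumed)
deltaJ_C = lambda t: -dE0 * q / (1.0 + q * t)**2    # perturbation of the asymptotically constant current

def rhs(t, y):
    dA, dAdot = y
    return [dAdot, deltaJ_C(t) - (q**2 / np.pi) * dA]

sol = solve_ivp(rhs, (0.0, 50.0), [0.0, 0.0], max_step=0.01)
dA = sol.y[0]
# delta E = -d(deltaA)/dt stays bounded and oscillates with frequency |q|/sqrt(pi)
dE = -np.gradient(dA, sol.t)
print("max |delta E| =", np.max(np.abs(dE)))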
For the asymptotically constant profile, J_C is given in Eq. (5.1). With initial conditions A(0) = 0 and E(0) = 0, we immediately find an explicit solution for the electric field in terms of the cosine and sine integral functions Ci(t) and Si(t). Hence, we can conclude that for any two solutions E_1(t) and E_2(t) with E_0 = E_01 and E_0 = E_02 respectively, the relation R_Q(t) = R_C is always satisfied. Although this result was derived for the asymptotically constant profile (5.1), it holds for any classical current of the form J_C = −E_0 ġ(t).
Massive spin-1/2 field
We next study the relationship between the behavior of R_Q, the mass of the spin-1/2 field, and the value of E_0 in Eq. (5.1). As illustrated in our numerical results below, the most important effect on R_Q comes from the size of the dimensionless quantity qE_0/m². We distinguish between three different cases: (i) qE_0/m² ≫ 1, in which the mass is relatively small compared to the electric field and there is a lot of particle production, (ii) the intermediate case qE_0/m² ∼ 1, where there is a significant amount of particle production, and (iii) qE_0/m² ≪ 1, in which the mass is relatively large compared to the electric field and there is very little particle production.
In the intermediate cases shown in Fig. 5, where qE_0/m² ∼ 1, there is a significant amount of particle production, and once enough particle production has occurred the value of R_Q starts to increase rapidly, possibly exponentially for qE_0/m² = 1. This rapid rise continues until the backreaction of the particles on the background electric field is strong enough that the electric field has stopped increasing and has begun to noticeably decrease in size. Thus in the intermediate case it appears that our criterion for the validity of the semiclassical approximation is not satisfied, due to the rapid and significant growth in R_Q at relatively early times.
The beginning of the transition from intermediate to large effective masses is shown in Fig. 5, where various quantities, such as the electric field, are plotted for E_0/q = 1 and m²/q² = 1 and m²/q² = 2. As expected, the amount of particle production that occurs decreases significantly as qE_0/m² decreases and thus as the effective mass increases. Note that the time scale on which backreaction effects occur increases significantly with an increase in the effective mass.
In the very-large-mass limit qE_0/m² → 0, the electric field will not have enough energy to create particles, so one expects that ⟨J_Q⟩_ren → 0 and E → E_C. This is in agreement with the decoupling theorem in perturbative quantum field theory [54]. Heavy masses decouple in the low-energy description of the theory, which in this case is purely classical electrodynamics for m² → ∞ with E_0 fixed.
The transition from the intermediate case to the small-effective-mass case when E_0/q = 1 is shown in Fig. 6. Comparison with Fig. 5 shows that the intermediate case extends to m²/q² = 0.1, but not to m²/q² = 0.01, which has a qualitatively different behavior. In particular, for the relatively small-mass and zero-mass cases the particle production is more rapid and backreaction effects on the electric field are significant after a much smaller amount of time than for intermediate masses.
Examination of the behavior of R Q shows that it does not grow rapidly in time for the small-mass case and, as mentioned above, is constant in the massless case. Thus our criterion for the validity of the semiclassical approximation is satisfied by the homogeneous approximate solutions that we consider in the relatively small-mass case.
In the above analysis the value of the ratio qE_0/m² was varied by changing the mass of the field. [Displaced figure caption fragment: Figure 5. Here, the values E_01/q = 10 and E_02/q = 10 + 10⁻³ have been chosen for the representation of the function R_Q.]
Scalar field
Unlike the case of the spin-1/2 field, there is no clear limit that we have found as m → 0 for a scalar field coupled to the electromagnetic field. However, our numerical results shown in Fig. 8 indicate that, as for the spin-1/2 field, R_Q grows significantly at early times for qE_0/m² ∼ 1 but grows much less rapidly in time for larger values of qE_0/m². Thus our criterion is violated for qE_0/m² ∼ 1 but, at least for the homogeneous approximate solutions that we consider, it appears to be satisfied for qE_0/m² ≫ 1.
We have found that the behaviors of solutions to the semiclassical backreaction equation when a scalar field is present are in many ways qualitatively similar to the corresponding ones for the spin-1/2 field for cases in which the ratio qE_0/m² is not too large. This is illustrated in Fig. 9 for qE_0/m² = 1 and 10. The main difference occurs for the latter case, where a larger ratio results in more particle production for the scalar field than for the spin-1/2 field due to Pauli blocking. Even in that case the early-time behaviors of R_Q are similar for the two fields.
For the large-mass limit, we expect that, as for the spin-1/2 case, the semiclassical approximation will approach the classical limit as qE_0/m² → 0. [Displaced figure caption fragment: Figure 5. We have chosen E_01/q = 10 and E_02/q = 10 + 10⁻³ to represent the function R_Q.]
B. Sauter pulse classical profile
While our results relating to the validity of the semiclassical approximation are the same for the scalar and spin-1/2 fields for the asymptotically constant classical profile, one might be concerned that there could be significant differences for other classical profiles. To test this we have also investigated the validity of the semiclassical approximation for the Sauter pulse classical profile given in Eq. (5.4) with the classical current (5.3). Unlike the asymptotically constant classical profile, the classical current in this case is a C^∞ function, so there is no extraneous particle production due to the sudden approximation.
We find for the Sauter pulse classical profile, for both the scalar and spin-1/2 cases, that R_Q grows significantly at early times for qE_0/m² ∼ 1, as it does for the asymptotically constant classical profile, and that it is bounded for qE_0/m² ≫ 1. Thus we find that our criterion for the validity of the semiclassical approximation is violated for qE_0/m² ∼ 1 while, for the approximate homogeneous solutions that we consider, our criterion appears to be satisfied for qE_0/m² ≫ 1.
Not surprisingly, given the difference between the Sauter pulse and asymptotically constant classical profiles, there are significant qualitative differences in the solutions for the electric field and in the time dependence of the number of particles that have been created. These results are illustrated in Fig. 10 for both the scalar field and spin-1/2 field cases. It is clear from the plots that for the values qE_0/m² = 1 and 10 the backreaction effects start to be relevant before the classical pulse ends. After the effect of the classical current subsides, plasma oscillations are expected to occur because of the current created by the produced particles. There is evidence for this in the plots of the electric field. In the case qE_0/m² = 1, backreaction effects are relatively weak and the particle creation essentially ceases once the pulse in the electric field has ended. However, for qE_0/m² = 10 the initial plasma oscillation is large enough that particles are created in the scalar field case after the pulse ends.
VI. CONCLUSIONS AND FINAL COMMENTS
Numerical solutions to the semiclassical backreaction equation for quantum electrodynamics in 1+1D have been obtained for models of the Schwinger effect where particle production occurs due to the presence of a strong electric field. The particle production results from the coupling of either a quantized massive charged scalar field or spin-1/2 field to a classical electric field. In each case the homogeneous electric field is zero initially, as it would be in a laboratory setting, and is generated by a classical current. We have also used a renormalization scheme for the electric current and for the energy density of the quantum fields that is consistent with what would be used in a curved space background. This is different from previous backreaction calculations where the electric field was nonzero initially [7,8,34].
In agreement with the previous backreaction calculations, it was found that if the electric field becomes large enough so that qE/m² ≳ 1, then a significant amount of particle production occurs.
Subsequently, the produced particles create a current which generates an electric field in the opposite direction which begins to cancel the background electric field. After the initial damping of the background electric field, both the electric field and the current generated by the particles oscillate.
The particle creation process has been discussed in detail for background electric fields in Refs. [16][17][18]. It was found that individual modes undergo a quasilocal particle creation event at roughly the time when (k − qA)² ≈ m². Here we have found that when backreaction effects are taken into account the same type of particle creation events occur. What is different is that, because of the oscillations in the vector potential at late times, there are modes that undergo multiple particle creation events. Furthermore, once a given mode has undergone a particle creation event, it is possible for it to also undergo a particle destruction event, although this does not always happen.
The total number of particles was obtained using the standard definition of a time-dependent particle number [16,17]. For all three profiles considered it was found that the total particle number never decreases by any significant amount but that it is approximately constant for periods of time. This is compatible with previous calculations of the total particle number when the electric field is turned on suddenly by a classical current that is proportional to δ(t) in 3 + 1D using canonical quantization [10] and in both 1 + 1D and 3 + 1D using lattice simulations [13,14].
The energy density of the quantum field was computed for a classical current that is proportional to δ(t) and is thus zero for t > 0. The total energy of the system is then constant and one can unambiguously track the transfer of energy between the particles and the electric field. It was found that a significant amount of energy is permanently transferred to the particles during the first damping phase of the electric field. More is then permanently transferred to the particles upon subsequent oscillations of the electric field. This is also consistent with previous calculations in 1 + 1D using lattice simulations [13] and in 3 + 1D using canonical quantization [10] and classical statistical field theory techniques [11].
Correlations between the energy density of the particles, the current due to the particles, and the total particle number were found. In particular, times when the number of particles grows directly correspond to times when the current is changing, and times when the total number is not growing significantly correspond to times when the current is approximately constant. However, the current keeps oscillating even after the particle number stops growing significantly.
Since semiclassical electrodynamics is an approximation to quantum electrodynamics, an important question is whether this approximation is a good one for a given solution to the semiclassical backreaction equation. We have addressed this question by adapting a criterion developed for semiclassical gravity and modified for chaotic inflation models, that should be satisfied if the semiclassical approximation is valid. It is therefore a necessary but not sufficient condition. The condition is based upon the fact that the retarded two-point function for the current appears in the linear response equations for semiclassical electrodynamics. If this correlation function grows significantly in time and therefore solutions to the linear response equation grow significantly, then one expects that quantum fluctuations are significant. We have approximated homogeneous solutions to the linear response equation by taking two solutions to the semiclassical backreaction equation which are nearly the same at early times and plotting a relative difference between them which we call R Q , defined in Eq. (4.12). In cases where this difference grows significantly in time one expects that the corresponding solution to the linear response equation will also do so.
We have investigated the validity of the semiclassical approximation for both the scalar and spin-1 2 fields using two different classical current profiles which are shown along with the resulting electric field (if backreaction effects are ignored) in Fig. 4.
In the zero-mass limit for the spin-1/2 field, the solutions to the semiclassical backreaction equations are completely determined by the axial anomaly. In this case, there is no growth whatsoever in the relative difference R_Q, and thus, for the approximate homogeneous solutions to the linear response equation that we considered, our criterion appears to be satisfied. We have investigated the behaviors of solutions in the small-mass case, i.e., m² ≪ qE_0, and found that they smoothly approach those found in the zero-mass limit. Thus, for the same type of solutions to the linear response equation, our criterion appears to be satisfied in the small-mass limit as well. Note that in this limit there is a great deal of particle production and backreaction effects are very strong (see Figs. 6 and 7). Although there is no solvable massless limit for the spin-0 field, we have also checked numerically that there is less growth in R_Q with time as we decrease the mass of the created particles (see Fig. 8).
The intermediate case m² ∼ qE_0 is very different. In both the asymptotically constant and Sauter pulse models, and for both the scalar and spin-1/2 fields, once the amount of particle production has become significant, there is a rapid and significant growth in the ratio R_Q. Thus in this case our criterion is not satisfied because of this growth. This is similar to the breakdown of the semiclassical approximation found in Ref. [20] for the preheating phase of chaotic inflation.
In the large-mass limit, where qE_0/m² → 0, particle production does not occur and the behavior of the electric field can be predicted by classical electrodynamics. This is in agreement with the decoupling theorem [54].
It is very likely that the first experimental verification of the Schwinger effect will be for the intermediate-mass case. Thus it is worth examining the predictions for that case more carefully.
First, there is no observed growth in R_Q at very early times before backreaction effects become significant. Therefore our criterion appears to be initially satisfied. However, given the difficulty in creating a strong enough electric field for the Schwinger effect to be observed in the laboratory (the field strength required being on the order of E_crit ∼ 10¹⁸ V/m), the focus of the initial experiments is likely to be on the detection of particles rather than their backreaction effects.
Thus the semiclassical approximation should be able to give a good description of the particle production process at such early times. Second, once backreaction effects become significant, a relatively large number of particles is likely to have been created. In previous work on the study of the validity of the semiclassical approximation for preheating in chaotic inflation [20] it was found that in one case that could be compared there was good qualitative agreement with calculations that used a random-phase approximation [55][56][57] even though the semiclassical approximation broke down early in the process. Similarly, the backreaction calculations in Ref. [13] using classical statistical field theory techniques in 1+1 D are in qualitative agreement with our calculations of the electric field, energy density, and total particle number. Thus the semiclassical approximation can, at least in some cases, provide reasonable qualitative predictions even when its quantitative predictions cannot be trusted.
APPENDIX
Scalar field
Perturbing the vector potential in the scalar mode equation, A(t) → A(t) + δA(t), leads to an inhomogeneous equation for the perturbed mode functions, Eq. (A2). Thus the solutions to Eq. (A2) can be written as a homogeneous solution plus a retarded integral against the source, where δU^H_k(t, x) is a solution to the homogeneous part of Eq. (A2). The explicit form of the retarded Green's function G_R(x, x′) can be found using Eq. (A3) with Eq. (2.9) evaluated in the vacuum state. Restricting attention to spatially homogeneous perturbations, substituting Eqs. (A6) and (A7) into Eq. (A5), and integrating yields Eq. (A8). The perturbation of the renormalized current (2.21) then yields Eq. (A9), and substituting Eq. (A8) and its complex conjugate into Eq. (A9) gives the perturbed current in explicit form. Our goal is to show that the above linear response equation can be written in terms of the current-current two-point correlation function, and after these substitutions δ⟨J_Q⟩_ren for a scalar field has indeed been cast in terms of that correlation function. Note that δf^H_k(t) corresponds to a change of state of the quantum field. For the cases considered in this paper the vector potential and its first time derivative are zero initially, so the perturbations do not cause a change in the state of the field and hence δf^H_k(t) = 0. Then the linear response equation (4.1) takes the form given in Eq. (A13).
Spin-1/2 field
The mode equation for a massive charged spin-1/2 field can be obtained by substituting Eq. (2.13) into Eq. (2.12), with the result
[ i γ^t ∂_t + i γ^x ∂_x + q γ^x A(t) − m ] u_k(t, x) = 0 .    (A14)
If one perturbs the vector potential about some solution to the semiclassical backreaction equation A(t), such that A(t) → A(t) + δA(t), and writes for the mode function u_k(t, x) → u_k(t, x) + δu_k(t, x), then to leading order one obtains an inhomogeneous equation for δu_k, Eq. (A15). For a massive spin-1/2 field the retarded Green's function is a solution to the inhomogeneous equation (A16), where 1 is the identity matrix. Thus the solution to Eq. (A15) can be written in the form of Eq. (A18), where H represents the homogeneous solution. The explicit form of G_R(x, x′) can be found using Eq. (A16) with the Dirac field expansion (2.13) in terms of the spinor solutions (2.14) evaluated in the vacuum state, which yields Eq. (A19). Restricting attention to spatially homogeneous perturbations and using Eq. (2.14) gives Eq. (A20). Changing the integration variable to k in (A19), substituting the result along with (A14) and (A20) into (A18), and integrating first over x and then over k gives Eq. (A21). The perturbation of the renormalized current (2.24) yields
δ⟨J_Q⟩_ren = (q/2π) ∫_{−∞}^{∞} dk [ h^{I*}_k(t) δh^I_k(t) + h^I_k(t) δh^{I*}_k(t) − h^{II*}_k(t) δh^{II}_k(t) − h^{II}_k(t) δh^{II*}_k(t) − (q m²/ω³) δA(t) ] .    (A22)
Equation (A21) and its complex conjugate can be substituted into Eq. (A22) to yield Eq. (A23), which involves the integral ∫ dt′ Im[ h^I_k(t) h^{II}_k(t) h^{I*}_k(t′) h^{II*}_k(t′) ] δA(t′). As in the scalar field case, an explicit expression for the two-point correlation function is needed.
To calculate the two-point correlation function we begin by utilizing the antisymmetrized current density (2.23) with the fermion field mode expansion (2.13) evaluated in the vacuum state. Integrating over the spatial coordinate gives an expression proportional to
∫ dk Im[ h^I_k(t) h^{II}_k(t) h^{I*}_k(t′) h^{II*}_k(t′) ] .    (A24)
Using Community-Based Programming to Increase Family Social Support for Healthy Eating among African American Adolescents
Little is known about emotional and instrumental social support for nutrition behaviors among African-American adolescents. In this paper, we specifically examine intervention effects on emotional, instrumental and total (composite) social support for fruit/vegetable and low-fat dairy intake. Data are presented from a larger intervention, based on Social Cognitive Theory, that was implemented with 38 African-American adolescents and their families to increase fruit/vegetable intake, low-fat dairy intake and physical activity behaviors. One-way ANOVA analyses revealed that intervention participants had positive and significant increases in emotional social support for low-fat dairy intake (P=0.01), total social support for fruit/vegetable intake (P=0.05), and total social support for low-fat dairy intake (P=0.02). Specific recommendations addressing family social support for healthy eating through youth development programming are discussed.
Introduction
In 1983, a report from the U.S. Department of Health and Human Services revealed that the overall health of the nation was improving, but there were significant racial disparities (U.S. Department of Health and Human Services, 1983). Now, more than 20 years later, the problem of health disparities persists (Fuller, 2003). Four of the six causes of death that still disproportionately affect minorities are related to nutrition (i.e., cardiovascular disease, cancer, diabetes, infant mortality). Professionals involved in health promotion programming with youth need proven methods to intervene on specific nutrition behaviors related to health disparities. Family-based interventions aimed at influencing social support for healthy eating may be an effective avenue.
Adolescents and Nutrition
There is a trend for the consumption of fruits and vegetables among children to decline with age. Further, differences in fruit and vegetable consumption among racial/ethnic groups have been noted from childhood through adulthood (Center for Nutrition Policy and Promotion, 2001). At least one study has shown that racial differences in vegetable intake were not significant, but African-American adolescents did report consuming significantly more fruits than Caucasian adolescents (Brady, et al., 2000). A similar study revealed that African-American adults reported consuming fewer fruits and vegetables than Caucasians (Kumanyika & Odoms, 2001). These findings underscore the importance of intervening early with African-American adolescents in order to maintain and improve their levels of fruit and vegetable consumption across the lifespan.
Milk and other dairy products are the major source of calcium in the U.S. food supply, contributing 72% of the available calcium in American diets (Miller, Jarvis & McBean, 2001). Milk has a higher concentration of calcium as compared to other foods, and milk is fortified with vitamin D, which increases calcium absorption (Standing Committee on the Scientific Evaluation of Dietary Reference Intakes, Food and Nutrition Board, & Institute of Medicine, 1999). Without consuming dairy products, it is difficult to meet the dietary calcium recommendations (Food Surveys Research Group, 1999). However, according to data from the 1994-96 Continuing Survey of Food Intakes by Individuals, Americans two years old and older consumed an average of 1.5 servings of dairy per day; the Food Guide Pyramid recommendations are two to three servings per day (Neumark-Sztainer, et al., 1997). Intake of milk and other dairy products has been shown to decrease between six and eleven years of age (Grunbaum, et al., 2002; Neumark-Sztainer, et al., 1997; Standing Committee on the Scientific Evaluation of Dietary Reference Intakes, Food and Nutrition Board, & Institute of Medicine, 1999). Additionally, when comparing race and gender, Caucasian girls report a 4% higher prevalence of milk consumption than African-American girls and Caucasian boys report a 6% higher prevalence of milk consumption than African-American boys (Gillum, 1991). This racial discrepancy in milk and dairy intake for adolescents may be due to the higher proportion of lactose intolerance among African-Americans and may reflect cultural eating habits modeled from parent to child (Kumanyika & Odoms, 2001; Standing Committee on the Scientific Evaluation of Dietary Reference Intakes, Food and Nutrition Board, & Institute of Medicine, 1999).
Compared to Caucasians, African-American children and adolescents are at a higher risk for developing essential hypertension and cardiovascular disease in early adulthood. The consumption of fruits, vegetables and low-fat dairy can greatly reduce this risk (Gillum, 1991). A subgroup analysis of hypertensive African-Americans in the Dietary Approaches to Stopping Hypertension (DASH) study demonstrated a greater blood pressure lowering effect when participants consumed a diet emphasizing fruits, vegetables and low-fat dairy. In the DASH study, a control group who ate a diet emphasizing fruits and vegetables reduced systolic blood pressure by 8.0 mm Hg and diastolic blood pressure by 3.4 mm Hg. An intervention group who ate a diet emphasizing fruits, vegetables and low-fat dairy reduced systolic blood pressure by 13.2 mm Hg and diastolic blood pressure by 6.1 mm Hg (Svetkey, et al., 1999). Increased calcium intake via low-fat dairy products greatly improved blood pressure outcomes in this study compared to a diet emphasizing fruits and vegetables.
Family-based Interventions
Family greatly impacts behavioral development of children (Baranowski & Nader, 1985), and has been characterized as the greatest overall influence on a child's health (Roberts & Wallander, 1992). Regarding dietary intake, family factors have been shown to affect food preferences and subsequent eating behaviors (Sallis & Nader, 1988). Families influence children and young adolescents through social support. Family support for diet has been shown to be more highly correlated to dietary intake than support from friends, although peer influences increasingly impact behavior as adolescents become more autonomous (Sallis, et al., 1987). Nevertheless, intervention studies have shown that families can significantly impact dietary knowledge, attitudes, self-efficacy and behavioral intention (Crockett, et al., 1989), metabolic control (Hanson, et al., 1995), and weight loss (Wadden, et al., 1990) among children and young adolescents. Nader, et al. (1983) reported on an intervention with 24 families (8 each: African-American, Mexican-American, and Caucasian) with children in the third to sixth grades. The significant treatment effects for social support resulting from this study indicated that family-based, social environment-focused interventions were both feasible and important. Perry, et al. (1988) reported on a study comparing a school-based program to an equivalent home-based program with 2,250 elementary students in Minnesota targeting reductions in dietary fat consumption and sodium intake. Students in the home-based program reported more behavior change, showed reduced total fat and saturated fat measured via dietary recall, and had more of the "program encouraged" foods on their food shelves, as compared to the school-based program students.
The Child and Adolescent Trial for Cardiovascular Health (CATCH) was a multi-state efficacy study examining the effects of an intervention to reduce cardiovascular risk factors among adolescents (Edmundson, et al., 1996; Nader et al., 1996). The intervention schools received either a school-based intervention or a school-based intervention plus a family treatment plan. Significant intervention effects were observed for perceived social reinforcement for healthy food choices, improved knowledge, intentions, and self-efficacy. Girls reported significantly greater perceived reinforcement for healthy eating than did boys.
The findings of these studies suggest interventions involving families may be effective avenues for promoting healthy nutrition. The family, as a socially-supportive environment, may in turn reinforce and sustain behavioral changes. The purpose of this paper is to examine the effects of a community-based intervention on family social support for healthy eating. The data were obtained from a larger study designed to promote healthy eating and physical activity among African American families (Wilson, et al., 2004).
Participants
The larger study utilized a quasi-experimental, pretest/posttest intervention design with a control group. Intervention and control groups met at one of two community centers. Participants for the intervention study were recruited from adolescents involved in general health screenings at community- and church-based centers. Eligibility requirements included being between 10 and 15 years of age, weighing less than or equal to the 95th percentile body mass index (BMI) for age and gender, African-American race, normal blood pressure, and not taking medications known to affect blood pressure. Adolescents were invited to participate in the study via phone call solicitation to their parents. During the phone call, parents were asked if they had other children between the ages of 10-15 who would also like to participate, pending screening for eligibility. A control group (attention control) also met once per week for the same duration of time as the intervention groups and participated in a general health education class that did not emphasize changing nutrition or physical activity behaviors.
Examples of topics covered in the comparison group included alcohol and other drug use prevention, HIV/STD/teen pregnancy prevention, stress management, and study skills. In total, 38 African-American adolescents and their mothers participated in the study. Table 1 presents study participant characteristics.
Data Collection
All mothers completed an IRB-approved parental consent form and all adolescents completed an IRB-approved assent form. Mothers completed demographic surveys, trained staff measured adolescents' height and weight, and adolescents completed paper-and-pencil psychosocial scales. Measures were administered to the adolescents in small groups (without mothers in the room) with one-to-one help provided by trained staff, prior to and after intervention participation. Week one involved obtaining baseline information such as food intake and educating the participants on serving sizes. At the end of the last session, the same (post-test) measures were completed by all adolescents.
Intervention Description
Adolescents and at least one of their parents (usually the mother) participated in the five-session intervention. The nutrition intervention goal for the treatment group was to increase fruit and vegetable intake to 6-8 servings per day and low-fat dairy intake to 3-4 servings per day, consistent with previous studies by Wilson et al. (Wilson, Sica & Miller, 1999; Wilson, et al., 2002), and following modified DASH diet guidelines (Appel, et al., 1997; Sacks, et al., 1999; Sacks, et al., 2001). Social Cognitive Theory (Bandura, 1986) was used to guide the intervention. Environment, self-monitoring, goal setting, behavioral skills, and social support-seeking skills were identified as the most relevant SCT constructs. To address self-monitoring, the participants were taught to set weekly food intake goals and record their daily food intake behaviors for each week using a dietary and physical activity record. Family members used these records during discussion, problem-solving and goal-setting activities each week. Behavioral skill activities were led by a health psychologist and involved problem solving, goal-setting, practicing positive self-talk, self-reward plans, social support seeking, and long-term maintenance skills. The role of family was always emphasized during these activities by asking the families to have discussions while eating food from the food stations. Each session ended with the families discussing specific behavioral skills with the group. Finally, instructions and preparations for the next session were given.
During this first session, adolescents were also asked to indicate on a list the fruits, vegetables and low-fat dairy foods they liked and disliked in order to determine what healthy foods to provide to the families to facilitate availability and accessibility (environment) of preferred healthy foods in their homes. At the end of session two through session five, adolescents were given individually prepared bags of fruits, vegetables and low-fat dairy items following their documented preferences to facilitate access to healthy foods. Food stations were set up for each session and were designed to teach the families how to prepare snacks and meals emphasizing DASH foods. Each week the recipes were shared with the families, and by the end of the intervention the families were given a book of recipes from all food stations during the intervention and from recipes provided by the family participants. Family members were also taught to record their daily food intake behaviors for each week using diet diaries, which they used in discussion, problem-solving and goal-setting activities (behavioral skills training) during the sessions. Sessions two through five involved four structured activities: 30 minutes of physical activity, 30 minutes of food preparation, 30 minutes of behavioral skills training, and 30 minutes of discussion. During the last session, families volunteered to bring healthy DASH-style foods prepared at home for all to sample.
Staffing
Most of the staff leading intervention activities were African-American and were from the community in which the intervention took place. During the sessions, physical activities included stretching, calisthenics, walking and basketball (led by a certified physical education teacher) as well as aerobics (led by a certified aerobics teacher). Food preparation stations were led by a retired nutritionist from the local extension service and a registered dietician from the local hospital system.
Measures
Modified versions of the Social Support for Eating Scale (Sallis, et al., 1987) and the Inventory of Socially Supportive Behaviors (Barrera & Ainlay, 1983; Barrera, Sandler & Ramsay, 1981) were used to measure emotional social support and instrumental social support, respectively. In addition, two versions of each scale were modified to assess social support for fruit/vegetable intake and low-fat dairy intake, respectively. Because the instrumental support scales had not been previously used with African-American adolescents, those instruments were pilot tested for readability and comprehension prior to the intervention study. Appropriate changes to the instruments occurred prior to the intervention study. Pilot testing procedures and forms were approved by [our university's] Institutional Review Board.
Emotional social support for fruit/vegetable intake and low-fat dairy intake
A modified version of the Social Support for Eating Scale (Sallis, et al., 1987) was used to assess emotional social support for fruit/vegetable intake as well as low-fat dairy intake. These instruments emphasized positive and negative emotional social support. Using a five-point Likert-type scale, ranging from 1 (none) to 5 (very often), respondents answered how often family and friends did what was described in each item during the past month. Ultimately, responses to all items on a given instrument were added together to produce a summary score for that instrument. Wilson and Ampey-Thornhill (2001) demonstrated test-retest reliability correlations of r = .60 to r = .84 for the family social support scale with a sample of 148, 13-16 year old African-American adolescents. The 16-item instrument used by Wilson and Ampey-Thornhill is the same Emotional SS F&V instrument used in the present study.
An alternate version of that instrument, worded for low-fat dairy, was also used in the present study to assess Emotional SS LFD, by replacing the words 'fruit and vegetable' with 'low-fat dairy.'
Instrumental social support for fruit/vegetable intake and low-fat dairy intake
A review of the literature failed to reveal the existence of social support scales specific to Instrumental SS F&V or Instrumental SS LFD that have been validated with adolescents. Therefore, items from an existing instrument designed to assess general instrumental social support (Barrera & Ainlay, 1983; Barrera, Sandler & Ramsay, 1981) were modified to create two separate instruments; one reflecting instrumental social support for fruit/vegetable intake and one reflecting social support for low-fat dairy. Both instruments contained 17 items. Using a five-point Likert-type scale, ranging from 1 (never) to 5 (about every day), respondents answered how often during the past month family members did specific activities with/for them. Ultimately, responses to all items on a given instrument were added together to produce a summary score for that instrument.
Composite social support for fruit/vegetable intake and low-fat dairy intake
A composite measure of social support for fruit/vegetable intake (Composite SS F&V) was obtained by adding the summary scores from the emotional support for fruit/vegetable intake and instrumental support for fruit/vegetable intake together. Similarly, a composite measure of social support for low-fat dairy intake (Composite SS LFD) was obtained by adding the emotional and instrumental summary scores from the low-fat dairy instruments.
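To make the scoring procedure concrete, the following minimal sketch shows how item responses could be summed into instrument summary scores and then combined into a composite score. It is a hedged illustration only: the column names and example values are hypothetical stand-ins for the actual 16-item emotional and 17-item instrumental instruments, not the study's data.

```python
# Minimal sketch (hypothetical item data) of summary and composite social support
# scores. Column names and values are assumptions for illustration only.
import pandas as pd

# Each row is one adolescent; emo_* items use 1-5 (none..very often),
# inst_* items use 1-5 (never..about every day).
responses = pd.DataFrame({
    "emo_fv_01": [3, 5, 2], "emo_fv_02": [4, 4, 1],    # ... 16 emotional F&V items in total
    "inst_fv_01": [2, 5, 1], "inst_fv_02": [3, 4, 2],  # ... 17 instrumental F&V items in total
})

# Summary score for an instrument = sum of all of its item responses.
emotional_fv = responses.filter(like="emo_fv").sum(axis=1)
instrumental_fv = responses.filter(like="inst_fv").sum(axis=1)

# Composite score = emotional summary + instrumental summary for the same behavior.
composite_fv = emotional_fv + instrumental_fv
print(composite_fv)
```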
Fifteen male and fifteen female African-American adolescents, aged 10-15 (M = 12.0, SD = 1.1), were recruited from a community-based weekend basketball physical activity program to give feedback (pilot test) on the Instrumental SS F&V and Instrumental SS LFD scales. This feedback led to the addition of a clarification sentence in the instructions for both instruments that indicated some items refer to meals and some to snacks, changes made prior to use in the intervention study.
Analyses
The present analysis focuses on intervention effects on social support for healthy nutrition behavior. All analyses were performed using SPSS version 11.5. Descriptive statistics were produced to describe the sample and to examine treatment group differences at baseline. Sample demographics from the intervention study are presented in Table 1. Means and standard deviations were calculated for continuous variables; frequencies and percentages were calculated for categorical variables. Correlation analyses were conducted to determine if any participant demographic or behavioral variables co-varied with social support dependent variables. Analysis of variance (ANOVA) was performed to examine treatment effects on change in social support (change score = post-test social support summary score minus pre-test social support summary score) for each type of social support (Emotional Social Support for FV, Emotional Social Support for LFD, Instrumental Social Support for FV, Instrumental Social Support for LFD, Composite Social Support for FV, and Composite Social Support for LFD).
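The original analyses were run in SPSS 11.5; the sketch below only illustrates the same change-score ANOVA with open-source tools. The data frame, column names, and values are hypothetical, and with two treatment groups a one-way ANOVA on the change scores is equivalent to an independent-samples t-test.

```python
# Hedged sketch of the change-score analysis described above, using scipy instead
# of SPSS; all data shown here are invented placeholders.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "group":        ["intervention"] * 4 + ["control"] * 4,   # treatment assignment
    "pre_emo_lfd":  [38, 41, 35, 44, 40, 37, 42, 39],         # pre-test summary scores
    "post_emo_lfd": [47, 50, 43, 52, 41, 36, 44, 40],         # post-test summary scores
})

# Change score = post-test summary score minus pre-test summary score.
df["change_emo_lfd"] = df["post_emo_lfd"] - df["pre_emo_lfd"]

# One-way ANOVA on the change scores by treatment group.
groups = [g["change_emo_lfd"].values for _, g in df.groupby("group")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```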
Results
All participants involved in pre-test data collection completed the study. No significant differences between treatment groups at pre-test were identified. Intervention process evaluation also revealed excellent participation rates (>75% session attendance) by adolescents and parents, with no significant differences between groups. One-way ANOVA analyses revealed that the intervention participants demonstrated significantly greater mean change in social support summary scores for Emotional Social Support for LFD, Composite Social Support for FV and Composite Social Support for LFD than did the control participants. These results are presented in Figure 1.
Figure 1. Change in social support summary scores by treatment group.
Discussion
The family focus of this study as an effective avenue for health promotion is supported by the findings of other studies (Edmundson, et al., 1996; Nader, et al. 1983; Nader, et al., 1996; Perry, et al., 1988; Sallis, et al., 1987) in that health behavior may be positively influenced by significant others during the course of the change process and that family members, especially mothers, have important influences on subjective norms related to healthy nutrition. In addition, it may be important for researchers and youth development program staff involved in health promotion programs to explore and attempt to influence social support more broadly. The present study found that total (composite) social support for a given nutrition behavior was affected by the intervention, whereas it was more difficult to detect changes in specific types of social support for nutrition behaviors.
The social ecological model (SEM) is a conceptual framework that is useful in translating research to practice. According to SEM, an individual's behavior is determined by factors at various "levels" including: the individual (i.e., "within" the person), interpersonal (interactions "between" people, e.g., 4-H club), community (e.g., in a school or county) and policy levels (i.e., "rules" enforced by national, state, or local entities) (McLeroy, et al., 1988; Stokols, 1996). Further, although an individual's behavior may be influenced predominantly by one level, theoretically, their behavior is determined by a complex interaction of forces from multiple levels. Ample opportunities exist through partnering with schools, including health education classes, family nights, health fairs, and school-wide or district-wide healthy eating challenges (school/community levels).
The present study findings must be interpreted with caution for several reasons.
• First, the small sample size could account for the non-significant posttest change scores for Emotional Social Support for FV, Instrumental Social Support for FV and Instrumental Social Support for LFD.
• Second, the intervention dosage was once per week for five weeks. Perhaps meeting more than once per week and/or having a longer intervention could have led to significant differences in other types of social support for the intervention group.
• Third, the intervention intended to change multiple behaviors (i.e., fruit/vegetable intake, low-fat dairy intake, physical activity) and researchers involved in large scale efficacy trials suggest long duration and great involvement is needed to change multiple behaviors (Edmundson, et al., 1996).
The intervention may have been more effective by focusing on only one behavior. This recommendation has also been noted by researchers involved in large-scale trials (Edmundson, et al., 1996).
Because little is known about emotional and instrumental social support for nutrition behaviors among African-American adolescents, these findings warrant further investigation. Although this was a pilot study, the significant effects observed are encouraging and suggest that family-based interventions with African-American families are both feasible and potentially effective avenues for promoting healthy eating. Future research should examine the effects of family-based nutrition-promoting interventions focused on increasing social support for healthy nutrition behaviors with larger samples of African-American families and longer intervention durations. In addition, similar programs should be targeted to other ethnic/racial minority groups and mixed ethnic/race (several population groups together) groups and evaluated.
Table 1. Study participant characteristics by treatment group.
Table 2 presents pre-test and post-test social support scores by treatment group. No participant variables were significantly correlated with social support change scores.
Table 2. Pre- and post-intervention Social Support Scale scores by treatment group. * Pre-test differences not significant. † Significant change from pre-test to post-test.

There are a number of ways that youth development staff could promote (especially through healthy lifestyles initiatives) positive family interactions, emphasizing emotional and instrumental social support, for healthy eating. Below we list specific suggestions, arranged by SEM levels applicable to 4-H programs:
• Summer programs and camps could more actively include/engage family members through a range of activities. For example, at the most basic level, camps could communicate with parents (via newsletters, e-mails, parent materials that come home with each camper, etc.) regarding healthy behaviors learned during camp and how to support the specific behaviors at home. At a more intensive level, youth organizations and clubs could host family camp days or family camp weekends. Additionally, alumni camps targeting parents who were members of the club and their children can be an avenue to reach more families with healthy lifestyle messages (family/club/community levels);
DIGITAL PICTORIAL LEARNING MEDIA FOR TEACHING AFFIXES OF SEVENTH GRADE OF JUNIOR HIGH SCHOOL
Observation shows that there was a problem in mastering vocabulary and an absence of learning media. The aim of this research is to develop instructional media for the mastery of affixes vocabulary in descriptive text through the use of the Digital Pictorial of Affixes Game (POAME) media. This research is research and development (R&D). The research subjects were 7th-grade junior high school students. The implementation took place in the even semester of the 2019/2020 academic year. The researcher carried out observations in three schools; the results of the interviews and observations showed that the students had not yet understood the affixes vocabulary and that there were no media for learning, while the records showed that the students' scores were below the minimum completeness criteria, with averages of 50, 70.7, and 77.3. During media development, the media were reviewed by material experts and media experts so that perceptions could be tested, yielding results of 91.4% from teacher responses and 89.6% from student responses. This proved that the POAME media is valid and feasible to use as an alternative media for learning affixes vocabulary for the 7th grade of MTs Darussalam Kademangan. This is evident from the teachers' and students' responses, which fall into the very feasible category with an interval of 81%-100%.
INTRODUCTION
Knowledge, in particular language study, can be learned by the Indonesian people as defined in Indonesian Government Regulation Number 57 of 2014 on the development, promotion, and protection of languages and literature and the improvement of the function of the Indonesian language. According to Article 4, Section 3 of this regulation, languages other than Indonesian and the regional languages are considered foreign languages. In Indonesia, English is seen as a foreign language, that is, a language which has no widespread or official role (native speakers) in the country [1].
Listening, speaking, reading, and writing are the main skills in learning English, and vocabulary is a component of each of these skills, so it is essential to know vocabulary. Vocabulary is one of the vital aspects of learning English; without vocabulary nothing can be conveyed. According to Alqahtani's research, vocabulary learning is a basic part of learning a foreign language since the meanings of new words are highlighted very frequently, whether in books or classrooms. It is also central to language education and essential for a language learner. Further described as complementary is the relationship between vocabulary knowledge and language use: vocabulary knowledge enables language use and, conversely, language use leads to an increase in vocabulary [4]. Vocabulary is key for teaching the English language, and without it students are unable to understand or communicate with others. According to Pan & Xu (2011), vocabulary is the most important factor in learning a foreign language well, as one of three essential components (phonetics, vocabulary, and grammar) [6]. A good vocabulary is a vital part of effective communication, so knowing many words will make you a better writer, speaker, listener, and reader (Susanto, 2017). Students in the class who have high grades in language learning are students who have adequate vocabulary (Susanto, 2017).
Based on the results of observations made by the researcher, affixes vocabulary was found in the students' English book created by the Ministry of Education and Culture in 2017, the 4th revised edition, under the title "Bahasa Inggris: When English Rings a Bell", in chapters 5, 6, and 7. It was also found that the average student was lacking in mastery of the affixes vocabulary. Students have difficulty analyzing the differences between affixes and basic vocabulary as well as how to add affixes to form prefixes, infixes, and suffixes. Even though the teachers have used materials and learning resources that comply with the curriculum and syllabus, students still find it difficult to master the affixes vocabulary.
According to Miarso (2009: 458), learning media are everything that is used to channel messages and that can stimulate the thoughts, emotions, attention, and willingness of learners so as to promote a deliberate, purposeful, and controlled learning process [7]. Teachers who use media provide many and varied benefits for students. The use of media gives a stimulus to students' brains so that the brain can be used optimally and provides a different experience. The media also give students the opportunity to learn independently and arouse new desires and interests.
Several types of learning media are most regularly used: books (Supriana, 2011), pictures/photos, films (Sadiman, 2008: 28), and games (Azhar Arsyad, 2011: 54). Games can take the form of physical activity or software applications (Sugiyo, et al., 2008). Meanwhile, Paul (2003) said that games have a major role in student-centered learning and allow students to be fully involved in learning.
Games used for learning are usually called learning media or educational games. There are various kinds of learning media used to make it easier to learn vocabulary mastery. The Digital Pictorial of Affixes Game (POAME) is an example of a game for mastering affixes vocabulary in English learning for grade 7 of junior high school (SMP). POAME is a derivative of a flashcard game that is packaged into a digital game. POAME is designed in the form of a kit that contains a master game on a flash disk with 3 levels and a guidebook for teachers, students, and the public. Based on the opinions above, in this study the researchers tried to apply the POAME game to provide an alternative learning media for learning affixes vocabulary for grade 7 junior high school students at MTs Darussalam.
RESEARCH METHODS
The type of research used in this study is Research and Development (R&D). Research and development is an important technique or an adequate method of study for improving practice [8]. Research and development is a sequence of steps that can be accounted for to produce a new product or improve an existing product; the product may be hardware or software. This research was conducted in three phases. The first phase was analyzing potential and problems, in which the researcher identified problems and possible solutions and then conducted data collection through interviews, observation, and documents. The second phase consisted of product design and development, product validation, and product revision. The last phase consisted of product testing and a perception test, product evaluation, and the final product.
The first step in product testing was to explain its purpose. The researcher explained that the perception test would be held to find out the appropriateness of the digital pictorial learning media for teaching affixes. Second, the application was shared with the students and the teacher: the researcher copied the game application onto the students' or teacher's laptops. The third step was explaining the product: the researcher explained the function of the media, the content of the media, and how to play, and then the focus group tried to use the media and learn its content. Fourth, the games and guide books were opened and the material from those media was studied together. After finishing with the material, the participants continued playing the game, starting at the low level and then the medium level. The perception test was carried out after product testing to determine feasibility.
RESULTS AND DISCUSSION
The initial step in developing this product was to make a media design using the Construct 2 software application. The tools used by the researchers in making the product were computers/laptops for designing and building it. The first step in making the learning media was media selection. Media selection was done to identify learning media that are relevant to the characteristics of the material. The design of the media was divided into several stages, from designing to manufacturing. The design was based on references from several sources, which resulted in a valid design. The Digital Pictorial of Affixes Game product has an attractive design that follows the basic competencies of the 2013 curriculum. The resulting product is computer-based media intended to increase students' interest and activity in learning.
The second step in making the learning media was format selection. The selection of the form of material presentation was adjusted to the learning media being used. The selection of formats in development is intended for designing learning content and learning resources. The choice of format must be adjusted to the characteristics of the students. The products produced are media to motivate students to learn.
The third step in making the learning media was the initial design. The product was designed systematically from the beginning, with the researcher assisted by a supervisor who provided input and suggestions so that the product would be well arranged. The stages in making the game media with the Construct 2 application were as follows. First, materials such as subject matter, pictures, and backgrounds were prepared, and the previously designed layers were made. The researcher downloaded the needed images from Google. After the images were downloaded, Corel Draw was opened to combine the downloaded images with the desired appearance for the game. The researcher also made layouts for the material and the picture dictionaries. After all layouts were finished being edited, they were exported. The researcher then began building the game in the application by opening Construct 2: double-click on the file to create a new layout, click "new empty project", and set the layout size. When the new view appears, double-click on the layout, select "insert new project", and select "sprite" to insert the image edited in Corel Draw; then open the file to find the layout created in Corel Draw. After all images are inserted, the in-game settings are configured in the Construct 2 event sheet prepared earlier. Finally, the game is exported as an NW.js file.
After the product development stage was completed, the next step was to measure the feasibility of the learning media that had been made. In this step, the researcher had the product validated by 6 experts, namely 3 material experts and 3 media experts. The results of the validation by the 6 experts were as follows: the validation by the material experts yielded 75.7%, which is in the feasible category with a value between 61%-80%, and the result from the media experts was 87.5%, which is in the very feasible category with a value between 81%-100%.
After the validation phase was completed and the product was declared valid by the experts, the product could be tested. The trial was conducted to determine the feasibility of the developed POAME media for use by teachers and students. The results of the teacher and student responses are as follows:
Based on the table above, no teacher disagreed about the product, and the answers varied between agree and strongly agree. The researcher then calculated the Likert index, which was 91.4%, meaning that the teachers strongly agree with using the product; this proved that the media is very proper to use.
In the teachers' perception test there were 30 statements divided into four aspects, namely media appearance, media operation, material presentation, and motivation and benefits of the media, with four assessment criteria: strongly agree (four points), agree (three points), disagree (two points), and strongly disagree (one point). In this perception test, the researcher obtained average scores of three and four.
The aspect of media display is covered in statements 1-11, which concern the initial appearance of the media, the cover, writing, pictures, background, color composition, variation of questions, and feasibility. This aspect received points between 3 and 4, i.e. agree and strongly agree. The next aspect is the operation of the game, with two sub-aspects, namely smoothness and comfort: all five teachers strongly agreed with the smooth running of the game, and for comfort three out of five teachers gave four points. The third aspect is the presentation of material, which covers suitability for the students' level, conformity with KI and KD, ease of understanding, questions, and sample questions. This aspect has eight statements, and the statement that the questions presented are appropriate to the students' level received perfect points from all five teachers. The last aspect is benefits and uses, outlined in eight statements with an average score of four; all the teachers gave perfect scores, i.e. strongly agree, on the eighth statement, namely that students find it easier to learn using this game media. The result of the students' perception test is 89.6%, which is included in the very valid category, where very valid has a percentage between 81%-100%.
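The percentages reported here (91.4% for teachers, 89.6% for students) are of the kind commonly obtained by dividing the total score awarded by the maximum possible score on the four-point scale. The short sketch below illustrates that calculation; the response counts in the example are invented for illustration and are not the study's raw questionnaire data.

```python
# Hedged sketch of a Likert index percentage of the kind reported above.
# The example counts are assumptions, not the study's actual responses.

def likert_index(counts_per_point, max_point=4):
    """counts_per_point maps a scale point (1-4) to how many responses chose it."""
    total_score = sum(point * n for point, n in counts_per_point.items())
    n_responses = sum(counts_per_point.values())
    max_score = max_point * n_responses
    return 100.0 * total_score / max_score

# Example: 5 teachers x 30 statements = 150 responses, split between
# "agree" (3 points) and "strongly agree" (4 points).
example = {4: 99, 3: 51, 2: 0, 1: 0}
print(f"Likert index: {likert_index(example):.1f} %")  # lies in the 81-100 % "very feasible" interval
```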
CONCLUSION
The development of the Digital Pictorial of Affixes Game (POAME) as learning media was valid and appropriate to implement. This is proved by the results of the teachers' and students' responses toward the use of the developed POAME as learning media. The result of the teachers' responses is 91.4% and the result of the students' responses is 89.6%, both categorized as "Strongly Agree". It means that the teachers and students gave positive responses toward the use of the developed POAME for teaching and learning affixes vocabulary to seventh-grade students in junior high school, and it is validated as the final development of POAME.
SUGGESTIONS
The suggestion arising from the findings above is that the Digital Pictorial of Affixes Game (POAME) media for learning affixes vocabulary in English for the seventh grade of junior high school can be used to make it easier for students to master new vocabulary in this field. For other researchers, it is recommended to try other games so that many game references will be available for the process of mastering vocabulary.
Towards Green 3D-Microfabrication of Bio-MEMS Devices Using ADEX Dry Film Photoresists
Current trends in miniaturized diagnostics indicate an increasing demand for large quantities of mobile devices for health monitoring and point-of-care diagnostics. This comes along with a need for rapid but preferably also green microfabrication. Dry film photoresists (DFPs) promise low-cost and greener microfabrication and can partly or fully replace conventional silicon-technologies being associated with high-energy demands and the intense use of toxic and climate-active chemicals. Due to their mechanical stability and superior film thickness homogeneity, DFPs outperform conventional spin-on photoresists, such as SU-8, especially when three-dimensional architectures are required for micro-analytical devices (e.g. microfluidics). In this study, we utilize the commercial epoxy-based DFP ADEX to demonstrate various application scenarios ranging from the direct modification of microcantilever beams via the assembly of microfluidic channels to lamination-free patterning of DFPs, which employs the DFP directly as a substrate material. Finally, kinked, bottom-up grown silicon nanowires were integrated in this manner as prospective ion-sensitive field-effect transistors in a bio-probe architecture directly on ADEX substrates. Hence, we have developed the required set of microfabrication protocols for such an assembly comprising metal thin film deposition, direct burn-in of lithography alignment markers, and polymer patterning on top of the DFP.
Introduction
Miniature analytical devices represent a crucial component in the preparation of future personalized medicine, with emphasis on point-of-care microscale total analysis systems (µTAS). Such devices enable, for instance, virus detection, personalized cancer diagnostics, disease biomarker registration as well as single cell and cell culture studies [1][2][3]. Thus, different types of miniature bioanalytical devices for a multitude of application cases were proposed and demonstrated. Microfluidic devices operate mainly with liquid samples and were discussed for the sorting, analysis and manipulation of single cells by electric, mechanical, biochemical, and electrophoretic methods, by means of piezoactuators and optical and acoustic tweezers [4]. Hence, they enabled even to study the role of mechanical properties and biochemical signals on the migration and invasion of metastasizing cancer cells [5]. In contrast to 'bulky' microfluidic chips, microscale probes are required for minimally invasive in-vitro and in-vivo studies of living cells [6,7]. For instance, high resolution detection of the transepithelial transport of K+-, Na+-, and Cl−-ions in the thin lining fluid at the surface of pulmonary epithelial cells would elucidate the underlying processes of diseases like cystic fibrosis and lung oedema [8,9]. Miniature solid-state ion-selective electrodes were demonstrated for in-vivo measurements of the K+-concentration in rodent brains [7], but were built on silicon. Lieber et al. presented multiple strategies to integrate single silicon nanowire field-effect transistors as biological signal transducers and employed polymeric probes (SU-8 resist, Microchem Corp.) on a silicon substrate and other probe designs to record the heartbeat of single cardiomyocyte cells in-vitro [10][11][12]. Syringe-injectable SU-8-based mesh electronics were recently proposed for therapeutic applications, demonstrating in-vivo recording of physiological parameters in rodent brains [13][14][15].
As these examples demonstrate, elaborated 3D microfabrication tools are essential to realize 3D microscale features, such as microfluidic channels as well as integrated microscale sensors and actors. For the implementation of microfabricated devices on an industrial scale, manufacturing processes must, furthermore, meet high requirements comprising bio-device reliability, hygiene aspects, safety, and scalability. Due to biological safety and medical hygiene, many of the aforementioned miniature devices must be designed for single-use only, which is already true for many of today's macroscopic medical devices and device parts [16]. In this regard, but also in a more general point of view, green fabrication strategies should be supported and developed also in the field of microsystems, which includes the reduction of the number of required process steps, the total amount of waste, the total energy consumption, and the overall need for toxic and environmentally harmful substances.
Here, the use of the commercial epoxy-based dry film photoresist (DFP) ADEX™ (DJ MicroLaminates, Inc.) is discussed as a promising material for microdevice fabrication. DFPs are solid photosensitive polymeric foils, originally developed for the production of printed circuit boards. They can be directly laminated onto wafers, chips, circuit boards, and other substrates and micropatterned by ultraviolet (UV) optical lithography, providing a promising alternative to conventional spin-on photoresists. With their merely organic chemicals, DFPs can be considered environment-friendly in comparison to the conventional silicon- and also glass-based microfabrication processes. The latter commonly comprise energy-intensive single-crystalline silicon synthesis and wafer fabrication as well as aggressive, toxic and environmentally harmful microtechnological process chemicals, such as the highly toxic hydrogen fluoride or the climate-active sulfur hexafluoride (SF6) [17]. One kilogram of SF6 can potentially be more than 20,000 times as climate-active as 1 kg of CO2 with regard to a reference period of 100 years [18]. Hence, such processes should be avoided if possible.
Based on this, polymeric materials, which comprise also biopolymers like chitin [19], lignin [20], shellac [21] and silk [22], are in general appealing and might be preferred for larger scale fabrication in future wherever possible as a cheaper and more environment-friendly alternative [23]. Nevertheless, SU-8-based microfabrication and the widespread soft-lithography methods, based e.g. on stamping, moulding, and nano-imprinting, exist as well but require solid substrates, which are mainly silicon or glass wafers. Additionally, 3D-printing and two-photon lithography are highly versatile tools for the creation of 3D-polymeric structures but they are currently only suited for prototyping rather than for large scale fabrication of identical devices [23][24][25]. Conventional lamination-based techniques allow in principle 3D-manufacturing based on patterned multilayer-architectures. However, they are based on stacking and bonding of multiple previously laser-or knife-cut layers [24]. This is hardly compatible with high-throughput manufacturing at microscale resolution and poses also the challenge of accurate cutting and multilayer alignment at each lamination layer [24]. Another approach for lithographic 3D microfabrication is "inclined UV lithography" or "multidirectional UV lithography" [26] that could be used alternatively or complementary to multilayer patterning but requires suitable lithography tools.
In contrast to these issues, DFPs show several general advantages for the manufacturing of 3D-microstructures, biomedical systems, or in general for MEMS that make them superior if compared to conventional spin-on photoresists, such as the epoxy-based photoresist SU-8, namely:
• Defined and homogeneous film thickness and material properties over large areas as well as high planarity
• Possibility to deposit thick films (e.g. commercial epoxy-based DFP SUEX® (DJ MicroLaminates, Inc.): up to 1000 µm)
• Low outgassing of potentially harmful solvents
• No soft-bake required
• Relatively low costs (naturally depending on the overall system design)
• Overall facile and rapid processing
• High suitability for multilayer assembly
The unique possibilities for microsystems assembly were already shown for some applications, using self-made and commercial DFPs, and comprise microfluidic channels [27,28], MEMS packaging, switches [29], sandblasting and etch masks [30], and electroplating moulds [31]. Johnson et al. and Lemke et al. achieved high aspect ratios of up to 40 using SUEX DFP [32,33] and up to 100 using the commercial DFP mr-X (MRT GmbH) [34]. Structures made of the acrylic DFP product Ordyl SY (Elga Europe s.r.l.) were used as spacers, bonding materials, and as lithography alignment features within a complex fluidic valve that involved two layers of wafers and a Peltier cooler [35]. Wangler et al. [36] investigated the effect of lamination temperature, roller speed, and pressure in a double-layer lamination process for the commercially available, hydrophilic, and epoxy-based DFP TMMF (Tokyo Ohka Kogyo Co. Ltd.), and they created ideally covered microchannels of up to 2 mm self-supporting width. Other groups studied multilayer lithography of the commercial epoxy-based DFPs DF-1000 (EMS Nagase Group) or PerMX (Dupont) combined with conventional SU-8 [37,38] and assembled for instance a unique fluidic "coil" with three separated fluids [38]. Finally, also microarray print heads involving multilayer TMMF DFP lithography were shown, with the remark that DFPs can drastically reduce the costs and lead time of MEMS production.
Despite their versatile use, various DFPs have already disappeared from the market comprising PerMX, Shipley 5038 (Dow Chemical Company), Etertec HT/HQ (Eternal Chemical Co. Ltd.), ME1050 (Hitachi Chemical Co. Ltd.) or mr-X. However, six different DFPs are currently commercially available as listed in the appendix. These commercial products differ significantly with respect to their original material base (e.g. acrylic or epoxy), the handling, the available film thickness range, the optical properties, and the selling price.
One major drawback of DFPs, compared to the vastly exploited spin-on epoxy resist SU-8 (Microchem Corp.), is the limited experience and knowledge of the overall reproducibility, long-term effects as well as the stability of technological process parameters, which is required to make this technology ready for a broader industrial use [39]. Suspended DFPs within multilayer stacks are prone to so-called sagging based on the geometrical design and the DFP properties. This issue is well known from other techniques like soft nano-imprint lithography [40]. Hence, a proper design and optimization of the process parameters (i.e. temperature, pressure and speed) with respect to the intended design is required.
To facilitate the usage of DFPs, we discuss here possibilities of a so far hardly investigated material system, namely the epoxy-based DFP ADEX. We present several microfabrication approaches that, compared to previously shown SU-8 based strategies [11], allow access to further application scenarios or that are significantly more facile than previous strategies. We will discuss them by increasing importance of the DFP within the device fabrication strategy. Hence, we present at first a strategy (previously published in [41]) for the direct microtechnological modification of freestanding silicon nitride cantilever beams, which are frequently used for (bio-)AFM [42] and for microcantilever-assisted biosensors [43]. DFP ADEX is used in this case as a photoresist material. Second, we demonstrate the use of DFP ADEX for the manufacturing of microfluidic channel walls and lids. Previously required process steps, such as gluing, thermal fusion bonding, or ultrasonic welding of a lid onto the top of the channel wall, become obsolete. Last, we introduce a lamination-free implementation of DFPs as substrates themselves that allows patterning of DFP ADEX via direct laser lithography, which is beneficial for prospective polymeric lab-on-chip, microprobe, and mesh electronic devices. We show and discuss, furthermore, in this context the creation of metal and polymeric structures on top of ADEX as well as the direct burn-in of alignment markers to support multilayer processes. In summary of these techniques, we demonstrate the integration of bottom-up grown kinked silicon nanowires in a frequently discussed biosensor field-effect transistor configuration [10,11,44] on top of DFP ADEX. Making several previously required process steps and materials obsolete, our DFP strategy enhances the overall fabrication efficiency, while lowering the overall environmental impact. Handling of SUEX and ADEX DFPs was observed to differ slightly. Due to SUEX's brittleness at room temperature, the so-called thick film sheets (thickness > 100 µm) were more difficult to handle than the flexible ADEX thin films (thickness ≤ 75 µm). Therefore, we focus here on ADEX rather than SUEX. So-called SUEX thin dry film sheets that appeared recently on the market were hardly included in this study.
Materials and Methods
DFPs ADEX and SUEX were bought from DJ MicroLaminates Corporation. All process chemicals were purchased from MicroChemicals GmbH.
Direct lamination of ADEX films onto the substrates was performed with a hot roll laminator (LMG Sky 335 R6) after the removal of the polypropylene (PP) bottom cover film using 60-65 °C at the minimal velocity of ca. 6 mm/s. The mechanical lamination pressure could not be controlled. To achieve an overall uniform coating and to avoid bubble formation, an additional bake at 65 °C for 5 min was added.
If not specified otherwise, lithographic exposure was performed by a UV laser lithography tool (µPG101, Heidelberg Instruments, 375 nm, writing speed 5 mm²/min, focal length 4 mm) after removal of the top protective liner. After exposure, a post-exposure bake was done at 95 °C for 5-10 min on a hotplate. Finally, the resist was developed either in propylene glycol methyl ether acetate (PGMEA) or in cyclohexanone according to the manufacturer specifications, with development times ranging from 2 to 20 min depending on the utilized resist thickness. Detailed studies of the fabrication parameters for lamination-free implementation of ADEX and SUEX are attached in the supplementary materials, Sect. 3.
Photoresist AZ 5214 E (used in positive tone) was spincoated at 4000 rpm for 1 min, dried at ambient conditions for 4 h, exposed at 0.65 mW effective laser power, and developed with the alkaline developer AZ MIF-726. Photoresist SU-8 (2000.5) was spin-coated at 2000 rpm, dried overnight under ambient conditions, exposed at a laser power of 1.3 mW, post-exposure baked at 95 °C, and developed in cyclohexanone for 1 min.
Gold metal structures were deposited with a Leybold L560 thermal evaporator (5 nm Ti + 100 nm Au) with a thin layer of titanium for improved adhesion on ADEX. Transmission spectra were measured with a dispersive spectrophotometer (Shimadzu UV-3101PC, controlled by UVProbe 2.33 software package). The spectrometer had a resolution of 0.1 nm. The combination of deuterium (D2) and the tungsten-halogen (WI) lamps enabled a spectral range between 190 and 800 nm.
Results and Discussion
One major limitation of conventional spin-on resists, such as SU-8, is their restriction to mainly planar surfaces if a homogeneous resist coverage shall be achieved. At edges or on 3D features, but also on larger planar surfaces, the resist thickness tends to be inhomogeneous, making it difficult to ensure a reliable and reproducible fabrication quality. With regard to this, DFPs provide the desirable feature that they can be laminated onto a substrate. Thus, homogeneous coatings across large areas and 3D patterned surfaces are possible. Another significant advantage of laminated DFPs, compared to spin-on resists, is the elimination of the so-called softbake processing step, which is known to frequently cause the formation of resist cracks.
The schematic illustration in Fig. 1a depicts the conventional implementation of DFPs as a single layer photoresist on a planar substrate. DFPs are laminated onto the substrate, locally exposed by UV light using a lithographic mask or laser lithography, and finally, the DFP is developed. This strategy for microfabrication of planar patterns allows reaching high aspect ratios up to at least 15, as the 75 µm thick lamellar structures in Fig. 1b demonstrate.
Instead of crack formation during the softbake, DFPs can suffer from bubble formation at the interface between the DFP and the substrate due to air trapping occurring during the lamination process. Bubble-free DFP coatings are achievable, partly supported by the aforementioned additional bake at 65 °C for 5 min, but the risk for bubble trapping increases in particular at edges within a ca. 500 µm wide surrounding region (Fig. 1c) and obviously for spacious laminations simply due to statistics. This issue must be addressed in future for instance by developing and employing more advanced lamination devices.
As a certain light absorption is required to activate the photoinitiators and to expose the photoresist, the transmission spectra of an unexposed resist can give valuable insights into the lithographically usable optical wavelength range to expand the overall DFP usability and to enable multi-wavelength and grayscale exposure [45,46]. Hence, we measured the transmission spectra of two non-exposed ADEX foils of 20 µm and 50 µm thickness, laminated on amorphous silicon oxide (SiO2) chips, using a dispersive spectrophotometer. Figure 2 shows the transmission spectra and the extinction coefficient for the wavelength range that is commonly used for optical lithography. Based on these measurements, DFP ADEX can be structured within the wavelength range from violet to ultraviolet (340-425 nm). Above a wavelength of about 450 nm, the unexposed resist is almost transparent with a transmission of larger than 96%. Compared to SU-8, ADEX shows a slightly lower optical transmission, but an overall similar spectral behavior [45].
According to Beer-Lambert's law of attenuation of light through a medium at negligible reflection, the optical intensity in the medium decays exponentially (see Fig. 2, inset). This results in a reduction of the optical dose with increasing distance from the resist surface. Therefore, higher lithography wavelengths, e.g. the h-line (405 nm), are expected to be suited to fully expose ADEX-layers up to mm thickness and make ADEX DFP also well-compatible with conventional mask-based optical lithography.
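The depth dependence of the exposure dose follows directly from this attenuation law. The sketch below illustrates, under stated assumptions, how an extinction coefficient could be estimated from a measured transmission of a film of known thickness, T = exp(-αd), and how the relative dose then decays with depth; the transmission value used is a placeholder rather than a value read from Fig. 2, and reflection losses are neglected, so this is not the authors' analysis script.

```python
# Illustrative sketch of Beer-Lambert attenuation in a resist film.
# The transmission value is an assumed placeholder, not measured data.
import numpy as np

def extinction_coefficient(transmission, thickness_um):
    """alpha [1/um] from T = exp(-alpha * d), neglecting reflection losses."""
    return -np.log(transmission) / thickness_um

def relative_dose(alpha_per_um, depth_um):
    """Fraction of the surface dose remaining at a given depth in the resist."""
    return np.exp(-alpha_per_um * depth_um)

alpha = extinction_coefficient(transmission=0.5, thickness_um=50.0)  # assumed T = 50 % for a 50 um film
for z in (0.0, 25.0, 50.0, 75.0):
    print(f"depth {z:5.1f} um -> relative dose {relative_dose(alpha, z):.2f}")
```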
The benefit from using DFPs becomes apparent, when precise polymeric structures shall be fabricated on 3D patterned or even freestanding substrates. In a first study (see [41] for more details), we demonstrated that DFPs can be directly laminated onto freestanding silicon nitride microcantilevers yielding a homogeneous resist coating that is not possible with conventional spin-coating of liquid resists. This enabled the direct lithographic patterning of microcantilever beams. Figure 3 shows some examples of silicon-nitride microcantilever beams that were modified by DFP-lithography. To demonstrate the versatility of this technique, metal structures were created by lift-off (Fig. 3a), holes were created by dry etching (Fig. 3b), and polymeric pillars were made directly from DFPs (Fig. 3c). Such DFP pillar probes were already successfully used within AFM to investigate the elastic modulus of soft biological cells that could otherwise be damaged by conventional pyramidal-shaped cantilever tips [41]. Notably, even commercial microcantilevers already equipped with an AFM scanning tip could be successfully modified in this manner without any recognizable deterioration of the scanning tip. This DFP process is facile and is merely based on lamination and lithography, and therefore, well transferable to the industrial scale. Compared to an alternative resist deposition by socalled spray coating, direct and localized lamination of the DFP at exactly the required area minimizes, furthermore, the need for polymeric resources and the amount of waste. As we showed recently, microcantilevers can be also fabricated from DFP ADEX [47].
Multilayer DFPs for Microfluidic Channels
As demonstrated, DFPs can be readily patterned via scalable optical lithography. The use of DFPs is, furthermore, intrinsically not limited to single-layer patterning. Sequential lamination and lithographic patterning of multiple DFP layers (see scheme in Fig. 4a) enables in principle to create microfluidic channels as shown before [35]. Notably, DFP methods for microfluidic channel assembly seem still to be rather unknown to the microfluidic community. Changing this would foster the continuation of the rapid spread of microfluidics [24]. At this point one must admit that still further research is required aside from pure manufacturing aspects to study physical, chemical, and biological properties further to optimize them for improved quality and reliability. Figure 4b, c, d show some examples that demonstrate ADEX structures for the assembly of microfluidic channels. For the fabrication of microfluidic channels (Fig. 4b), the 37.5 µm thick channel walls were created at first from a 75 µm thick DFP that was laminated onto the substrate (about 6 mm/s, 65 °C), exposed via laser lithography (2× exposed at 0.25 W, 1.25 mm²/min), post-exposure baked, and developed (20 min in cyclohexanone). Subsequently, the 5 µm thick lid structures were fabricated in an analogous manner without any adjustments with regard to the DFP thickness (details in Sect. 2). In order to emphasize the versatility of this fabrication approach, 5 µm thick roof structures were created with identical parameters, supported by 75 µm high 200 × 200 µm² pillars (Fig. 4c).
These structures were all manufactured from DFP ADEX as illustrated in Fig. 4a. It is hardly possible to realize such wall/lid-architectures yielding hollow channels and cavities reproducibly with homogeneous resist thickness by employing conventional spin-on resists such as SU-8. The channels would readily fill up with the liquid photoresist. Beyond the successful proof-of-principle, the examples in Fig. 4 as well provide information about the limitations and persisting challenges of microfluidic fabrication with this strategy. First, the mechanical properties of ADEX DFP set some geometrical limitations for the overall channel design. The critical dimension (CD) of the patterning scale and the aspect ratio depend on the overall utilized lithographic technique as well as on the thickness of the DFP (according to the product data sheet: 2 µm resolution for 5 µm thick DFP ADEX, 7 µm resolution for 50 µm thickness [48]). Wall thicknesses down to a CD of 5 µm and aspect ratios up to at least 15:1 (Fig. 3b) were realized for channel walls being 75 µm in height, which is a higher value for the aspect ratio than the ratio of 50:7 that is stated in the product data sheet [48]. The CD for the lid thickness was in our study merely limited by the thickness of the DFP due to a minimum available ADEX DFP thickness of 5 µm. Moreover, the bridgeable wall distance sets a critical constraint. Figure 4d shows a 100 µm high SU-8 wall that is covered by a 15 µm thick ADEX lid (fabrication parameters analogous to Fig. 4b, c). Obviously, the lid is able to span across the distance without any noticeable bending for an intended channel width up to roughly 150 µm. If the bridged distance exceeds 200 µm, the lid increasingly tends to so-called sagging as the channel width grows. At a bridged distance of 500 µm, the maximum lid sagging occurring at its center is equal to the wall height of 100 µm for the here utilized 15 µm thick ADEX film. As shown in Fig. 4c, the 5 µm thick lid bridges a distance of 200 µm without any recognizable sign of sagging. Hence, the achieved aspect ratio without recognizable sagging for the lid was about 40:1. As shown and discussed for another DFP material by Wangler et al. [36], an increased lamination roller gap, i.e. a reduced lamination pressure, and a reduced lamination temperature (possible range for ADEX between 50 and 70 °C [48]) are potential approaches to reduce the impact of sagging for DFP suspended structures. A clear dependence of the degree of sagging on the ADEX thickness used cannot be derived from the present set of experiments. Overall, the double-layer structures presented in Fig. 4 demonstrate a facile and rapid manufacturing of 3D-structures by means of multiple ADEX layers but do not set a limit. DFPs represent already a central component in the shown DFP-based microfluidic channel assembly, but there is still the need for a channel carrier substrate like silicon, glass, or polymeric foils, such as polyimide [49,50]. Therefore, the question arises whether or not these carrier substrates can be exchanged by patterned DFPs.
Lamination-Free Implementation of DFPs
Free-standing polymeric MEMS architectures have so far mainly been realized by spin-coating SU-8 onto a so-called sacrificial layer that is finally removed to release the functional microstructure. This sacrificial layer technique is well established and was, for instance, also used in the aforementioned reference [11]. Such a sacrificial layer approach can in principle also be applied to DFPs. However, introducing a sacrificial layer or substrate increases the process complexity, time consumption, and costs, as well as the environmental impact. Hence, we propose here to employ DFPs directly as flexible and photostructurable substrate materials, which makes further substrate materials and sacrificial layers obsolete.
In this regard, as-purchased DFPs of both ADEX and SUEX were directly exposed via UV laser lithography without any lamination to a substrate; the DFPs were, however, still attached to their respective bottom PET liner (scheme in Fig. 5a). Detailed studies of the fabrication parameters for both materials are provided in the supplementary materials in Sect. 3. Figure 5e, f show exemplary polymeric grid test structures manufactured in this manner from 75 µm thick ADEX and 250 µm thick SUEX, respectively. This demonstrates that direct lithography is possible with ADEX and SUEX DFPs. It is remarkable that even the thickest available SUEX DFP, with 250 µm thickness, can be exposed in this manner. Figure 5e, f furthermore illustrate the optical appearance of these materials after exposure. Both materials keep a certain transparency. In contrast to SUEX, which is almost colorless to slightly yellowish, ADEX turns reddish after exposure and development. The structures also remain flexible after exposure (Fig. 5c, d). The lithography laser is focused on the resist surface; the beam, however, spreads with increasing distance from the surface, i.e. deeper into the resist layer. Thicker resists therefore lead to an increased structure width at the bottom surface compared to the feature top. Scattering of the UV light can also increase the size of the exposed patterns with increasing depth in the DFP. These effects are reasons for the resolution limitation in thicker samples and are expected to depend on the DFP material, thickness, and exposure method. For the experiments, ADEX was exposed using a laser power of 3.6 mW, post-exposure baked, and developed.
Another significant benefit that comes along with this microfabrication approach, aside from a reduction of process steps and the associated environmental impact, is the ability to tailor the mechanical compliance of the film. This feature has so far hardly been considered. However, it enables further degrees of freedom for future polymer-based MEMS architectures. First, the rigidity can be modified directly by adjusting the hardbake temperature applied after DFP exposure, here in the range of 150-200 °C for 1-2 h. Second, the rigidity can be modified by patterning of the DFP substrate, as shown in Fig. 5, or by means of grayscale lithography, which also makes it possible to alter the mechanical properties based on the locally applied exposure dose.
Lamination-Free DFP Architectures
Lamination-free exposure of DFPs would in principle allow to create microfluidic channels and to embed various transducers from single devices to the aforementioned polymer mesh electronics [13]. We focus here in this regard to three crucial aspects of microdevice fabrication on top of DFPs, namely: creation of (1) alignment markers for multilevel processes, of (2) metal structures, e.g. for electrical contacts, and of (3) polymeric elements, e.g. to create further passive components or passivation layers. The studies were done for 75 µm thick ADEX DFP.
A central aspect of microfabrication is the use of markers that enable precise alignment of patterns within multilevel processes. Alignment markers are frequently created from specifically designed metal deposits that provide a distinct contrast for photolithographic and electron-beam lithography alignment. By employing laser lithography directly on the DFP, we observed that high laser power (e.g. 60 mW) can trigger a burn-in effect in the DFP ADEX that changes the color locally from transparent reddish to black (supplementary materials Sect. 4). By exploiting this effect, alignment marker deposition processes were omitted and markers (e.g. letters, numbers, symbols) were created directly together with the exposure of the respective DFP layer. Figure 6c exemplifies a marker that was directly burned into ADEX DFP (a detailed parameter study is given in the supplementary materials in Fig. 3a). The exposure energy dose is notably almost 100 times higher than the dose used for a regular exposure of ADEX. Hence, a corresponding overexposure around the burned-in markers is inevitable and must be considered in the overall layout, e.g. by placing markers at a sufficient distance (ca. 50-100 µm) from fine patterns. The CD for the burned-in patterns was about 2 µm. However, significant size deviations were observed between burned-in patterns of identical dose. This could be, for instance, a result of varying focus accuracy or surface quality, and might limit the reproducibility with respect to the size of small burned-in patterns. However, it should not alter the location of the center of a marker and, hence, does not pose an issue for alignment accuracy with symmetrical markers. Compared to metallic markers, the advantages gained by eliminating several process steps are obvious. For example, 81 markers, as shown in Fig. 6c, were created on a sample field of 20 × 20 mm² within 5 min. Thus, this approach is a rapid, cheap, and green alternative for marker generation on DFPs that still yields sub-micrometer alignment accuracy, as systematic tests of manual optical alignment confirmed.
Metal structures are an indispensable element within miniature systems and are, for instance, frequently required for electrical contacts to operate electrodes, piezo-elements and piezoresistors, capacitors, field-effect transistors, and further components. Considering our lamination-free DFP approach, metal structures must be manufactured after exposure of the DFP, but prior to the post-exposure bake and development if spin-coated resists, which generally require planar substrates without holes or openings, are to be used (e.g. for lift-off). Notably, longer storage under yellow light (up to several days) between DFP exposure and post-exposure bake did not degrade any of the following processes.
To create metal patterns similar to the ones shown in Fig. 6b (100 nm thick gold patterns with 5 nm titanium as adhesion promoter) on ADEX DFP, the well-known lift-off process for micropatterning was utilized in the following manner, schematically depicted in Fig. 6a. First, an aluminum layer, nominally 20 nm thicker than the intended metal patterns, was deposited on ADEX by thermal evaporation, using resistive heating rather than electron-beam evaporation. Notably, the overall thermal budget of ADEX must be kept as low as possible (e.g. below 28 °C) to avoid structural deterioration of the ADEX; moreover, radiation emitted during electron-beam evaporation can expose ADEX (see supplementary material Sect. 5) and thus destroy pre-exposed patterns. The aluminum film serves as a reflective and non-transparent surface finish underneath the lift-off resist, which is here AZ 5214 E (thickness about 1.4 µm). The resist was spin-coated onto the ADEX/aluminum film substrate and patterned again by UV laser lithography. The alkaline resist developer (AZ 726-MIF) also enables direct aluminum patterning during prolonged resist development. Hence, the resist patterns were directly transferred to the aluminum film, which allows metal deposition, here evaporated gold, in the next step directly on ADEX. Alternatively, a 50% phosphoric acid solution can be used for aluminum patterning as well. Lift-off of the aluminum/resist/gold film was realized in the alkaline developer. This process can be adapted readily to metals that are stable in the alkaline photoresist developer, including gold, silver, platinum, nickel, and copper, but excluding aluminum, chromium, and vanadium [51]. The introduction of a sacrificial aluminum layer and the accompanying processes increase the fabrication complexity. Therefore, in first studies, top antireflective coatings (AZ Exp. Aquaristi III 45, spin-on, MicroChemicals GmbH) were employed as an aluminum replacement between ADEX and AZ 1512 and allowed the creation of metal patterns on lamination-free ADEX.
Aside from metal structures, polymer structures were also successfully made on top of ADEX, as required, for instance, as structural elements or for the passivation of metal contacts. ADEX itself cannot so far be applied onto unsupported ADEX in combination with lamination, since the second DFP layer would deform the substrate layer and obstruct a subsequent lithographic or thermal treatment. Hence, the spin-on photoresist SU-8 2000.5 was used. Post-exposure treatment was done together with the DFP substrate, comprising baking at 95 °C and developing in cyclohexanone for 1 min. Even fine openings of 2 µm diameter could be reliably written in a passivation layer with this facile method (see Fig. 6d). More examples of patterned SU-8 structures on DFP ADEX can be found in the appendix.
Microprobe Assembly
Based on the aforementioned lamination-free direct exposure of ADEX DFPs (Fig. 5b), the presented DFP microtechniques and the microprobe examples presented in the introduction, we developed an approach for the exemplary integration of bottom-up grown kinked silicon nanowires (nanofabrication described in the supplementary materials in Section 1) in a liquid-gate field-effect transistor configuration into a microprobe architecture as depicted in Fig. 7.
It is intrinsic to the bottom-up paradigm that nanowires can be grown in vast numbers, for instance by the vapor-liquid-solid method [52]. However, for the assembly of nanosensors, only single nanowires are typically required. Hence, several techniques have been studied to transfer nanowires from their original growth substrate and to align them on a secondary substrate [53][54][55]. In this study, as-grown silicon nanowires were released from the growth substrate by ultrasonication and dispersed in double-distilled water. The suspended nanowires were subsequently transferred to the DFP by plain drop-casting onto the substrate in proximity to predefined markers. The alignment markers were directly burned into the DFP and enabled the positions of the nanowires to be registered by means of computer-aided image analysis. With plain drop-casting, the nanowires are randomly distributed on the DFP, as is similarly the case for many transferred nanostructures, including graphene and other 2D nanoflakes [56,57]. Therefore, the functional architecture, such as electrodes and passivation layers, must be adapted to the randomly positioned nanowires of interest.
After nanowire deposition and position registration, the DFP was patterned by UV laser lithography such that selected nanowires are situated at the very end of a prospective probe tip, followed by contact metal deposition (by lift-off) and by contact passivation with patterned SU-8 resist. The individual probes were furthermore embedded into a simultaneously patterned DFP net to ease the overall handling. To facilitate the design alignment with respect to the nanowires, we developed software tools (previously published in [58]) that enabled computer-aided read-in of the alignment marker positions combined with semi-automatic device design positioning. Based on these software tools, we achieved an alignment accuracy below 1 µm, confirmed in multilayer alignment tests. Finally, the bottom DFP PET liner was removed and the DFP substrate and the SU-8 passivation were simultaneously post-exposure baked and developed. However, notice that this step is challenging due to significantly different development times between SU-8 2000.5 (ca. 1 min) and ADEX 75 (ca. 20 min). Figure 7 provides a schematic summary of all process steps that represent direct implementations of the aforementioned DFP techniques.
The resulting microprobes and some fabrication stages are shown in Fig. 8. Single probes can be obtained simply by manual knife cutting of the net support structure.
The probe functionality was successfully tested in first experiments by measuring the nanowire resistance as shown in Fig. 8d. The observed symmetrical non-linear current-voltage characteristics should be due to a Schottky barrier at the contact junction [59] originating from the metal-semiconductor contact between the gold metal contacts and the n-doped silicon nanowire.
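To illustrate why back-to-back metal-semiconductor contacts produce a symmetric but non-linear curve, the following toy model (our own illustration, not the authors' analysis) treats the two contacts as identical ideal thermionic-emission Schottky diodes in series, which yields I = I0·tanh(qV/2nkT); real devices typically show additional bias-dependent reverse currents and series resistance:

```python
import numpy as np

# Toy model: two identical back-to-back Schottky contacts in the ideal
# thermionic-emission limit. Solving I = I0*(exp(q*Va/(n*kT)) - 1) for the
# forward contact together with the reverse contact in series gives
# I = I0 * tanh(q*V / (2*n*kT)), which is antisymmetric and non-linear.
q = 1.602e-19          # elementary charge [C]
kT = 1.381e-23 * 300   # thermal energy at 300 K [J]
n = 1.5                # assumed ideality factor (illustrative)
I0 = 1e-9              # assumed saturation current [A] (illustrative)

V = np.linspace(-1.0, 1.0, 201)                 # applied bias [V]
I = I0 * np.tanh(q * V / (2 * n * kT))          # current [A]

# I(-V) = -I(V): symmetric, strongly non-linear, qualitatively resembling Fig. 8d.
print(I[0], I[100], I[-1])
```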
Comparison with SU-8-Based Platforms
As mentioned before, Qing et al. [11] already showed a strategy for the integration of silicon nanowires based on spin-coated SU-8 as a polymeric substrate. Similar to the approach in Fig. 5a, they employed a sacrificial nickel layer and an SU-8 resist on top of a silicon wafer. After creating the intended architecture, including the metal contacts and a passivation layer, this sacrificial layer was completely removed in FeCl3/HCl to create a released, self-supporting polymer structure [11]. A similar strategy was employed to create the aforementioned injectable mesh electronics [13].
Compared to these strategies, the DFP-based approach presented here reduces the number of required material layers by over 50%. A detailed comparison is shown in Table 1. The most significant achievements come from the elimination of a carrier substrate, the replacement of metallic markers, and the elimination of the sacrificial nickel layer, which improves the overall bio-compatibility by avoiding cytotoxic effects of nickel [60]. Even though nickel should be removed during fabrication, remaining traces could still be an issue for biological samples. Similar to SU-8 [61], DFP ADEX also contains potentially cytotoxic antimony compounds. Previous investigations of the likewise epoxy-based SU-8, which showed only minimal leaching in PBS [61], should be transferable to ADEX due to its similar or lower content of antimony compounds (below 5%) [62]. Potential issues with these compounds could, furthermore, be avoided by using antimony-free DFPs like SUEX.
Alternative strategies for the release of SU-8 patterns from their substrate, based either on spin-on photoresists from the AZ® product family (MicroChemicals GmbH), which have an alkaline developer, used as a sacrificial layer [63], or on fluorocarbon coatings as "non-sticky" surfaces [64,65], each suffer from severe shortcomings. Implementation of AZ photoresists, on the one hand, sets limitations for further patterning with AZ resists atop the SU-8 structure, as required for the creation of metal structures. The release of SU-8 structures from fluoropolymers by mechanical forces, on the other hand, is always a trade-off between the adhesion quality and the yield of the final devices [65]. Another process-related advantage of using DFPs is their homogeneous film thickness. Due to the high viscosity of SU-8 resists, it is challenging to achieve a homogeneously thick layer via spin-coating.
Fig. 7 (caption): Schematic illustration of the microprobe assembly process using kinked silicon nanowires as charge transducers in a field-effect transistor configuration: a photosensitive polymeric DFP ADEX with position markers directly burned into the material with a dose of 60 mW; b randomly deposited kinked silicon nanowires near the position markers; c exposure of the polymeric substrate, shaping the tip-shaped sensor as well as the network that stabilizes and connects the sensors; d manufacturing of metal contacts in a lift-off process; e passivation of metal contacts with an exposed SU-8 coating; f development of the ADEX substrate together with the SU-8 passivation layer.
It is finally worth mentioning again that the presented DFP strategies, mainly for ADEX, are in principle transferable to many other DFPs and naturally to other microsystem designs, too. We believe that we can inspire other researchers to study DFPs further within the frame of green microfabrication.
Fig. 8 (caption): Electrical integration of silicon nanowires into microprobes: a a representative microprobe made from ADEX DFP; the lift-off quality can still be further improved and is mainly obstructed by the aforementioned issue of inadequate temperature control during thermal evaporation; b initial ADEX substrate with an alignment marker burned in via laser lithography and a transferred kinked silicon nanowire; c integrated kinked silicon nanowire with metal contacts; d electrical resistance measurement of a silicon nanowire integrated on a 75 µm ADEX DFP, indicating Schottky-contact behavior.
Conclusion
The dry film photoresist (DFP) ADEX, and in part also SUEX, was studied for use in the rapid and green microassembly of prospective biomedical microdevices. As a polymeric DFP photoresist, ADEX can be structured by UV lithography without using harsh, toxic, and highly climate-active etchants, such as hydrogen fluoride and sulfur hexafluoride, that are typically used for conventional silicon and glass patterning. We demonstrated three exemplary microfabrication approaches with ADEX DFP. First, the DFP-based post-modification of freestanding silicon nitride microcantilevers by dry etching, by localized metal deposition, and by creation of functional polymeric structures. Second, the facile assembly of microfluidic channels. Third, the lamination-free use of ADEX via direct exposure by laser lithography. In this regard, we demonstrated the direct burn-in of alignment markers for multilevel lithography, the deposition of metal structures by using a lift-off approach, and the creation of polymeric patterns on top of the DFP from SU-8. Finally, we exemplified a device fabrication based on the assembly of an ADEX-based microprobe with an embedded kinked silicon nanowire in a field-effect transistor configuration. Similar probes might be usable, for instance, for future in-vitro cell studies. The presented DFP-based microfabrication strategies are in principle transferable to other DFP systems and device designs and thus foster microdevice assembly in this diverse field as it moves towards greener microfabrication.
Current and future cosmological impact of microwave background temperature measurements
The redshift dependence of the cosmic microwave background temperature, $T(z)=T_0(1+z)$, is a key prediction of standard cosmology, but this relation is violated in many extensions thereof. Current astrophysical facilities can probe it in the redshift range $0\le z\le 6.34$. We extend recent work by Gelo et al. (2022), which showed that for several classes of models (all of which aim to provide alternative mechanisms for the recent acceleration of the universe) the constraining power of these measurements is comparable to that of other background cosmology probes. Specifically, we discuss constraints on two classes of models not considered in the earlier work: a model with torsion and a recently proposed phenomenological dynamical dark energy model which can be thought of as a varying speed of light model. Moreover, for both these models and also for those in the earlier work, we discuss how current constraints may be improved by next-generation ground and space astrophysical facilities. Overall, we conclude that these measurements have a significant cosmological impact, mainly because they often constrain combinations of model parameters that are orthogonal, in the relevant parameter space, to those of other probes.
Introduction
One of the best-known properties of our universe is the present-day value of the cosmic microwave background (CMB) temperature, which is T_0 = 2.7255 ± 0.0006 K [1]. Considerably less known is the fact that, assuming that the CMB spectrum was originally a black-body, that the expansion of the Universe is adiabatic, and that the photon number is conserved, the CMB temperature will evolve as T(z) = T_0(1 + z). While this holds true in the standard cosmological model, it does not in many extensions thereof, and several possible physical mechanisms can be responsible for deviations from it [2,3]. Observational evidence for deviations from this relation would constitute evidence for new physics, which would need to be subsequently characterized by additional observational and experimental probes. This provides a motivation for measurements of this temperature at the broadest possible range of redshifts.
At present, such measurements can be obtained using two different observational techniques. At lower redshifts (typically z < 1), one relies on the thermal Sunyaev-Zel'dovich (SZ) effect, while in the approximate range 1 < z < 3 one can rely on high-resolution spectroscopy of suitable molecular or atomic species whose energy levels can be excited by CMB photons. After several decades of upper limits, the first such measurement was obtained by Srianand et al. [4], and more than two further decades of effort on this subject have led to a set of several tens of measurements, with a redshift range which has recently been enlarged to reach z ∼ 6.34 [5].
While these measurements have occasionally been used to constrain specific models, a recent work [6] has provided a first comparative assessment of the cosmological constraining power of this data, specifically comparing it to that of other commonly used low-redshift background data sets. That work focused on three different model classes: the decaying dark energy model of Jetzer et al. [7], the scale invariance model of Canuto et al. [8], and the recently proposed fractional cosmology model [9]. The main result emerging from that analysis is that, broadly speaking, the constraining power of the CMB temperature measurements is comparable to that of other low-redshift cosmological data sets, with the caveat that there are model dependencies, ranging from cases where these measurements provide a 10% to 20% improvement in constraints to those where they rule out otherwise viable models.
For our present purposes, all models beyond the canonical ΛCDM can be divided into two main classes. The first class comprises models which are genuine parametric extensions of ΛCDM, in the sense that the model reduces to ΛCDM for some particular choice of its parameters. Sometimes one of these parameters is (or can be rephrased as) a dark energy equation of state, and this parameter tends to also impact the temperature-redshift relation. The second class comprises models which, nominally at least, have no parametric ΛCDM limit, relying instead on a different physical mechanism to account for the recent acceleration of the universe. However, they can also be turned into parametric extensions of ΛCDM by allowing for the possibility of a non-zero cosmological constant. If so, these models will contain two possible mechanisms for acceleration, which may still be observationally distinguishable, although they may well be strongly degenerate. For models in this second class the case without a cosmological constant is typically ruled out by observations, so only the case with a cosmological constant is worth considering. Examples of this can be seen in [6,10], and we will encounter additional ones in what follows.
Here we extend the work of Gelo et al. [6] in two different ways. Firstly, we carry out analogous studies for two other model classes, which stem from physical motivations differing from those of the three previously studied ones: a model including torsion [11,12] and a recent phenomenological dynamical dark energy model [13], which can be thought of as a varying speed of light model and turns out to have various similarities with some of the other models under consideration. Secondly, for all five models, we go beyond currently available data and discuss how current constraints may be improved by hypothetical next-generation ground and space astrophysical facilities providing, among other things, improved measurements of the CMB temperature. Our analysis leads us to expect that these measurements will continue to have a significant cosmological impact in the next decade. Throughout this work we use units with c = ℏ = 1.
Current and future data
In this section we briefly describe the current (publicly available) background cosmology data, as well as the analogous future (simulated) data used in what follows. In the subsequent sections this data will be used in standard likelihood analyses, as described e.g. in [14], for several different cosmological models.
The likelihood is defined as L(p) ∝ exp[−χ²(p)/2], where p symbolically denotes the free parameters in the model being considered. The chi-square for a relevant redshift-dependent quantity E(z) has the explicit form χ²(p) = Σ_ij [E_obs(z_i) − E_mod(z_i, p)] (C⁻¹)_ij [E_obs(z_j) − E_mod(z_j, p)], where the obs and mod subscripts denote observations and model respectively, and C is the covariance matrix of the dataset (which may, in some cases, be trivial).
Our analyses are grid-based and, unless otherwise stated, we use uniform priors for the various model parameters, in the plotted ranges. We have tested that these assumptions do not impact our results. Two independent codes have been used (one written in Matlab, the other in Python), validated and verified against each other and against previous results in the literature.
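As a rough illustration of such a grid-based analysis (a minimal sketch with placeholder data, not the actual analysis pipelines mentioned above), the chi-square defined above can be evaluated on a uniform parameter grid for a flat ΛCDM-like E(z):

```python
import numpy as np

# Minimal sketch of a grid-based likelihood analysis (illustrative only).
# Model: flat LCDM dimensionless Hubble parameter E(z) = sqrt(Om*(1+z)^3 + 1 - Om).
def E_lcdm(z, Om):
    return np.sqrt(Om * (1.0 + z) ** 3 + (1.0 - Om))

# Placeholder "observations" of E(z) with a diagonal covariance matrix.
z_obs = np.array([0.07, 0.20, 0.35, 0.60, 0.80, 1.00])
E_obs = E_lcdm(z_obs, 0.30) + np.random.normal(0, 0.02, z_obs.size)
C_inv = np.diag(np.full(z_obs.size, 1.0 / 0.02 ** 2))

# Uniform prior: a uniform grid over the plotted range of the parameter.
Om_grid = np.linspace(0.1, 0.5, 401)
chi2 = np.empty_like(Om_grid)
for k, Om in enumerate(Om_grid):
    r = E_obs - E_lcdm(z_obs, Om)          # residual vector
    chi2[k] = r @ C_inv @ r                # chi^2 = r^T C^-1 r

like = np.exp(-0.5 * (chi2 - chi2.min()))  # likelihood, normalized to its peak
print(f"best-fit Omega_m on the grid: {Om_grid[np.argmax(like)]:.3f}")
```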
Current data
The existing CMB temperature measurements are listed in Tables 1 and 2 for the reader's convenience. They were also used for cosmological purposes in the aforementioned work [6]. The first of these tables contains the low redshift measurements, coming from the thermal SZ effect. Specifically these come from a set of 815 Planck clusters organized into 18 redshift bins as discussed in [15], as well as a sample of 158 SPT clusters organized in 12 redshift bins as discussed in [16]. We refer the reader to these works for further details on how each of the samples has been acquired and on how the data analysis leading to the quoted measurements was done.
The second table contains the (usually higher redshift) spectroscopic measurements, which are obtained from observations at various wavelengths (from optical to radio) and using several molecular or atomic species. Part of the data comes from a recent compilation [17], which has updated several earlier analyses [18,19,20,21], specifically by accounting for the contribution of collisional excitation in the diffuse interstellar medium to the excitation temperature of the tracer species. The highest redshift measurement currently available (though one providing a relatively weak constraint) is T = 23.1 +7.1/−6.7 K at z ∼ 6.34 [5], but most of them are at considerably lower redshifts, z ∼ 2.5. In passing, we note that spectroscopic methods also provide some additional upper limits on T(z), at comparable redshifts; a recent compilation can be found in [17]. We do not include these upper limits since they would carry negligible statistical weight in our analysis.
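As a quick numerical consistency check (our own arithmetic, not part of the cited analyses), the standard relation predicts T ≈ 20.0 K at z = 6.34, well within one sigma of the measurement quoted above:

```python
# Quick check: standard prediction vs. the highest-redshift measurement,
# T = 23.1 (+7.1/-6.7) K at z = 6.34 [5].
T0 = 2.7255          # K, present-day CMB temperature [1]
z = 6.34
T_pred = T0 * (1.0 + z)
T_meas, sig_low = 23.1, 6.7
print(f"T_pred = {T_pred:.1f} K")                        # ~20.0 K
print(f"offset = {(T_meas - T_pred)/sig_low:.2f} sigma")  # well below 1 sigma
```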
Since, in our context, the CMB temperature measurements are effectively background cosmology data, our assessment of their constraining power will be done against two other background cosmology data sets. The first is the Pantheon compilation [26,27]. This is originally a data set of 1048 supernovae, subsequently compressed into 6 correlated measurements of E⁻¹(z) (where E(z) = H(z)/H_0 is the dimensionless Hubble parameter) in the redshift range 0.07 < z < 1.5. Strictly speaking, this compression also relies on 15 Type Ia supernovae from two Multi-Cycle Treasury programs, and it assumes a spatially flat universe; the compression methodology and validation are detailed in Section 3 of [27]. The second is the compilation of 38 Hubble parameter measurements of Farooq et al. [28]: this is a more heterogeneous set that includes both data from cosmic chronometers and from baryon acoustic oscillations.
Both of these are canonical data sets and have been extensively used in the literature, including [6]. We retain them in the present work, to enable a fair comparison to the earlier one. Together, the two cosmological data sets contain measurements up to redshift z ∼ 2.36 (thus a redshift range comparable to that of the CMB measurements), and when using the two in combination we will refer to this as the cosmological data.
Note that in all our analyses the Hubble constant is never used as a free parameter; instead it is always analytically marginalized, following the procedure described in [29], so our results do not depend on possible choices of this parameter. Since one can trivially write H(z) = H_0 E(z), one notices that H_0 is purely a multiplicative constant, and can be analytically integrated in the likelihood.
Towards this end one computes a set of auxiliary quantities built from the observed Hubble parameter values, their uncertainties σ_i, and the model prediction E(z); the marginalized chi-square is then obtained from these quantities through an expression involving the Gauss error function Erf and the natural logarithm ln (the explicit formulae can be found in [29]). The same remark holds for the present-day value of the CMB temperature, T_0, which is also analytically marginalized. In passing, we note that although measurements of the CMB temperature cannot, per se, address the Hubble tension, they may indirectly help shed some light on it by improving constraints on cosmological model parameters degenerate with H_0.
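Though the paper relies on the analytic marginalization of [29], the effect of marginalizing over H_0 can be illustrated numerically by integrating the likelihood over H_0 with a wide flat prior; a minimal sketch with placeholder data (all values below are assumptions for illustration):

```python
import numpy as np

# Illustrative numerical marginalization over H0 for H(z)-type data.
# Integrating the likelihood over H0 with a wide flat prior is, in practice,
# equivalent to the analytic expressions of Ref. [29].
def E_lcdm(z, Om):
    return np.sqrt(Om * (1 + z) ** 3 + 1 - Om)

z_obs = np.array([0.2, 0.5, 1.0, 1.5, 2.0])          # placeholder redshifts
H_obs = 70.0 * E_lcdm(z_obs, 0.3)                     # placeholder "measurements"
sigma = np.full_like(H_obs, 5.0)                      # assumed km/s/Mpc errors

H0_grid = np.linspace(50.0, 90.0, 801)                # flat prior range for H0
dH0 = H0_grid[1] - H0_grid[0]

def chi2_marginalized(Om):
    # chi^2(Om, H0) on the H0 grid, then marginalize H0 by direct integration
    chi2 = np.array([np.sum(((H_obs - H0 * E_lcdm(z_obs, Om)) / sigma) ** 2)
                     for H0 in H0_grid])
    likelihood = np.sum(np.exp(-0.5 * chi2)) * dH0
    return -2.0 * np.log(likelihood)

Om_grid = np.linspace(0.1, 0.5, 201)
vals = np.array([chi2_marginalized(Om) for Om in Om_grid])
print("best-fit Omega_m after H0 marginalization:", Om_grid[np.argmin(vals)])
```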
Simulated future data
In this work we also present forecasts for future constraints on the various models under consideration. For this we will replace the data discussed in the previous subsection by three analogous simulated data sets, which represent (in an admittedly simplified way) the expected performance of several next-generation astrophysical facilities, some under construction, others merely proposed. We will proceed by generating mock data, always including Gaussian noise, and using the same analysis pipelines that provide the current constraints to obtain the forecast constraints.
For generating all this mock data, a flat ΛCDM fiducial model, with a matter density Ω_m = 0.3 and a Hubble constant H_0 = 70 km/s/Mpc, is assumed throughout. This choice stems from the fact that there is currently no substantive evidence of deviations from this. One could simulate additional non-ΛCDM cases for each model, but that would make the work much longer, without necessarily providing much additional insight. While the degeneracies between model parameters imply that the expected future parameter constraints will depend on the fiducial values, such dependencies are expected to be small, since any plausible models must be (in a loose sense) close to ΛCDM in parameter space, at least at low redshifts.
Starting with the CMB temperature and the SZ measurements, the data in Table 1 is replaced by an assumed sample of high signal-to-noise clusters from a next-generation space mission such as CORE [30]. For simplicity we assume them to be uniformly spaced in 20 bins in the redshift range z ∈ [0.05, 1.00], with an absolute uncertainty in the temperature ranging from 0.005 K to 0.05 K at the lower and higher redshifts respectively.
As for the spectroscopic measurements in Table 2, considering the difficulty of finding additional lines of sight where these measurements can be made, we conservatively assume a data set consisting of the 13 absorption systems for which the measurements are done in the optical/UV (that is, all except the ones at the lowest and highest redshift), but with an improved absolute temperature uncertainty of 0.1 K, as one may expect with the ANDES high-resolution spectrograph at the ELT, formerly known as ELT-HIRES [31,32]. For context, we note that one can see in Table 2 that the current best constraints of this type have uncertainties of about 0.7 K, and that measurements with ESPRESSO, which recently became operational at the VLT, are expected to reach sensitivities of ca. 0.4 to 0.5 K.
Regarding cosmological data, as a replacement for the Pantheon compilation we assume a future data set of supernova measurements from the proposed Roman Space Telescope, formerly known as WFIRST. The preliminary study of these authors [27,33] reports the following percent errors on the dimensionless Hubble parameter E(z): σ_e = 1.3, 1.1, 1.5, 1.5, 2.0, 2.3, 2.6, 3.4, 8.9 for the nine redshift bins centered at redshifts z = 0.07, 0.20, 0.35, 0.60, 0.80, 1.00, 1.30, 1.70, 2.50, respectively. While these measurements are not fully independent, the authors also state that pairwise correlations between the measurements are small (and are not explicitly provided); for simplicity we therefore ignore them in the analysis that follows. We further note that their simulated data set was also obtained under the assumption of a flat universe, but this is compatible with our choice of fiducial model.
Finally, as replacement for the Hubble parameter measurements compilation of [28], we will take a future set of measurements of the redshift drift [34,35,36], expected to be carried out by ANDES at high redshifts [37,31,32] and by the SKA at lower redshifts [38]. This is a direct, model-independent probe of the expansion of the universe; the redshift drift of an astrophysical object following the Hubble flow is given by [34] Δz = H_0 τ_exp [1 + z − E(z)], where τ_exp is the experiment time span (not to be confused with the on-sky observation time). The actual observable is a spectroscopic velocity which, for subsequent convenience, we express in terms of E(z) and h = H_0/(100 km/s/Mpc) as Δv = k h τ_exp [1 − E(z)/(1 + z)]; here we have introduced the normalization constant k, which has the value k = 3.064 cm/s if τ_exp is expressed in years. For the SKA, following the results in [38], we assume a set of 10 measurements, equally spaced between redshifts z = 0.1 and z = 1.0, with the associated spectroscopic velocity uncertainties equally spaced between 1% and 10% respectively. For ANDES we assume the recently proposed Golden Sample of [39], consisting of seven measurements with spectroscopic velocity uncertainties σ_v = 10.4, 12.5, 11.1, 9.8, 7.8, 11.2, 12.1 at redshifts z = 2.999, 3.240, 3.535, 3.897, 3.960, 4.147, 4.760, respectively.
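A minimal sketch of how such mock redshift-drift data can be generated from the flat ΛCDM fiducial (our own illustration; the assumed 20-year time span and the specific noise realization are placeholders, not values taken from the cited forecasts):

```python
import numpy as np

rng = np.random.default_rng(42)

def E_lcdm(z, Om=0.3):
    return np.sqrt(Om * (1 + z) ** 3 + 1 - Om)

# Spectroscopic velocity drift, dv = k * h * tau * [1 - E(z)/(1+z)],
# with k = 3.064 cm/s when the time span tau is given in years.
def delta_v(z, h=0.7, tau_yr=20.0, Om=0.3):   # tau_yr = 20 is our assumption
    return 3.064 * h * tau_yr * (1 - E_lcdm(z, Om) / (1 + z))   # cm/s

# SKA-like sample: 10 points, z = 0.1..1.0, velocity errors 1%..10% of the signal.
z_ska = np.linspace(0.1, 1.0, 10)
sig_ska = np.linspace(0.01, 0.10, 10) * np.abs(delta_v(z_ska))
mock_ska = delta_v(z_ska) + rng.normal(0, sig_ska)

print(np.round(delta_v(z_ska), 2))   # fiducial signal in cm/s (positive at these low redshifts)
print(np.round(mock_ska, 2))         # one Gaussian-noise realization
```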
Steady-state torsion models
From a mathematical (geometrical) perspective, one of the most natural extensions of Einstein's General Theory of Relativity is the inclusion of space-time torsion [40,41,42]. It is moreover possible to choose the torsion tensor (defined as the antisymmetric part of the affine connection) such that the homogeneity and isotropy of Friedmann-Lemaitre-Robertson-Walker (FLRW) universes are preserved, as shown by [43]. Following suggestions that such models might explain the recent acceleration of the universe [11,12], under the so-called steady-state torsion assumption of a constant fractional contribution of torsion to the volume expansion, constraints on these models have been obtained from the Type Ia supernova and Hubble parameter measurements introduced in Sect. 2.1 [10]. In summary, these show that such models are an example of the second class of models described in the introduction. Although they were originally proposed as genuine alternatives to ΛCDM (in the sense that torsion alone would be expected to yield the current acceleration of the universe), that scenario is observationally ruled out. Instead, if one treats the models as one-parameter extensions of ΛCDM, by allowing for the presence of a cosmological constant, the relative contribution of torsion is constrained to the level of a few percent of the energy density budget. Here we show how these constraints are improved by the addition of CMB temperature measurements.
The general field equations including torsion are known as the Einstein-Cartan equations. In order to obtain homogeneous and isotropic universes, the torsion tensor must depend on a scalar function ϕ which depends only on time but is otherwise arbitrary. Under these assumptions one obtains modified Friedmann, Raychaudhuri and continuity equations, with additional terms involving ϕ (given explicitly in [11,12]); here κ = 8πG, and in what follows we will assume K = 0 and a barotropic fluid with a constant equation of state p = wρ. We note that this is the equation of state of matter, so the canonical case corresponds to w = 0. It is convenient to introduce the canonical present-day fractions of matter and dark energy, Ω_m = κρ_0/(3H_0²) and Ω_Λ = Λ/(3H_0²), and one can analogously define a torsion contribution Ω_ϕ. In steady-state torsion models [11] one further assumes that the relative torsion contribution to the expansion remains constant in time, and one can therefore express observational constraints in terms of the dimensionless model parameter λ, the two parameters being related via Ω_ϕ = −4λ(1 + λ).
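Since the torsion contribution follows directly from λ, a one-line conversion is enough to map constraints between the two parametrizations; for instance, the forecast bound λ ≃ 0.009 discussed below corresponds to |Ω_ϕ| ≃ 0.036:

```python
# Convert the steady-state torsion parameter lambda into the torsion
# density contribution, Omega_phi = -4 * lambda * (1 + lambda).
def omega_phi(lam):
    return -4.0 * lam * (1.0 + lam)

for lam in (0.009, 0.03, -0.03):     # example values, including the forecast bound
    print(f"lambda = {lam:+.3f}  ->  Omega_phi = {omega_phi(lam):+.4f}")
# lambda = +0.009 gives Omega_phi ~ -0.036, i.e. |Omega_phi| < 0.036 at one sigma.
```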
For numerical purposes, it is also convenient to define a dimensionless density, denoted r, via ρ = rρ_0, and to write the Friedmann and continuity equations as functions of redshift; in doing so we make use of the flatness condition (leading to a relation between λ, Ω_m and Ω_Λ which allows us to eliminate the latter), and we also make the quite safe assumptions that λ ≠ −1/2 and Ω_m ≠ 0. Last but not least, we have also neglected the radiation density, since we are dealing with low-redshift data.
Importantly, the CMB temperature data also constrains this model. As shown by [44], in this model the distance duality relation (which relates the luminosity and angular diameter distances) is violated, with a violation that depends on the torsion parameter. Following [45], and assuming adiabatic processes, we therefore expect a corresponding violation of the temperature-redshift relation; note that this violation is independent of the matter density and depends only on λ. It can therefore be used to break the degeneracy between this parameter and the matter density.
Current constraints
The top part of Table 3 summarizes the constraints on the model reported by [10] under various assumptions on the model parameters. These were obtained using the supernova and Hubble parameter data discussed in Sect. 2.1 (which we collectively refer to as cosmological data), sometimes complemented by a Planck-like prior on the matter density, Ω_m = 0.315 ± 0.007 [46]; for all the other parameters, uniform priors were used. We have repeated this analysis, with and without the addition of the CMB temperature data, the latter case being a simple means of validating the code by reproducing the results of the earlier work (although the two codes are not fully independent). The new results are summarized in the middle part of Table 3, and show a discernible effect of the addition of the CMB data.
Figure 1 shows relevant results for the conservative case where matter is assumed to have the standard equation of state, w = 0. Without the Planck prior, the most salient aspect is that there is a degeneracy between the matter density and the dimensionless torsion parameter λ, which allows for a reduced matter density (by comparison to its standard value), compensated by a slightly negative value of λ. The addition of the CMB temperature data strongly restricts this possibility, improving the constraint on λ by more than a factor of two. Naturally, the Planck prior on Ω_m is an alternative way to break this degeneracy (and in that case the constraint on the matter density simply recovers the prior), but even in this case the CMB temperature data provides a slight improvement in the λ constraint.
Figure 2 shows the analogous results for the more general case where matter is allowed to have a non-zero (but constant) equation of state. Qualitatively the results are the same, though one should note that in this case the extended parameter space implies that, in the absence of the Planck prior and the CMB temperature data, the matter density is fully unconstrained. The addition of the CMB data changes this, and enables a non-trivial (though still comparatively weak) constraint to be obtained. The constraints on λ and w are also improved by the CMB data. In the former case this is a direct effect, since the CMB temperature is explicitly dependent on λ; in the latter it is an indirect and therefore weaker effect, due to the partial breaking of degeneracies with the other parameters. We also note that the constraints on w are qualitatively comparable to those in previous analogous works, such as [47], although no direct quantitative comparison is possible since the two works address different types of models.
Forecasts
We now repeat the analysis of the full three-dimensional parameter space of the model, (Ω_m, λ, w), using instead the future simulated data described in Sect. 2.2. Here we also include a Planck-like prior with an uncertainty σ_Ωm = 0.007. We start by noting that [10] already provided some forecasts, though under slightly different assumptions: their future supernova data is the same as ours, but their assumptions on the redshift drift are different (they used no SKA data, and different ELT-ANDES data), and they used no CMB temperature data. With a Planck-level prior on the matter density, their forecast provided the one-sigma uncertainties σ_λ = 0.009 and σ_w = 0.013; the former corresponds to a one-sigma upper bound on the torsion contribution of Ω_ϕ < 0.036. In the present work, in addition to using CMB temperature and SKA redshift drift data, we also update the ELT redshift drift data, relying on the recently proposed Golden Sample of [39].
Figure 3 and the bottom part of Table 3 summarize the results of our new forecast. This confirms the constraining power of the CMB temperature data, which improves the constraints on λ by about a factor of 3 with respect to the cosmological data (without these measurements) and by a factor of 6 with respect to current constraints. One may notice that the posterior distribution of λ for the cosmology data differs from the analogous one in Fig. 2. The reason for this is that the current H(z) dataset is replaced in our forecasts by a set of redshift drift measurements, which are analogous to (but not exactly the same as) H(z) measurements, leading to different sensitivities to the model parameters. For this particular model, the strong degeneracies imply that small values of λ cannot be well constrained by cosmology data alone. One can also note that the corresponding posterior distribution of λ for the current data is quite non-Gaussian; the simulated future data merely enhances this effect.
As expected, the impact on the equation of state w is more modest (about a factor of two), since in this class of models the CMB temperature is not directly sensitive to it. It is also interesting to note that we now obtain a tight constraint on the matter density even without using the Planck prior; indeed this constraint, which improves the current one by about a factor of seven, is comparable to (though slightly larger than) that prior.
Forecasts for previously constrained models
Constraints on three classes of cosmological models for which the standard temperature-redshift relation is violated have recently been provided by [6], using the same background cosmology and CMB data as in our present work. In this section we discuss how these current constraints are improved by the simulated future data, again with the goal of quantifying the gains in sensitivity to be expected from the next generation of astrophysical facilities, and how these gains may depend on the parameter space of each model.
In each of the following three subsections we start with a concise introduction to each model, and then compare the current constraints obtained in [6] with our forecasts. Our description of these models is by no means exhaustive: our goal is merely to highlight the features of each of them (such as the Einstein equations) which are relevant for our analysis and/or for a comparison to other models, and we refer the reader to the original works for additional details on each model. We also note that in this section no Planck prior is used, and that we will again neglect the contribution of the radiation density to the Friedmann equation, since we are only considering low-redshift data.
A decaying dark energy model
This model was developed by Jetzer et al. [7,48], based on earlier work in [49]. It is effectively a decaying dark energy model in which continuous photon creation is possible, which obviously implies a non-standard temperature-redshift relation. In flat FLRW universes and with the previously mentioned assumptions, the Friedmann equation involves a dimensionless model parameter m, which can equivalently be expressed as an effective dark energy equation of state. The model includes a coupling to radiation, which must necessarily be weak [50], as otherwise it would already be ruled out, so at a phenomenological level we may consider it a small perturbation on the standard ΛCDM behaviour. These assumptions lead to a modified temperature-redshift relation, in which we have also relaxed the assumption of a strict radiation fluid by allowing for a radiation-type fluid with a generic (but still assumed constant) equation of state w_r. Clearly, with m = 0 and w_r = 1/3 we recover the ΛCDM case, so this model falls into the first of the model classes discussed in the introduction. Unlike the model in the previous section, here the temperature-redshift relation does depend on the matter density, but this dependency is quite mild; as we will see presently, the dependencies on m and w_r are much stronger.
The analysis of [6] reports, for the case with standard radiation (w_r = 1/3), the one-sigma constraint Ω_m = 0.30 ± 0.02, together with a constraint on m, while if one relaxes the w_r = 1/3 assumption and includes this as a third free parameter, the constraints include w_r = 0.35 ± 0.02.
Overall, there are no statistically significant deviations from the ΛCDM behaviour. A simpler analysis, assuming a fixed matter density, Ω_m = 0.315, can be found in [5].
Figure 4 summarizes our forecast results for the w_r = 1/3 case, while Figure 5 shows the results for the case where w_r is an additional free parameter. In both figures, the blue dashed lines depict the current constraints, obtained by [6]. For the former case we find the one-sigma constraint Ω_m = 0.299 ± 0.002, together with a correspondingly improved constraint on m, while for the latter the constraints include w_r = 0.335 ± 0.002.
This model's parameter space is slightly different from that of the torsion model, but one common point is that the CMB temperature has very little sensitivity to the matter density but high sensitivity to the other parameters, thus breaking the degeneracy between each of them and Ω_m. In this case, our simulated data would significantly improve the constraints on all three parameters with respect to the current ones: by a factor of ten for the matter density and w_r, and by a factor of about five for m. This would also translate into sub-percent level constraints on the effective dark energy equation of state, which are compatible with those expected from contemporary cosmological probes assuming more standard cosmological models.
A scale invariance model
This model was developed by Canuto et al. [8,51] and stems from the notion that, although the effects of scale invariance must vanish in the presence of particles with non-zero rest masses, one may assume that on cosmological scales empty space could still be scale invariant. A mathematical formulation of this idea leads to a bimetric-type theory, with a time-dependent function λ playing the role of a scale transformation factor between the ordinary matter frame and a separate scale invariant frame. The first of these frames can be thought of as the atomic (or physical) frame, while the second is a gravitational frame, in which the ordinary Einstein equations would still hold [51].
Figure 4: Forecast constraints on the decaying dark energy model with w_r = 1/3. The middle panel shows the one, two and three sigma constraints in the two-dimensional Ω_m-m parameter space, and the side panels show the one-dimensional (marginalized) posterior likelihoods for each of the parameters. Green dash-dotted, red dotted and black solid lines depict the cosmological, CMB and combined forecasts for our simulated future data. For comparison, the dashed blue lines show the current constraints, already obtained in [6].
For a flat FLRW universe, with the further simple but natural assumption of a generic power-law function of the form λ(t) = (t/t_0)^p = x^p, where t_0 is the present age of the universe (and we have defined a dimensionless age of the universe, x), the Friedmann equation takes a modified form in which we also introduce a dimensionless parameter Ω_λ = 2/(t_0 H_0). This shows that the present age of the universe is also a free parameter of the model. Note that there is a consistency condition (1 + pΩ_λ/2)² = Ω_m + Ω_Λ, and that setting λ(t_0) = 1 ensures that ΛCDM is recovered for p = 0.
For numerical convenience this can be re-written as a pair of coupled equations in redshift, with the initial condition x = 1 at z = 0. Finally, the temperature-redshift relation in this model depends only on p, but not on the matter density.
As in the torsion case, this is an example of a model in the second of the classes mentioned in the introduction.
Setting Ω_Λ = 0 would make it a genuine alternative to ΛCDM (i.e., there would be no parametric limit which recovers it), but previous work [52] has shown that such a scenario is ruled out. Therefore, we limit ourselves to the case where Ω_Λ ≠ 0, in which the model is a parametric extension of ΛCDM.
This model was constrained using the same current data as we are using [6], under the assumption of a standard equation of state for matter, w = 0, and a uniform prior on Ω_λ corresponding to a present age of the universe from 13.5 to 27 Gyr. The choice of the lower limit corresponds to the age of the oldest identified galaxy, GN-z11 [53], further corroborated by estimates from galaxy clusters [54]. The analysis was also restricted to the physically more plausible case p ≤ 0, in which case the model effectively has a decaying cosmological constant, and therefore shares some qualitative similarities with the model in the previous subsection. With these assumptions [6] obtained a one-sigma constraint on the matter density and a two-sigma lower limit on p. As in the previous model, in this case the CMB temperature data does not significantly constrain the matter density, but it does help to break degeneracies between the model's parameters.
To make the comparison with the constraints obtained in the earlier work reliable, we retain all the above assumptions, which in any case do not significantly impact the results. Figure 6 shows the results of our forecast analysis using the simulated data, which leads to the improved constraints Ω_m = 0.297 ± 0.003 and p > −0.0135.
This improves the matter density constraint by about a factor of six, and the lower limit on the power-law dependence of the scale transformation factor p by about a factor of ten.
The fractional cosmology model
The third model previously studied in [6] is the fractional cosmology model [9], which relies on the mathematical formalism of fractional calculus [55]. Conceptually, the latter can be thought of as a parametric extension of the usual concepts of differentiation and integration. As in the case of the model in the previous subsection, this was originally envisaged as an alternative to ΛCDM, in the sense that it is assumed to contain no cosmological constant. However, [6] have shown that this scenario is ruled out, precisely due to the CMB temperature data: the best-fit values of the model parameters obtained separately from cosmology and from CMB temperature data are mutually incompatible at many standard deviations. Therefore, in what follows we only address the model's most favourable scenario, which includes a cosmological constant and also fixes the present age of the universe to its standard value. For a flat FLRW universe, the Friedmann equation for this model depends on t_0 and x, which have the same definitions as in the previous subsection, and on α, the fractional calculus parameter: in standard calculus one has α = 1. Importantly, since E(z = 0) = 1, the model parameters must satisfy a consistency condition, and the requirement of a positive age of the universe further restricts the allowed parameter range. The mathematical similarities between this model and the one in the previous subsection should be clear, and they also extend to the fact that there is a numerically more convenient expression for the Friedmann equation. Fixing the age of the universe to the standard value, one can write the Friedmann equation and the temperature-redshift relation in forms which reduce to the standard flat ΛCDM behaviour for α = 1. Once again, note that the temperature-redshift relation is only sensitive to the beyond-ΛCDM parameter α.
The earlier analysis, using the data in Sect. 2.1, led to one-sigma constraints on both model parameters (reported in [6]). Figure 7 shows the results of our forecast for this model, for which we obtain the constraints Ω_m = 0.299 ± 0.003 and α = 1.00 ± 0.02.
Qualitatively the behaviour is similar to that of the previous models, though the gains in sensitivity for each of the model parameters are quantitatively different. More specifically, in this case the constraint on the matter density is improved by a factor of six, while that on α is only improved by a factor of two.
A further dynamical dark energy model
We finally consider a dynamical dark energy model recently proposed in [13]. This is a phenomenological model, in the sense that it cannot be derived from an action, but it turns out to also have some similarities with the models discussed in the previous section. The model stems from the possibility, discussed in [56], of a time-varying temporal metric coefficient described by a function f(t) which can be thought of as analogous to the scale factor a(t); therefore it could also be thought of as a particular type of varying speed of light model [57]. It is meant to be an alternative to the standard cosmological model, in the sense that a cosmological constant is not expected. Considering as usual flat FLRW models, and retaining the assumption of [13] on the analytic form of the function, one obtains a modified Friedmann equation; note that here the model parameter α (not to be confused with the fine-structure constant) has dimensions of inverse time. Further assuming fluids with constant equations of state w = p/ρ = const, the continuity equation is correspondingly modified. For a low-redshift universe dominated by ordinary matter (w = 0), and further defining the two dimensionless parameters β = αt_0 and ϵ = H_0 t_0, the Friedmann equation can be written in a numerically convenient form, and the temperature-redshift relation is likewise modified. Note that this model allows for a non-standard age of the universe, determined by the parameter ϵ. Structural phenomenological similarities of this model with the scale invariance and fractional cosmology models discussed in the previous section should be clear: specifically, visual inspection of the Einstein equations in all three cases immediately shows their similar forms. In passing we also note that in this model the standard distance-duality relation is also violated.
Without a cosmological constant
We start with the baseline case without a cosmological constant, and also assume a standard age of the universe, in which case we have a two-dimensional parameter space. Figure 8 shows the results of our analysis. One notices that a degeneracy between the two parameters exists for the cosmological data, while the CMB data is only sensitive to β. Importantly, the constraints from the cosmological and CMB data, which are shown separately, are mutually incompatible, implying that the model is ruled out (and that a combination of the two data sets is not meaningful). Specifically, the matter density preferred by the cosmology data is problematic on its own, being much smaller than the baryon density, and the corresponding constraint on β also conflicts with the one obtained from the CMB data. One may also extend the model, allowing for a non-standard age of the universe. This introduces a third free parameter in the model, which can then be marginalized. As was done for the fractional cosmology model in the previous section, for this age we assume a uniform prior from 13.5 to 27 Gyr. Figure 8 also depicts the results of this analysis, which does not lead to any major changes. The error bars on the model parameters are slightly increased (as expected) but the best-fit values are not significantly changed, and the model is still ruled out. In this case the cosmology data leads to Ω_m = 0.010 +0.009, again much smaller than the baryon density. As for the age of the universe, there is a very mild (and not statistically significant) preference for larger values, but the parameter remains unconstrained.
Including a cosmological constant
Given that the model cannot be a genuine alternative to the standard one, we may ask how it behaves if one allows for the presence of a cosmological constant, which again would make the model a parametric extension of ΛCDM. Clearly this is a purely phenomenological addition, but it is nevertheless interesting to see how much such a model is allowed (by current or future data) to deviate from ΛCDM.
According to Eq. (54), for a fluid with an equation of state w = −1, the density evolution in this model follows accordingly. Including this term, and further assuming flatness, the Friedmann equation can be rewritten to include the Ω Λ contribution. Figure 9 shows our resulting constraints, assuming a standard age for the universe. The two data sets are now mutually consistent, and as expected the results are fully consistent with ΛCDM, as are the one-sigma combined constraints. It is also worthy of note that in this case the cosmology and CMB data have, on their own, comparable constraining power on β, though as usual only the former constrains the matter density.
We have also verified that allowing the age of the universe to be a further independent parameter does not have a significant impact, other than slightly relaxing the constraints on the parameter β. As in the case without a cosmological constant, there is a modest and not statistically significant preference for larger than standard values of the universe's age. Finally, we discuss the constraints on this model expected from the future simulated data. We restrict ourselves to the case with a cosmological constant and the standard age of the universe, for the reasons already explained. These results are presented in Fig. 10, to be compared with the current constraints in Fig. 9.
These forecasts correspond to improvements by factors of about six and five respectively.
Conclusions
We have extended the recent analysis by [6] of the cosmological impact of measurements of the CMB temperature at non-zero redshift, both by studying two additional models, steady-state torsion [11,12] and a specific type of varying speed of light model [13]. Both of these stem from different physical assumptions than the models considered in [6], although the second turns out, in practice, to be fairly similar to earlier models. We also provided forecasts of the gains in constraining power to be expected from next-generation facilities. These gains, which are summarized in Table 4, are clearly model-dependent, but are typically of factors of a few to ten, for each of the model parameters. Still, it is worthy of note that there is always a significant improvement in the constraints on the matter density.
We note that in our current constraints analysis we have chosen to use the same background cosmology datasets as the previous works we build upon.In principle these could be slightly updated, but that would make the comparison to the previous works less easy (and also somewhat less fair), while improvements might only be modest.For example, replacing Pantheon by Pantheon+, the improvement in the final constraints would be of order ten percent or less.
Our analysis confirms the earlier results, showing that there is a wide class of models for which this data has a constraining power that is comparable to that of other low-redshift background cosmology data. At the broad conceptual level, the main reason for this is that constraints from cosmological data exhibit significant degeneracies between model parameters, typically including the matter density and one or more parameters quantifying the model's deviation from the standard ΛCDM behaviour.
In this wide class of models, the temperature-redshift relation is modified and depends predominantly (or, in some cases, even exclusively) on the new model parameters.
The CMB data therefore leads to stringent constraints on such parameters, thereby breaking the aforementioned degeneracies.
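This degeneracy-breaking mechanism can be illustrated with a deliberately simple toy example in Python: the background "cosmology" likelihood constrains only a degenerate combination of the matter density and a beyond-ΛCDM parameter x, while the "CMB temperature" likelihood constrains x alone, so the combination tightens the matter density. All functional forms and numbers below are illustrative and are not the constraints obtained in this work.

import numpy as np

om = np.linspace(0.0, 0.6, 400)
x = np.linspace(-1.0, 1.0, 400)
OM, X = np.meshgrid(om, x, indexing="ij")

# Toy "cosmology" likelihood: a tilted (degenerate) ridge in the (Omega_m, x) plane.
chi2_cosmo = ((OM - 0.3 + 0.5 * X) / 0.05) ** 2
# Toy "CMB temperature" likelihood: sensitive to x only.
chi2_cmb = (X / 0.05) ** 2

post_cosmo = np.exp(-0.5 * chi2_cosmo)
post_comb = np.exp(-0.5 * (chi2_cosmo + chi2_cmb))

def sigma_marginal(post, axis_vals, axis):
    # Crude one-sigma width of the marginalized posterior along one axis.
    p = post.sum(axis=axis)
    p /= p.sum()
    mean = (axis_vals * p).sum()
    return np.sqrt(((axis_vals - mean) ** 2 * p).sum())

print("sigma(Omega_m), cosmology only:", sigma_marginal(post_cosmo, om, axis=1))
print("sigma(Omega_m), combined      :", sigma_marginal(post_comb, om, axis=1))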
Admittedly, our forecasts rely on a number of simplifying assumptions regarding next-generation ground and space-based astrophysical facilities which at present are in various different stages of development. The SKA and Roman Space Telescope are under construction, with first light foreseen for ca. 2027, while the ANDES spectrograph is currently in Phase B and CORE is simply a mission concept at this stage. Forecasts for the redshift drift are particularly uncertain at this point since the signal has not been detected so far, though it should be detectable by both ANDES and the SKA. The first dedicated redshift drift experiment is presently ongoing, using the ESPRESSO spectrograph, and its first results (expected in the coming months) will provide an important assessment of the feasibility of future detections. In any case, our results show that temperature measurements will remain a competitive probe of new physics, and motivate efforts for their continuing improvement.
Figure 1 :
Figure 1: Current constraints on steady-state torsion with standard matter, w = 0. The middle panel shows the one, two and three sigma constraints in the two-dimensional Ω m -λ parameter space, and the side panels show the one-dimensional (marginalized) posterior likelihoods for each of the parameters. Blue dotted, green dash-dotted, red dashed and black solid curves depict the results for the Cosmology, Cosmology + Planck prior, Cosmology + CMB, and Cosmology + CMB + Planck prior data respectively.
Figure 2 :
Figure 2: Current constraints on steady-state torsion with w = const (but not necessarily vanishing) and Ω m marginalized. The middle panel shows the one, two and three sigma constraints in the two-dimensional w-λ parameter space, and the side panels show the one-dimensional (marginalized) posterior likelihoods for each of the parameters. Blue dotted, green dash-dotted, red dashed and black solid curves depict the results for the Cosmology, Cosmology + Planck prior, Cosmology + CMB, and Cosmology + CMB + Planck prior data respectively.
Figure 3 :
Figure 3: Same as Figure 2, for the simulated future data discussed in the text.
Figure 5 :
Figure 5: Same as Figure 4 for a more general equation of state of radiation w r, assumed to be constant. The one-dimensional (marginalized) posterior likelihood for w r is also shown.
Figure 6 :
Figure 6: Forecast constraints on the scale invariance model. The middle panel shows the one, two and three sigma constraints in the two-dimensional Ω m -p parameter space, and the side panels show the one-dimensional (marginalized) posterior likelihoods for each of the parameters. Green dash-dotted, red dotted and black solid lines depict the cosmological, CMB and combined forecasts for our simulated future data. For comparison, the dashed blue lines show the current constraints, already obtained in [6].
Figure 7 :
Figure 7: Forecast constraints on the fractional cosmology model. The middle panel shows the one, two and three sigma constraints in the two-dimensional Ω m -α parameter space, and the side panels show the one-dimensional (marginalized) posterior likelihoods for each of the parameters. Green dash-dotted, red dotted and black solid lines depict the cosmological, CMB and combined forecasts for our simulated future data. For comparison, the dashed blue lines show the current constraints, already obtained in [6].
Figure 8 :
Figure 8: Current constraints on the Gupta model, for Ω Λ = 0. The middle panel shows the one, two and three sigma constraints in the two-dimensional Ω m -β parameter space, and the side panels show the one-dimensional (marginalized) posterior likelihoods for each of the parameters. The constraints are shown separately for the cosmological and CMB data, and 2D and 3D denote the cases with the age of the universe having the standard value and being an additional free parameter, respectively.
Figure 9 :
Figure 9: Current constraints on the Gupta model, including a cosmological constant and with a standard value of the age of the universe. The middle panel shows the one, two and three sigma constraints in the two-dimensional Ω m -β parameter space, and the side panels show the one-dimensional (marginalized) posterior likelihoods for each of the parameters. Blue dotted, red dashed and black solid lines depict the cosmological, CMB and combined constraints respectively.
Figure 10 :
Figure 10: Same as Figure 9, for the simulated future data discussed in the text.
Table 3 :
Current constraints and forecasts for the steady-state torsion model, under various assumptions on the model parameters and data. Cosmo refers to the current Type Ia supernova and Hubble parameter data, CMB to the current temperature data, and Forecast to a combination of future data sets. The top four rows are the results of [10], and are reproduced here to facilitate the comparison to our updated constraints, which are shown in the next four rows; the last two rows show our forecasts.
CMB, w = const., Planck Ω m prior: Recovers prior, −0.010 ± 0.015, −0.05 +0.03
w = const., Planck Ω m prior: Recovers prior, −0.001 ± 0.003, −0.01 ± 0.01 (This work)
Table 4 :
Summary of the expected gains, for the models considered in this work, in the sensitivity of the one sigma posterior constraint on the matter density and on the beyond-ΛCDM model parameter(s), for the simulated future data with respect to the current constraints.
Steps to Health in Cognitive Aging: Effects of Physical Activity on Spatial Attention and Executive Control in the Elderly
The purpose of this study was to investigate whether physical activity (PA) habits may positively impact performance of the orienting and executive control networks in community-dwelling aging individuals and diabetics, who are at risk of cognitive dysfunction. To this aim, we tested cross-sectionally whether age, ranging from late middle-age to old adulthood, and PA level independently or interactively predict different facets of the attentional performance. Hundred and thirty female and male individuals and 22 adults with type 2 diabetes aged 55–84 years were recruited and their daily PA (steps) was objectively measured by means of armband monitors. Participants performed a multifunctional attentional go/no-go reaction time (RT) task in which spatial attention was cued by means of informative direct cues of different sizes followed by compound stimuli with local and global target features. The performance efficiency of the orienting networks was estimated by computing RT differences between validly and invalidly cued trials, that of the executive control networks by computing local switch costs that are RT differences between switch and non-switch trials in mixed blocks of global and local target trials. In regression analyses performed on the data of non-diabetic elderlies, overall RTs and orienting effects resulted jointly predicted by age and steps. Age predicted overall RTs in low-active individuals, but orienting effects and response errors in high-active individuals. Switch costs were predicted by age only, with larger costs at older age. In the analysis conducted with the 22 diabetics and 22 matched non-diabetic elderlies, diabetic status and daily steps predicted longer and shorter RTs, respectively. Results suggest that high PA levels exert beneficial, but differentiated effects on processing speed and attentional networks performance in aging individuals that partially counteract the detrimental effects of advancing age and diabetic status. In conclusion, adequate levels of overall PA may positively impinge on brain efficiency and attentional control and should be therefore promoted by actions that support lifelong PA participation and impact the built environment to render it more conducive to PA.
INTRODUCTION
The rectangularization of the life expectancy curve and the increasing proportion of 'graying' population (Spirduso et al., 2005) urges societies toward a more comprehensive understanding of how to ensure health and quality of life of aging people. Awareness is increasing that physical activity (PA) is one of the major lifestyle-related health determinants with benefits that go beyond physical health also in advanced age (Netz et al., 2005;World Health Organization, 2007;Ballesteros et al., 2015). Over the past decades, there has been a rise and fall of interest for the different facets of PA-elicited health outcomes due to reasons ranging from epidemiological trends to methodological advancements. The worldwide overweight and insulin resistance epidemic has led to advocate for PA in aging with the aim to ensure health-appropriate levels of PA and caloric expenditure (Ryan, 2010). On the other hand, methodological advancements in cognitive and especially neuroscientific research have allowed to accumulate evidence on the beneficial impact of PA on several aspects of brain health and cognitive efficiency in the aging population with or without chronic diseases (Bherer et al., 2013;Prakash et al., 2015;Young et al., 2015;Gajewski and Falkenstein, 2016).
Age-related chronic diseases are frequently associated with cognitive impairment. Population aging appears to be the most important demographic change to the prevalence of diabetes, one of the four main types of non-communicable diseases across the world (World Health Organization, 2015) projected to double from 2000 to 2030 (Wild et al., 2004) and demonstrated to be a risk factor for cognitive decline and dysfunction as early as middle-age (Kodl and Seaquist, 2008;Nooyens et al., 2010;Luchsinger, 2012;Umegaki, 2014). Thus in recent years, the focus of PA interventions has been extended from sole physical to brain health effects. Designed, structured PA interventions (Espeland et al., 2016), as well as physically active habits as simply walking (Yaffe et al., 2001;Abbott et al., 2004) seem beneficial to counteract cognitive aging of older adults with and without diabetes, even though health-related covariates may limit effect size (Devore et al., 2009).
Echoing the title of a European framework to promote PA for health ("Steps to health," World Health Organization, 2007), we extend the notion of the health-enhancing effects of PA to an aspect of cognitive health in aging people, the efficiency of attentional control, that has still received scarce consideration in research on the influence of PA on cognition. Particularly, the present study investigates the relation of PA habits of aging individuals with and without diabetes, as objectively assessed in terms of daily steps, to the efficiency of the attentional systems responsible for the orienting of attention and the executive control, which seem to undergo different trajectories of age-related deterioration from middle-age to older adulthood (Zhou et al., 2011).
Recent advancements in neurosciences suggest that these attentional systems rely on two interactive, but anatomically distinct networks each (Dosenbach et al., 2008;Petersen and Posner, 2012;Vossel et al., 2014). The orienting of attention is handled by both a more dorsal and a ventral network that act cooperatively to enable individuals to flexibly control attention in relation to top-down goals and bottom-up sensory stimulation (Corbetta and Shulman, 2002;Vossel et al., 2014). The dorsal network, including parietal regions, as the intraparietal sulcus, but also a small set of frontal locations, particularly in the frontal eye fields, allows for strategic control over attention according to the information delivered by environmental cues. The ventral network, including the ventral frontal cortex and the temporoparietal junction, comes into play when the focus of attention is erroneously engaged by misleading cues and must be therefore disengaged and shifted in a task-relevant direction (Petersen and Posner, 2012).
Executive control is guaranteed by the interplay of two distinct networks too: the fronto-parietal and the cingulo-opercular components. The first seems responsible for the adaptability, the second for the stability of top-down task control (Dosenbach et al., 2008). Particularly, the fronto-parietal network, including lateral frontal and parietal regions and particularly the dorsolateral prefrontal cortex, is supposed to initiate executive control and handle its ongoing adjustment for conflict resolution on a trial-by-trial basis. The cingulo-opercular network, including the anterior cingulate cortex, the anterior insula and frontal regions such as the frontal operculum, appears to ensure a stable 'set maintenance' over trials by monitoring the preparatory allocation of attention, especially in the presence of competing attentional sets (Luks et al., 2002;Petersen and Posner, 2012).
After the seminal meta-analysis by Colcombe and Kramer (2003), showing that executive functions of older adults are more improved by PA than lower-level functions, researchers have confirmed such larger or selective effects and devoted noticeable efforts to further differentiate PA effects on specific aspects of executive vs. non-executive function (Gajewski and Falkenstein, 2016). Executive function is responsible for crucial aspects of cognition such as planning of goal-oriented actions, monitoring of cognitive operations and behavioral adaptability (Diamond, 2013). Thus, this special focus on PA effects on executive function is well justified, but has led, with specific regard to the PA-attention relationship, to a disproportional interest in the attentional networks responsible for executive control and to a relative neglect of the other attentional networks the executive control network is strictly intertwined with. The study of PA effects on attentional orienting is mainly limited to the effects of acute bouts of exercise in young active adults (Pesce et al., 2007b;Huertas et al., 2011;Sanabria et al., 2011;Luque-Casado et al., 2013;Chang et al., 2015;Llorens et al., 2015). The only two attentional orienting studies performed with older adults have tested the moderation of acute exercise effects by chronic PA participation (Pesce et al., 2007a). The lack of aging studies that examine the separate and joint effects of chronic PA on the orienting and executive control networks is surprising, because such networks contribute in an intertwined manner to the ability to allocate attention in response to expectations and environmental stimuli (Petersen and Posner, 2012) that is relevant to functioning and safety of older adults in everyday life (Bédard et al., 2006). Several situations, as in the household or in road traffic, require the ability to decide in advance where and to what to pay attention, as at a traffic light for pedestrians, coupling go/no-go actions, but also to re-orient attention rapidly to changes in the environment, as a car unexpectedly approaching, or to select from the set of possible locomotor actions the one most adequate to key features of the situation, avoiding distraction from other irrelevant sources.
Thus, the primary aim of the present study was to investigate whether active PA habits have a similar or differentiated impact, if any, on the performance of the orienting and executive control networks in aging. Of the large body of cognitive research that has investigated the interplay between age and PA (Young et al., 2015;Gajewski and Falkenstein, 2016), most studies have used age and PA levels as categorical variables. When, rarely, direct/indirect measures of PA were used as continuous predictors (e.g., Bixby et al., 2007), age was accounted for as a covariate, thus neglecting the potential interaction between PA and age. Also as regards age, mostly younger and older age classes were compared, whereas the interesting transition phase of late middle-age, characterized by a unique interplay between covert neural and overt behavioral changes (Berchicci et al., 2012), remains relatively under-investigated. Since specific aspects of the attentional orienting (Pesce et al., 2007a) and executive control networks (Themanson et al., 2006) seem benefited by PA at old age, we hypothesized that aging and PA level may have interactive effects on the different facets of the attentional performance across a wide range of ages from late middle-age to old adulthood.
The second aim, related to the first one, was to investigate whether aging diabetics, who are at risk of poor cognition (Luchsinger, 2012), may profit from the expected attentional benefits of being physically active. Although PA and exercise training are considered a major therapeutic modality for type 2 diabetes, persons affected by this pathology usually exhibit lower levels of PA and related lower levels of cardiovascular fitness (Albright et al., 2000). This may prevent diabetics from successful cognitive aging, which is linked to physically active habits (Young et al., 2015) through different mechanisms and above all the enhancement and maintenance of cardiovascular fitness (Stillman et al., 2016). Recent evidence suggests that the mechanisms through which PA affects cognitive function may differ for aging persons by diabetes status, since beneficial cognitive outcomes of PA were found in diabetic elderlies, but not in co-aged individuals without diabetes (Espeland et al., 2016). Thus, we hypothesized to find more pronounced benefits in diabetics than in non-diabetics. Espeland et al.'s (2016) study addressed global cognition and memory. Nevertheless, of particular concern is evidence showing that among the broad range of cognitive functions impaired by type 2 diabetes, there is executive function (Qiu et al., 2006;Okereke et al., 2008). Given the critical role that executive control plays in functional abilities relevant for everyday life of aging people (Rucker et al., 2012; Forte et al., 2013, 2015), we deemed it relevant to examine whether a physically active lifestyle counteracts the deterioration of the ability to exert executive control over attention in this special population.
MATERIALS AND METHODS
This study was carried out in accordance with the recommendations of "Umberto I" hospital of the First Rome University with written informed consent from all subjects. All subjects gave written informed consent in accordance with the Declaration of Helsinki. The protocol was approved by the Ethics Committee of the "Umberto I" hospital of the First Rome University.
Participants
One hundred and thirty participants (68 females and 62 males) were recruited according to the following eligibility criteria: (i) age between 55 and 84 years; (ii) no self-reported diagnosis of psychiatric or somatic illnesses; (iii) normal or corrected-to-normal vision. They were stratified by age class (Spirduso et al., 2005): late middle-aged (55-64 years, n = 48), young-old (65-74 years, n = 44), and old adults (75-84 years, n = 38). Within each age class, they were further stratified by their declared PA level to ensure a balanced presence of sedentary and physically active individuals and master athletes (runners and swimmers) engaged in regular structured PA/training for ≥3 (n = 42), ≥2 (n = 46), or <2 (n = 42) sessions/week, respectively.
To address the secondary aim of this study, a sample of 22 late middle-aged, young-old, and old adults with type 2 diabetes (55-64 years, n = 5; 65-74 years, n = 12; 75-84 years, n = 5) was also recruited. According to the literature (Hu et al., 1999), a case of diabetes was considered confirmed if at least one of the following criteria was reported: (1) one or more classic symptoms (excessive thirst, polyuria, weight loss, hunger) and fasting plasma glucose levels of at least 140 mg/dL (7.8 mmol/L), or random plasma glucose levels of at least 200 mg/dL (11.1 mmol/L); (2) at least two elevated plasma glucose concentrations on different occasions [fasting levels of at least 140 mg/dL (7.8 mmol/L), random plasma glucose levels of at least 200 mg/dL (11.1 mmol/L), and/or concentrations of at least 200 mg/dL after 2 h or more shown by oral glucose tolerance testing] in the absence of symptoms; (3) treatment with hypoglycemic medication (insulin or oral hypoglycemic agent).
Health, Physical Activity, and Anthropometric Assessment
Participants answered the AAHPERD (American Alliance for Health, Physical Education, Recreation and Dance) exercise/medical history questionnaire (Osness et al., 1996) ascertaining their activity level, educational background, dietary habits, tobacco smoking and alcohol consumption, medication use and history of PA.
Daily PA was measured under free-living conditions using the SenseWear Pro3 armband (BodyMedia, Pittsburgh, PA, USA). The use of the SenseWear Pro armband has already been validated in older adults (Mackey et al., 2011). The armband is a monitor that integrates the information gathered by its two-axis accelerometer and sensors (i.e., skin and near-body temperature, heat flux, and galvanic skin response) with sex, age, height, weight, smoking status, and handedness of the user. It uses proprietary algorithms to provide quantitative information (e.g., number of daily steps, locomotor activity intensity, and energy expenditure; Di Blasio et al., 2016) about an individual's habitual PA involving any form of locomotion, such as activities at the workplace, sports, conditioning, and household chores. The descriptive characteristics of the participants were entered into the software program (SenseWear Professional 8; BodyMedia) before the monitoring was initialized. The participants wore the armband on the right arm over the triceps muscle at the midpoint between the acromion and olecranon processes. According to reliability criteria reported in the literature, participants wore the armband for seven entire and consecutive days, 24 h a day except during water-based activities (Scheers et al., 2012), with a wear time of at least 540 min/day on weekdays and 480 min/day on weekend days (Di Blasio et al., 2016). From the default information given by the software, the mean number of daily steps over the 7 days was used for the statistical analysis.
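A minimal Python sketch of this wear-time screening and averaging step is given below; the data-frame columns ('is_weekday', 'wear_minutes', 'steps') are hypothetical names for a per-day export, not the actual SenseWear file format, and the requirement of seven valid days simply mirrors the monitoring protocol described above.

import numpy as np
import pandas as pd

def mean_daily_steps(df: pd.DataFrame) -> float:
    """Return the 7-day mean of daily steps for days meeting the wear-time criteria."""
    # Minimum daily wear time: 540 min on weekdays, 480 min on weekend days.
    min_wear = np.where(df["is_weekday"], 540, 480)
    valid = df[df["wear_minutes"] >= min_wear]
    if len(valid) < 7:
        return float("nan")   # insufficient valid monitoring days
    return float(valid["steps"].mean())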
Standing height to the nearest 0.1 cm and body mass to the nearest 0.1 kg were measured using a portable stadiometer (Seca 220, GmbH & Co., Hamburg, Germany) and a balance scale (Seca 761, GmbH & Co., Hamburg, Germany), respectively. Body mass index (BMI, kg·m−2) was computed. Background information on the participants, such as main health, lifestyle and anthropometric characteristics, is reported in Table 1 separately for non-diabetic and diabetic individuals and for the three age classes (late middle-aged, young-old, old).
Attentional Assessment
The attentional test, developed by Pesce et al. (2003) by means of the Experimental Run Time System (ERTS, BeriSoft Cooperation), has been applied in aging research to investigate the effects of acute bouts of exercise and those of chronic PA participation on performance of the attentional networks responsible for orienting (Pesce et al., 2007a) and executive control (Pesce and Audiffren, 2011). The testing took place either in the morning or in the afternoon, according to participants' availability, avoiding the time before 9 am, between 1 and 3 pm, and after 7 pm to minimize undesired reaction time (RT) variability due to circadian vigilance fluctuation.
Apparatus and Stimuli
Participants were seated in a dimly lit room at a distance of 60 cm from a PC-driven video screen. Four visual displays were used: the instruction, presented on the screen only one time at the beginning of the experimental session, and three types of stimuli, sequentially presented on the screen at each trial. They were a central fixation point, a spatial cue of variable size, and a compound stimulus. The fixation point was a tilted "T" of 0.4° × 0.4° and the spatial cue was an empty box of 1° × 1° or 5° × 5°. The compound stimulus was a large letter (4.6° × 4.6°) made of 13-17 small letters (0.6° × 0.6°) spaced 0.4° in a 5 × 5 matrix. The large letter and its small elements represented the global and local level of the compound stimulus, respectively. The large letter could be an A, E, F, or H; the small elements were the remaining letters. The fixation point, the large box and the following compound stimulus were centered on the screen; the small box could randomly appear at one of the locations of the elements composing the compound stimulus.
The Attentional Task
Each trial consisted of the sequence of events represented in Figure 1. In five sixths of the trials (go trials), the compound stimulus contained a target letter (e.g., "H", Figures 1, 2) either at the global or at the local level. Participants had to react as soon as possible to the target letter by pressing an RT key with the right index finger while gazing at the fixation point. In the remaining trials (no-go trials), the compound stimulus did not contain the target letter and participants had to refrain from responding. Responses to no-go trials, as well as responses with RTs shorter than 200 ms or longer than 2,500 ms (anticipations and delayed responses, respectively), were considered errors and were discarded. The response caused the offset of the compound stimulus for the next trial to begin after an inter-trial interval of 1,000 ms.
In 80% of the go trials, the size of the cue and that of the upcoming target were matched: a large cue was followed by a global target and a small cue by a local target at the same location (validly cued trials, Figure 2 left). In the remaining 20% of trials, cue and target size were mismatched (invalidly cued trials, Figure 2 right). Before the experiment, participants were instructed to focus their attention on the area of the visual field delimited by the spatial cue, without shifting their gaze, in order to react as soon as possible to a predefined target letter that would probably match cue size. Further instructions were aimed at forcing participants, in the case of cue-target mismatching, to directly switch from the global to the local level (attentional zooming in) or from the local to the global level (zooming out), avoiding visual search strategies. It was explained that when a large cue was not followed by a global target letter, the target was the local letter at the center of the screen; when a small cue was not followed by a local target letter at the cued location, the target was the global letter (Figure 2, right).
There were two blocks of 76 trials, one with short (150 ms) time interval between the onset of the cue and the onset of the target stimulus that follows (cue-target Stimulus-Onset-Asynchrony, SOA) and one with long (500 ms) SOA, lasting 3-4 min depending on SOA and reaction speed of the participant. Each block included four warm-up trials, 60 go trials and 12 no-go trials. Short and long SOAs were blocked and not randomized within blocks to avoid a bias in target expectancy. If SOAs were randomized within blocks, the probability (and therefore expectancy) of the target would increase after the short SOA was passed without target occurrence.
Testing was preceded by one block of practice trials to ensure that set acquisition reached a learning asymptote in both younger and older individuals. The minimum amount of practice (40 trials) could be automatically prolonged until a criterion frequency of 80% correct responses was reached. The order of the two blocks of trials with short and long SOA within each task, as well as the use of two of four possible target letters, were counterbalanced across participants. Also, to reduce potential threats to internal validity deriving from the use of four different letters, all possible combinations with the remaining non-target letters at the global and local level of the compound stimuli were balanced and randomized within blocks. Cue sizes and target levels were balanced and randomized within blocks. Particularly, the 50% frequency of global or local target occurrence allowed balancing the priming effects between consecutive global- or local-target trials (Robertson, 1996) that were estimated as a measure of switch costs (see "Switch Costs").
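As an illustration of the block composition described above, the following Python sketch assembles one 76-trial block (4 warm-up trials, 60 go trials of which 80% are validly cued with global and local targets balanced, and 12 no-go trials); the cue-target SOA is fixed within a block, so it is not represented here. The equal split of no-go and invalid trials across the two cue sizes and the sampling of warm-up trials from the go trials are assumptions made for the example, not details taken from the original task script.

import random

def make_block(seed=None):
    rng = random.Random(seed)
    go = ([("large_cue", "global_target", "valid")] * 24      # 48 valid go trials (80%)
          + [("small_cue", "local_target", "valid")] * 24
          + [("large_cue", "local_target", "invalid")] * 6    # 12 invalid go trials (20%)
          + [("small_cue", "global_target", "invalid")] * 6)
    nogo = ([("large_cue", "no_target", "nogo")] * 6
            + [("small_cue", "no_target", "nogo")] * 6)       # 12 no-go trials
    trials = go + nogo
    rng.shuffle(trials)                                       # randomize within the block
    warmup = [rng.choice(go) for _ in range(4)]               # 4 warm-up trials
    return warmup + trials                                    # 4 + 60 + 12 = 76 trials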
PRELIMINARY COMPUTATIONS AND ANALYSES
Trials with response errors (responses with RTs shorter than 200 ms or longer than 2,500 ms) were discarded and median RTs were computed for correct trials separately for each type of trial. Median instead of mean RTs were used because of the disproportional contribution of outliers on mean RTs and the appropriateness of median values for positively skewed distributions, as RTs usually are, as long as RT differences, not absolute RTs, are relevant (Pesce et al., 2003). Thus, computations of RT differences of interest were performed on median RT data of correct trials to isolate the performance of the (i) orienting and (ii) executive control networks from the performance of the processing systems that, handling incoming stimuli and producing outputs, contribute to general information processing speed. Specifically, we computed (i) RT differences that reflect the efficiency of the exogenous (automatic) and endogenous (intentional) control of attentional orienting (Lauwereyns, 1998) and (ii) switch costs that reflect how an individual is able to cope with the cognitive flexibility requirements of the attentional task (Rogers and Monsell, 1995).
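A minimal Python sketch of this trial screening and per-condition median computation, assuming a long-format trial table with hypothetical column names ('rt_ms', 'condition'), could look as follows.

import pandas as pd

def median_rts(trials: pd.DataFrame) -> pd.Series:
    """Discard anticipations (<200 ms) and delayed responses (>2500 ms),
    then return the median RT of the remaining correct trials per condition."""
    correct = trials[(trials["rt_ms"] >= 200) & (trials["rt_ms"] <= 2500)]
    return correct.groupby("condition")["rt_ms"].median()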
Attentional Orienting Effects
As common in spatial cueing paradigms (Chica et al., 2014), attentional orienting effects were generated by manipulating the validity of the spatial cue: cue and target size were most probably matching and only rarely mismatching (80 and 20% probability, respectively). Consequently, participants were expected to react faster on validly cued trials with targets matching in size the antecedent cue (Figure 2, left) and slower on invalidly cued trials with cue-target mismatching (Figure 2, right). To estimate the time needed to refocus attention when a misleading cue leads to focusing attention at the wrong spatial scale, RT differences between invalidly and validly cued trials were computed as follows (Figure 2):
1. RT (small cue − global target) − RT (small cue − local target) = zooming out effect
2. RT (large cue − local target) − RT (large cue − global target) = zooming in effect
FIGURE 1 | Timing of the event sequence within a trial. As an example, the cue is large and the target letter ("H") matches in size with the cue.
The attentional task was originally designed to tap the exogenous and endogenous control of attentional orienting jointly within the same task (Pesce et al., 2003). The abrupt onset of the direct cue was expected to elicit an automatic, short-lasting orienting of attention toward the cued area affecting performance at short SOA (Stoffer, 1993;Lamb et al., 2000). The informative value of the direct cues as to where the upcoming target should occur was expected to generate a lower-rising spatial expectancy affecting performance especially at longer SOA. Traditional views attributed the exogenous, stimulus-driven and the endogenous, intentional control of attention allocation to the ventral and dorsal networks, respectively. In recent years, this dichotomy has been replaced by a more interactive view attributing to the dorsal and ventral networks a joint role in both exogenous and endogenous control of attentional orienting to locations and features (Macaluso and Doricchi, 2013;Vossel et al., 2014). Thus, to have an overall estimate of the joint activity of the two networks responsible for attentional orienting, the above RT differences were computed merging short- and long-SOA trials. Means and standard deviations of the median RTs calculated for the four types of trials used for the calculation of attentional orienting effects, and the RT differences that reflect zooming in/out effects, are presented in Table 2.
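In code, the two orienting effects defined above amount to simple differences of per-condition median RTs; the dictionary keys in the Python sketch below are illustrative labels for the four cue-target combinations, and the values in the usage example are invented numbers.

def zooming_effects(med):
    """med maps condition labels to median RTs in ms (one participant)."""
    zoom_out = med["small_cue_global_target"] - med["small_cue_local_target"]
    zoom_in = med["large_cue_local_target"] - med["large_cue_global_target"]
    return zoom_out, zoom_in

# Example (invented values):
# zooming_effects({"small_cue_global_target": 860, "small_cue_local_target": 820,
#                  "large_cue_local_target": 890, "large_cue_global_target": 800})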
Switch Costs
The structure of the present attentional task allowed assessing executive function by computing a classical index of cognitive flexibility and executive control of cognitive processes, labeled specific (or local) switch cost (Rogers and Monsell, 1995;Kiesel et al., 2010). Since the attentional task was composed of equally frequent trials with global or local target stimulus dimensions, presented in a random order within heterogeneous blocks, participants had to switch between global and local attending. In general terms, in tasks involving the switching between two tasks A and B within heterogeneous trial blocks, trial n + 1 may be a repetition of task A or B (A-A or B-B) or an alternation of tasks A and B (A-B or B-A). Specific switch costs are computed as the difference between the RT for repetition trials and the RT for switch trials. Especially when task switching is explicitly cued, switch costs are proven to index the duration of a true executive control process of task set reconfiguration that must suppress the proactive interference from the previous, no longer appropriate stimulus-response mapping and activate a new relevant task set (Jost et al., 2008).
FIGURE 2 | Schematic representation of four types of trials (with "H" as target letter for example). Left: invalidly cued trials with cue-target mismatching, right: validly cued trials with cue-target matching. Bottom: computation of RT differences between invalidly and validly cued trials as estimates of attentional zooming in (right) and zooming out (left) effects.
In the present experiment, switches between global and local target features of complex visual stimuli were explicitly cued by the preceding spatial cue. To isolate the switch costs from differential attention orienting effects of validly vs. invalidly cued targets, only trials with cue-target matching (80% of trials, comprising equally frequent large cue-global target and small cue-local target trials) were used for switch costs computation. Each trial was coded as "switch trial" or "non-switch trial" according to whether it was preceded by a trial with a target at the different or the same object level, respectively. Thus, four types of trials were identified (Figure 3): (1) switch to global (STG, i.e., a global target trial preceded by a local target trial); (2) non-switch global (NSG, i.e., a global target trial preceded by a global target trial); (3) switch to local (STL, i.e., a local target trial preceded by a global target trial); (4) non-switch local (NSL, i.e., a local target trial preceded by a local target trial).
Median RTs were computed separately for each type of trial. Means and standard deviations of median RTs calculated for the four types of trials are presented in Table 3. Switch costs were calculated as RT differences between switch trials and non-switch trials, representing an estimate of the time required to switch from attending to the global level of a visual object on trial n to attending to the local level on trial n + 1, or vice versa:
1. local-to-global switch cost = RT STG − RT NSG (Figure 3, top)
2. global-to-local switch cost = RT STL − RT NSL (Figure 3, bottom)
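Analogously, the two switch costs are differences of per-condition median RTs; the Python sketch below assumes the STG/NSG/STL/NSL labels introduced above as dictionary keys.

def switch_costs(med):
    """med maps trial-type labels (STG, NSG, STL, NSL) to median RTs in ms."""
    local_to_global = med["STG"] - med["NSG"]   # switch-to-global minus non-switch global
    global_to_local = med["STL"] - med["NSL"]   # switch-to-local minus non-switch local
    return local_to_global, global_to_local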
Error Rates
Three types of error rates were calculated: real response errors (responses to no-go trials), anticipated responses (RTs shorter than 200 ms), and delayed responses (RTs longer than 2,500 ms). They were computed both as overall error rates and separately for the different types of experimental conditions used to obtain attentional orienting effects and switch costs ( Table 4). Since anticipated responses were overall very low (2%), they were not analyzed further.
Effects of Age and Physical Activity in Aging Individuals
The first question regarded whether the efficiency of the attentional networks responsible for attentional orienting and executive control is differentially affected by transitions from late middle-age to young-old and old adulthood and whether physically active habits may counteract the hypothesized age-related deterioration. To address this question, RT differences of community-dwelling aging individuals, computed to estimate attentional orienting effects and switch costs, were regressed on age with daily steps as moderator and gender and BMI as covariates. The rationale for including BMI as a covariate was that we aimed at disentangling the role played by habitual PA from weight status, whose independent or joint influence on cognition across the lifespan and in aging is still an issue of debate (Memel et al., 2016; Chang et al., 2017), but goes beyond the aim of the present study. This moderated regression model entailed the following steps: (1) computing the interaction variable by multiplying age and daily steps (after centering them); (2) performing a hierarchical multiple regression analysis for the prediction of RT by age, daily steps, and their interaction term, in which gender and BMI were statistically controlled for by entering them in a first block, while the individual predictors (age and daily steps) were entered in a second block and their interaction term in a third block; (3) if the interaction term significantly predicted RT, performing post hoc analysis through the simple slope test (Aiken and West, 1991). Statistical significance was set at p < 0.05. Overall reaction speed (absolute RTs) as well as accuracy (response errors and delayed responses) data were also submitted to the same regression analysis models. This ensured that larger RT differences of interest were not merely due to longer RTs in absolute terms, and that smaller RT differences did not merely reflect a shift in speed-accuracy trade-off setpoint. For instance, if older adults showed, as expected, longer RTs, this might lead to proportionally larger zooming effects and switch costs, which are RT differences. If such longer RTs and correspondingly larger RT differences were paralleled by lower rates of responses to no-go trials, it might simply be that older individuals traded speed for accuracy.
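A possible implementation of this moderated hierarchical regression in Python (statsmodels) is sketched below; the column names ('rt', 'age', 'steps', 'gender', 'bmi') are hypothetical, and only the simple-slope point estimates of age at ±1 SD of daily steps are computed here, not their significance tests.

import pandas as pd
import statsmodels.formula.api as smf

def moderated_regression(df: pd.DataFrame):
    d = df.copy()
    # Step 1: center the predictors and build the interaction term.
    d["age_c"] = d["age"] - d["age"].mean()
    d["steps_c"] = d["steps"] - d["steps"].mean()
    d["age_x_steps"] = d["age_c"] * d["steps_c"]

    # Step 2: hierarchical blocks - covariates, then predictors, then the interaction.
    m1 = smf.ols("rt ~ gender + bmi", data=d).fit()
    m2 = smf.ols("rt ~ gender + bmi + age_c + steps_c", data=d).fit()
    m3 = smf.ols("rt ~ gender + bmi + age_c + steps_c + age_x_steps", data=d).fit()

    # Step 3: simple slopes of age at low (-1 SD) and high (+1 SD) activity levels.
    sd = d["steps_c"].std()
    b_age, b_int = m3.params["age_c"], m3.params["age_x_steps"]
    slope_low, slope_high = b_age + b_int * (-sd), b_age + b_int * (+sd)
    return m1, m2, m3, (slope_low, slope_high)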
Since no reference data for a priori power analysis for multiple regression were available from aging studies with the employed attentional variables, post hoc achieved power (1−β) was computed with the G*Power program (Faul et al., 2009).
Reaction Speed
The results of the analysis performed on overall RT are presented in Table 5, left. There was a gender difference in RT in favor of females (744 ± 126 vs. 782 ± 141 ms) and a direct relationship between age and RT, indicating that RT slows down with increasing age. However, there was a further small, but significant percentage of variance explained by the interaction between age and daily steps, suggesting that the effect of age on RT was moderated by the activity level. Simple slope testing (Figure 4) showed a buffering effect of the moderator on the predictor. While in low-active adults, older age predicted longer RT, this negative effect of age on RT was not present in high-active adults along the entire age range from late middle-age to old adulthood. Post hoc observed power (1−β) was 0.99.
Attentional Orienting Effects
The results of the analysis performed on attentional orienting effects are presented in Table 5, middle, separately for the two directions of the attentional zooming. This distinction was deemed necessary because the size of the two effects differed greatly, with zooming out effects being averagely almost absent with a huge interindividual variability (Table 2). Regardless of zooming direction, there was a significant prediction by age. Additionally, for zooming out there was a further small, but significant percentage of variance explained by the interaction between age and daily steps. In contrast to what was observed in the case of overall RT, simple slope testing showed an inverse relationship between age and the size of the attentional zooming effect and an amplifying effect of the moderator (Figure 5A). While low-active adults showed an averagely almost absent zooming out effect regardless of age, high-active adults showed such an effect, but with an age-related decrement from late middle-age to old adulthood. Visual inspection of single slopes showed a similar, but non-significant pattern of results for zooming in effects (Figure 5B). Post hoc observed power (1−β) from the analysis of zooming out and zooming in effects was 0.98 and 0.88, respectively.
Switch Costs
The results of the analysis performed on switch costs are presented in Table 5, right, separately for the two directions of local-to-global and global-to-local switches. Similar to what was explained for the zooming effects, this distinction was deemed necessary also for switches of attention between global and local features of visual objects. Also in this case, the effect in one switch direction was averagely not detectable (i.e., a small negative value, Table 3). Results of the regression analysis evidenced only a small, but significant prediction by age of local-to-global switch costs, with increasing switch costs at older age. This direct relationship was not moderated by PA level (Figure 6). Post hoc observed power (1−β) from the analysis of local-to-global and global-to-local switch costs was 0.82 and 0.54, respectively.
Error Rates
The same model of regression analysis performed on delayed responses yielded a large percentage of variance explained by age (R2 = 0.24, std β = 0.49, t = 5.86, p < 0.001). The older the person, the larger the amount of delayed responses (Figure 7A). This age effect was not moderated by PA level, whereas an interactive prediction by age and PA level emerged from the analysis of response errors (R2 = 0.10, std β = 0.23, t = 2.62, p = 0.010). Simple slope testing (Figure 7B) showed that high-active adults, as compared to their low-active counterparts, had lower rates of responses to no-go trials at late middle-age, but higher rates at old adulthood, due to the presence of an incremental trend as a function of age in high-active participants only. Post hoc observed power (1−β) from the analysis of delayed responses and response errors was 1.0 and 0.97, respectively.
Note to Table 5: total R2 explained and standardized β coefficients with t-values and significance level are reported; out, in = spatial attentional zooming out or zooming in effects; LtG, GtL = local-to-global or global-to-local switch costs.
FIGURE 6 | Prediction of local-to-global switch costs accrued by age without any significant moderation by PA level (daily steps). Solid lines: non-significant change in the slope of the predictor for high vs. low PA levels (1 SD change); β value and its significance are reported for the main slope (dotted line) of the non-moderated prediction.
Effects of Diabetic Status and Physical Activity in Aging Individuals
The second question of the present study regarded whether the diabetic status affects the cognitive functions of interest and PA level may buffer diabetes-related cognitive impairments. To address this question, further regression analyses were performed on the same dependent variables, but contrasting the data of the 22 late middle-aged, young-old, and old adults with type 2 diabetes recruited for this study with those of a subsample of 22 non-diabetic individuals selected from the main sample.
Matching criteria for selection were: gender, age (±1 year), and mean number of daily steps closest to that of the diabetic participant (correlation between daily steps of age- and gender-matched pairs of diabetic and non-diabetic participants: r = 0.96, p < 0.001). By matching diabetics and non-diabetics for daily steps, we aimed at isolating the hypothesized attentional differences due to the diabetic status from those expectedly due to lower PA levels and related lower fitness in diabetics (Albright et al., 2000), according to the cardiovascular fitness hypothesis of chronic PA effects on cognition (Stillman et al., 2016). In a moderated regression model, BMI was statistically controlled for by entering it in a first block, while the individual predictors (diabetic/non-diabetic status and daily steps) were entered in a second block and their interaction term in a third block.
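A greedy version of this matching procedure is sketched below in Python; the column names ('gender', 'age', 'steps') are hypothetical, and the single-pass, without-replacement strategy is an assumption of the example rather than the exact procedure used in the study.

import pandas as pd

def match_controls(diabetics: pd.DataFrame, controls: pd.DataFrame) -> dict:
    """Pair each diabetic with a non-diabetic of the same gender, age within
    +/- 1 year, and the closest mean daily steps; each control is used once."""
    available = controls.copy()
    pairs = {}
    for idx, row in diabetics.iterrows():
        pool = available[(available["gender"] == row["gender"])
                         & (available["age"].sub(row["age"]).abs() <= 1)]
        if pool.empty:
            continue                                # no eligible control left
        best = (pool["steps"] - row["steps"]).abs().idxmin()
        pairs[idx] = best
        available = available.drop(index=best)     # sample without replacement
    return pairs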
Reaction Speed
The results of the analysis performed on overall RT showed that diabetic status and daily steps predicted RT directly and inversely, respectively (R2 = 0.16; diabetic status: std β = 0.29, t = 2.15, p = 0.038; daily steps: std β = −0.27, t = −2.09, p = 0.043), after accounting for the significant prediction accrued by BMI (R2 = 0.16, std β = 0.30, t = 2.21, p = 0.033). Diabetics showed longer RTs than their non-diabetic counterparts (Figure 8A), but a similar, inverse relationship between higher PA level and shorter RT (Figure 8B). The relationship between BMI and RT was direct, with a higher weight predicting longer RT in both diabetics and their non-diabetic counterparts, who marginally (p = 0.06) differed in BMI (diabetics: 29.4 ± 4.4; non-diabetics: 27.3 ± 4.43). Post hoc observed power (1−β) was 0.99. Moreover, to estimate whether the absence of an interaction between diabetic status and daily steps reflected truly independent effects or a lack of power, a further post hoc power analysis for differences between slopes in moderated regression with diabetic/non-diabetic status as a dichotomous moderator was computed. Its low value (0.26) indicated lack of power.
Attentional Orienting Effects, Switch Costs, and Error Rates
Although descriptive statistics show noticeable differences in zooming out effects and local-to-global switch costs, regression analyses did not reveal any significant prediction of RT difference and accuracy variables accrued by health status and/or PA level.
Post hoc observed power (1−β) from the analysis of zooming out and zooming in effects, local-to-global and global-to-local switch costs was 0.46, 0.49, 0.61, and 0.42, respectively.
DISCUSSION
The present study aimed to investigate the independent and interactive effects of aging and objectively measured PA levels on performance of the orienting and executive control networks in community-dwelling aging individuals and diabetics. In sum, the results show a pattern of effects suggesting that there is a generalized detrimental impact of aging on information processing speed, attentional effects, and performance accuracy. However, being physically active seems to partially dampen this age-related deterioration, exerting a protective effect on processing speed and on the ability to orient attention toward locations and objects in the visual field, as well as avoiding an age-related shift toward accurate, but slowed performance on the speed-accuracy trade-off. Instead, different from the general claim that PA is especially beneficial to executive function (Colcombe and Kramer, 2003;Gajewski and Falkenstein, 2016), physically active habits appear to neither outweigh, nor attenuate the detrimental effect of aging on the executive control processes involved in task set reconfiguration. Furthermore, diabetic status and PA level resulted to affect processing speed in opposite directions, whereas they did not affect the performance of the orienting and executive control networks as reflected in orienting effects and switch costs.
To our knowledge, this was the first study of PA effects on cognition in aging people to investigate the performance of the orienting and executive control networks in combination in one task. Previous research combining the investigation of different attentional networks has been performed only in the area of acute exercise research by adopting Posner and Petersen's (1990) attention network test that combines in one task warning signals prior to targets (alerting), cues that direct attention toward potential target locations (orienting) and target stimuli surrounded by congruent or incongruent flankers (executive control; Huertas et al., 2011;Chang et al., 2015). Differently, the attentional test developed by Pesce et al. (2003) and used for the present study merges typical features of the spatial orienting paradigm (Chica et al., 2014) with hierarchically built visual objects that contain global or local target features (Navon, 1977). The use of direct and informative cues with different cue-target SOAs and a low percentage of misleading cues allows tapping the initially exogenous and then strategic control over attention according to the informative value of the cue, and the attentional re-orienting following miscued targets, led by the dorsal and ventral networks, respectively (Corbetta and Shulman, 2002;Petersen and Posner, 2012). The initial blocked task instruction, followed by the random presentation of global and local targets, allows tapping true executive processes of stable set maintenance and adjustment of executive control on a trial-by-trial basis to switch attention between global and local attending, led by the cingulo-opercular and fronto-parietal networks, respectively.
First, older age predicted lengthened RT, but only in the case of low-active individuals (Figure 4). This is in line with evidence of generalized slowing of information processing speed at old age (Birren and Fisher, 1995) and results of previous aging studies performed with the present attentional paradigm, which showed faster reaction speed in older athletes than in sedentary co-aged individuals (Pesce et al., 2005, 2007a). However, results also suggest that from late middle-age to old adulthood there is a differential shift in speed-accuracy trade-off setpoint between low-active and high-active individuals. In fact, high-active late middle-aged individuals showed averagely longer RTs (Figure 4), but lower rates of responses to no-go trials (Figure 7B). The pattern of results was reversed at older age, since high-active individuals were faster in responding than their low-active counterparts, but made more response errors. It seems that with increasing age, low-active older adults trade speed for maintaining accuracy as a compensatory strategy (Spirduso et al., 2005), whereas high-active individuals trade accuracy for maintaining speed of performance.
High PA levels seem also to dampen the age-related decline of efficiency of the orienting system. It must be pointed out that the size of the RT differences computed to estimate orienting effects and switch costs has an opposite meaning. As regards the orienting (zooming) effect, it represents the difference in RT between validly and invalidly cued trials. A large RT difference means that the individual was able to strategically orient attention toward the cued area, thus shortening the RT to validly cued targets, and had to pay an RT cost in the rare cases of miscued targets. This ability seems relatively scarce in low-active individuals already at late middle-age, when high-active individuals instead show a preservation of orienting ability reflected in a higher orienting effect (Figure 5A). Nevertheless, the active lifestyle no longer seems to buffer the age-related deterioration in old adulthood. The negligible size of the zooming out effect is attributable to the fact that older adults show a typical local attending deficit (Pesce et al., 2005) that lengthens RT particularly when local targets and the preceding cue are not presented foveally, as is the case for RT on valid local cue-local target trials that was used as subtrahend for the computation of the zooming out effect (Figure 2). The huge interindividual variability in the zooming out effect is therefore an indicator that some individuals succeeded in overcoming the typical age-related local attending deficit, thus showing a positive zooming out effect, but others did not. A similar, but non-significant trend emerged for the zooming in effect (Figure 5B).
The graphical representation suggests that those who succeeded were high-active late middle-aged individuals. Since orienting effects are RT differences, the larger effect in high-active late middle-aged individuals would be meaningless if it were paralleled by a corresponding increment in absolute RT. This was not the case, as they showed the lowest RTs. This strengthens the interpretation that being physically active helps overcome age-related attending deficits. Intriguing neuroimaging evidence, while confirming the hypothesis that gains in cardiovascular fitness lead to enhanced neural efficiency, also shows a unique relationship between coordination training at old age and increased activation in the visuo-spatial orienting network (Voelcker-Rehage et al., 2011). The counteracting effect of overall PA on the age-related decline of attention orienting performance found in the present study might therefore be attributable, at least in part, to the coordinative demands of being physically active, since our objective measure of overall PA tapped a variety of possible activities at the workplace, in sports, and in household chores.
However, in studies of aging effects on the attentional networks, the most pronounced deterioration has been reported for the executive control network (Mahoney et al., 2010), which is also reported to be the primary locus of the beneficial effects of PA and exercise (Etnier and Chang, 2009), particularly at old age (Colcombe and Kramer, 2003; Gajewski and Falkenstein, 2016). In the present study, we focused on cognitive flexibility, a core executive function needed to switch attention between tasks, which we measured by means of local switch costs. In contrast to the orienting effect, whose size reflected the ability to exert top-down control over attention to adhere to task requirements, local switch costs represented the inability to adjust executive control flexibly according to the unpredictable need to switch between global and local attending. Thus, the higher the switch cost, the lower the efficiency of executive control. This type of cost is indeed thought to reflect the executive processes needed to deactivate a previous task set in favor of the actually relevant one. Differently from many other aspects of executive function that benefit from PA, this type of cost was not positively affected by the PA level of the participants, but only negatively by age (Figure 6). This age-related decline was observed only for local-to-global switch costs, because global-to-local switch costs were biased by interacting spatial orienting effects. The strong automatic capture of attention by small cues interfered with the allocation of attention to visual objects (Goldsmith and Yeari, 2003), overweighing the persistence of attention on the last attended object that should facilitate RT in the case of consecutive local target trials used as the subtrahend for the computation of the global-to-local switch cost (Figure 3; Pesce and Audiffren, 2011).
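To make the two indices concrete, the sketch below computes the orienting (validity) effect and the local-to-global switch cost from per-trial reaction-time data. The column names and data layout are illustrative assumptions, not the format actually used in the present study.

```python
# Minimal sketch (not the authors' analysis code): orienting effect and
# local-to-global switch cost as RT differences, per the definitions above.
import pandas as pd

def attention_effects(trials: pd.DataFrame) -> dict:
    """trials is assumed to have columns 'rt' (ms), 'cue_validity'
    ('valid'/'invalid'), 'target_level' ('global'/'local') and
    'prev_level' (target level of the preceding trial)."""
    correct = trials  # assume error trials were excluded upstream

    # Orienting effect: RT(invalidly cued) - RT(validly cued); larger values
    # indicate stronger strategic use of the informative cue.
    orienting = (correct.loc[correct.cue_validity == "invalid", "rt"].mean()
                 - correct.loc[correct.cue_validity == "valid", "rt"].mean())

    # Local-to-global switch cost: RT on global targets preceded by a local
    # target minus RT on global targets preceded by a global target.
    glob = correct[correct.target_level == "global"]
    switch_cost = (glob.loc[glob.prev_level == "local", "rt"].mean()
                   - glob.loc[glob.prev_level == "global", "rt"].mean())

    return {"orienting_effect_ms": orienting,
            "local_to_global_switch_cost_ms": switch_cost}
```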
In sum, our findings parallel and extend to overall PA the notion that regular participation in exercise training, regardless of exercise mode, facilitates reaction speed, but is uninfluential on local switch costs (Dai et al., 2013). In their study across the lifespan, Pesce and Audiffren (2011) found that age and sport expertise independently predicted lower switch costs, whereas we could not find any effect of PA level. Taken together, these results suggest that not PA per se, but the cognitive demands inherent in many sports may be the mediator of PA effects on the executive control networks and particularly the fronto-parietal network (Pesce, 2012). This hypothesis refers to the "cognitive component skills approach" (Voss et al., 2010), suggesting that sport-related cognitive expertise may transfer to sport-unspecific tasks requiring fundamental cognitive abilities. The interpretation of the absence of PA effects on switch costs in the present study is in accordance with the finding that not PA, but cognitive training interventions in aging seem to have the potential to positively impinge on the plasticity of those specific processes and underlying neural substrates responsible for task switching ability (Gajewski and Falkenstein, 2012).
The second aim of the study was to investigate whether, in aging diabetics, who are at risk of poor cognition and especially executive dysfunction (Qiu et al., 2006; Okereke et al., 2008; Luchsinger, 2012), a physically active lifestyle counteracts the deterioration of the ability to exert executive control over attention, which is particularly relevant for this special population. The outcomes of this study do not show impairments of specific aspects of attentional network performance as compared to non-diabetic co-aged participants, but only a worse information processing speed (Figure 8A). A physically active lifestyle seems beneficial to their processing speed to the same extent as it benefits the performance of non-diabetic aging individuals. This means that the vascular pathologies that characterize the diabetic status and may be responsible for cerebrovascular disease and cognitive dysfunction (Luchsinger, 2012; Umegaki, 2014) can be counteracted, at least in terms of the efficiency of the processes responsible for perceiving and responding, by physically active habits. Instead, the question of whether an active lifestyle counteracts the deterioration of attentional network performance in diabetics needs further exploration, since this study was underpowered for that type of variable.
The study has further limitations that must be addressed. Merging the spatial orienting paradigm with task switching between global and local stimulus features has the advantage of tapping different attention networks with one task, but it also led to some biases. One reason for the absence of PA effects on switch costs might be the relatively small size of such costs, probably due to the presence of a spatial cue to switch. This anticipated information, typical of cueing paradigms, generally results in smaller switch costs (Wasylyshyn et al., 2011). This reflects an influence of the orienting network on the executive network, with the latter taking advantage of the information provided by the former to resolve a conflict and switch sooner (Callejas et al., 2005). Furthermore, the spatial cueing with small cues narrowed and captured attention, overweighing the global-to-local switch effect (Goldsmith and Yeari, 2003; Pesce and Audiffren, 2011). It is therefore possible that in our study, due to the influence of the advance cues on local switch costs, benefits of PA level could be detected only for attention orienting.
Thus, an outlook for future research is to use an interactionist approach to the study of PA effects on the attentional networks in aging (Callejas et al., 2005). The present study assessed local switch costs in heterogeneous blocks of spatially cued trials, but did not consider global switch costs in homogeneous trial blocks. Instead, beneficial PA effects in aging could be found for both local and global switch costs in uncued task switching (Themanson et al., 2006). An interactionist approach with/without spatial cueing in both heterogeneous and homogeneous blocks of trials might further our understanding of whether the orienting of attention, preserved by an active lifestyle at least in late middle-age, is able to raise the efficiency of the executive control networks responsible for the adaptability of top-down control on a trial-by-trial basis and the stability of top-down control for set maintenance, respectively (Dosenbach et al., 2008).
A reason that may have prevented us from detecting a buffering effect of PA on the attentional performance of diabetics, as instead found for the orienting performance of non-diabetics, is the relatively small sample size and the intrinsically low power of moderated multiple regression analysis, particularly when the moderator is a dichotomous variable (Stone-Romero and Anderson, 1994). Extending the sample of diabetic elderlies can help distinguish a true absence of PA effects on the attentional networks from power and generalizability issues of this convenience sample.
CONCLUSION
Adequate levels of PA may positively influence the brain processes and systems responsible for information processing speed and the strategic control of the orienting of attention, but seem uninfluential on the ability to exert executive control for switching attention, which, instead, seems positively influenced by participation in cognitively demanding sports (Pesce and Audiffren, 2011). The differential association of physically and/or cognitively challenging activities to different attentional functions may provide the basis to design interventions for successful attentional aging tailored to exploit the multifaceted nature of the concept of an enriched environment, including PA and challenging cognitive tasks (Hertzog et al., 2009;Kraft, 2012). Given the broad range of unstructured, daily-life activities and structured exercise or grassroots/competitive sports composing overall PA levels measured in this study, our results add to the evidence that both daily-life PA as walking (Yaffe et al., 2001;Abbott et al., 2004) and sports participation (Pesce et al., 2007a;Zhao et al., 2016) may act as protective factors against cognitive decline in elderlies. These two main components of an active lifestyle (Condello et al., 2016) should be promoted by actions that impact the built environment to render it more conducive to PA (Saelens and Handy, 2008) and support physically active habits and sport participation until old age (Baker et al., 2010).
AUTHOR CONTRIBUTIONS
GC: Data acquisition with relevant role in data acquisition coordination, analysis and interpretation and drafting of the work, final approval of the version to be published and agreement to be accountable for all aspects of the work. RF: Data interpretation, drafting and critical revision of the work for important intellectual content with specific contribution as regards aging issues, final approval of the version to be published and agreement to be accountable for all aspects of the work. SF: Data acquisition and analysis, contribution to drafting the work, final approval of the version to be published and agreement to be accountable for all aspects of the work. JS: Interpretation of data and critical revision of the work for important intellectual content, final approval of the version to be published and agreement to be accountable for all aspects of the work. ADB: Interpretation of data and critical revision of the work for important intellectual content, final approval of the version to be published and agreement to be accountable for all aspects of the work. LC: Contribution to conception of the work with relevant role in project coordination, critical revision of the work, final approval of the version to be published and agreement to be accountable for all aspects of the work. CP: Main role in the conception and design of the work, creation of the attentional test, data analysis and interpretation, drafting of the work with specific contribution as regards the physical activity-attention relationship, final approval of the version to be published and agreement to be accountable for all aspects of the work.
Effects of SiC Fibers and Laminated Structure on Mechanical Properties of Ti–Al Laminated Composites
Ti/Ti–Al and SiCf-reinforced Ti/Ti–Al laminated composites were fabricated through vacuum hot pressing using pure Ti foils, pure Al foils and SiC fibers as raw materials. The effects of the SiC fibers and the laminated structure on the properties of Ti–Al laminated composites were studied. A novel method of fiber weaving was implemented to arrange the SiC fibers, which can guarantee the equal spacing of the fibers without introducing other elements. Results showed that with a higher exerted pressure, a more compact structure with fewer Kirkendall holes can be obtained in SiCf-reinforced Ti/Ti–Al laminated composites. The tensile strength along the longitudinal direction of the fibers was about 400 ± 10 MPa, which was 60% higher compared with the fabricated Ti/Ti–Al laminated composites with the same volume fraction (60%) of the Ti layer. An in situ tensile test was adopted to observe the deformation behavior and fracture mechanisms of the SiCf-reinforced Ti/Ti–Al laminated composites. Results showed that microcracks first occurred in the Ti–Al intermetallic layer.
Introduction
Due to presenting several advantages such as low density, high modulus of elasticity, good high-temperature creep strength and oxidation resistance, titanium aluminide (TiAl) based alloys have great potential in aerospace applications [1][2][3]. However, they suffer from a major challenge of low room temperature ductility. This was considered to be a significant barrier for these classes of alloys for use in structural components.
Inspired by structural biological materials such as the abalone shell, animal bones and mammal teeth [4,5], Ti intermetallic multilayered composites [6-10], which possess improved toughness, fracture resistance and excellent creep resistance, have been extensively researched. By micro-, meso- and macrostructure design and tailoring, specific functionality of Ti intermetallic multilayered composites can be achieved. The superior specific properties of this class of composites make them extremely attractive for high-performance aerospace applications [11]. Meanwhile, continuous SiC fibers with the outstanding properties of high strength, high modulus and low density have been successfully introduced into titanium matrix materials through the vacuum high-temperature pressing (VHP) method [4,12]. It has been found that SiC fiber-reinforced Ti intermetallic multilayered composites possess excellent toughness and fracture resistance.
An SiCf-reinforced Ti intermetallic multilayered composite was fabricated by Yu et al. [13]. Results showed that along the longitudinal direction of the SiC fibers, the ultimate tensile strength, flexural strength and fracture toughness of the composite increased by 53%, 74% and 75%, respectively, while the elongation remained almost similar to that of the Ti intermetallic multilayer composite. Zhu et al. [4] fabricated a Ti intermetallic multilayered/SiCf-reinforced Ti matrix composite and found that with the introduction of SiC fibers, the tensile and flexural strength of the hybrid composite along the longitudinal direction of the fibers increased by 57% and 92% compared with the composite without SiC fibers. Wang et al. [14] investigated the fatigue behavior and damage modeling of an SCS-6/titanium/titanium aluminide hybrid laminated composite. Compared with the SCS-6/Ti-6-4 composite, the SCS-6/Ti-6-4/Ti-25-10 hybrid laminated composite possessed improved fatigue behavior. Zhang et al. [15] also studied the mechanical behaviors and failure mechanisms of an SiCf-reinforced Ti/Ti2AlNb laminated composite. The studies above show that SiCf-reinforced Ti intermetallic laminated composites provide an effective approach for tailoring mechanical properties. This can be attributed to the crack deflection ability of the Ti intermetallic multilayered structure, as well as the reinforcement effect of continuous SiC fibers.
It was found that the majority of studies focused on strengthening ductile Ti layers with SiC fibers, followed by hybridization with intermetallic layers. This is principally because the ductile layer can serve as the compliant layer to suppress the initiation and propagation of residual stress placed on cracks near the interface [14]. Lin et al. [16] prepared an SiC f -Ti/Al 3 Ti laminated composite by using the vacuum hot-pressing sintering method and found that SiC fibers can provide strengthening and toughening effects for the brittle intermetallic and the laminated composites. The strengthening mechanism of the SiC f -reinforced intermetallic layer still needs to be clarified. Meanwhile, reaction products, thickness, bonding strength of the interface that between SiC fiber and its surrounding intermetallic matrix, as well as the fiber damage, play critical roles in the mechanical properties of the SiC f -reinforced Ti/TiAl laminated composite. The fiber arrangement and the enhancement mechanism need to be further investigated.
Hence, an SiCf-reinforced Ti/Ti-Al laminated composite was designed and fabricated from commercial pure Ti foils, Al foils and SiC fibers by the VHP method. The interface between the SiC fiber and the intermetallic matrix was investigated. Tensile tests and three-point bending tests were adopted to evaluate the mechanical properties of the composites. The deformation behavior and fracture mechanisms of the SiCf-reinforced Ti/Ti-Al laminated composite were studied by applying an in situ tensile test. Furthermore, the Ti/Ti-Al laminated composite was fabricated and investigated for comparison.
Structure Design
Commercial pure Ti foils (200 µm thick), pure Al foils (20, 100, 200 µm thick) were cut into round foils with diameters of 100 mm. Table 1 shows the chemical compositions of the selected materials. SiC fibers fabricated by Beijing Institute of Aeronautical Materials (China) were chosen as the reinforcement component. These fibers have diameters of 120 µm with a tungsten core (16 µm in diameter) and a layer of β-SiC (50 µm thick), as shown in Figure 1. In addition, a layer of carbon (2 µm) was coated by chemical vapor deposition (CVD) to inhibit the reaction. The Ti foils were cleaned in aqueous HF solution (10 vol.%) while the Al foils were etched in aqueous NaOH solution (10 wt.%) for 2 min. Subsequently, the materials were rinsed with alcohol and processed with ultrasonic cleaning in distiller water, then were dried immediately for further preparation. In order to avoid fiber aggregation in the prepared SiC f -reinforced Ti/Ti-Al laminated composite, a novel method of fiber weaving was implemented. In this method, equally spaced predrilled holes were drilled into the Al foils (with thickness 20 µm) by a steel needle with Φ200 µm. Then the fibers travelled up and down the foil material through the drilled holes ( Figure 2a). In this work, the spacing is 5 mm in the X direction and 20 mm in the Y direction. Fiber braids with 5 mm equal spacing were prepared as shown in Figure 2b. The metallic foils and SiC fiber braids were stacked according to the schematic illustration in Figure 3. Three laminated structures were used: Ti-Al-Ti (Figure 3a), Ti-Al-SiC f -Al-Ti (Figure 3b Pattern A) and Ti-SiC f -Al-SiC f -Ti (Figure 3b Pattern B). The volume fraction of Ti layers was about 55% as designed in the preform. Among them, the Ti/Ti-Al laminated composite used the "Ti-Al-Ti" laminated structures, while the SiC f -reinforced Ti/Ti-Al laminated composite was made up of Pattern A and Pattern B.
Sintering Process
The prepared assembly was placed in a graphite mold and subsequently moved into a vacuum hot-press furnace for sintering. The preparation process includes three steps, as shown in Figure 4. Firstly, the temperature was raised to 600 °C at a rate of 10 °C/min and held for 60 min under a pressure of 5 MPa to achieve a primary combination between the Ti and Al foils. Secondly, the temperature was increased to 660 °C (the melting temperature of pure aluminum) for 2 h to ensure that the Al foils were consumed completely, while the pressure was decreased to 0 MPa to avoid the expulsion of the molten aluminum. Finally, the temperature and pressure were increased to 950 °C and 10 MPa for one hour for the Ti/Ti-Al laminated composite, or to 950 °C and 40 MPa for one hour for the SiCf-reinforced Ti/Ti-Al laminated composite. Then the exerted pressure was released, and the temperature was decreased to room temperature in the furnace. A schematic illustration of the sintering parameters is presented in Figure 4.
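For clarity, the three-step schedule can also be written down as plain data. The sketch below simply encodes the parameters stated above (the field names are arbitrary) and sums an approximate processing time; it assumes the same 10 °C/min ramp rate between all steps, which the text only states for the first ramp, and it is not control code for the furnace.

```python
# Sintering schedule encoded as data, using the parameters stated above.
# Field names, the constant ramp-rate assumption and the time accounting
# are illustrative only.
SCHEDULE = [
    # (target temperature in degC, hold time in min, pressure in MPa)
    (600, 60, 5),    # step 1: primary bonding of the Ti and Al foils
    (660, 120, 0),   # step 2: melt and fully consume the Al foils
    (950, 60, 40),   # step 3: 40 MPa for the SiCf-reinforced composite (10 MPa for Ti/Ti-Al)
]
RAMP_RATE = 10.0     # degC/min, heating rate between steps (assumed constant)

def total_time_min(schedule, start_temp=25.0, ramp_rate=RAMP_RATE):
    t, temp = 0.0, start_temp
    for target, hold, _pressure in schedule:
        t += max(target - temp, 0.0) / ramp_rate + hold  # ramp-up plus hold
        temp = target
    return t

print(total_time_min(SCHEDULE))  # roughly 332 min of heating and holding
```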
Materials Characterization
After VHP, specimens used for characterization were cut from the synthesized samples with a wire electrical discharge machine (DK7355, Xiongfeng Machinery Co., Ltd., Ningbo, China) and were inlaid into epoxy resin. Subsequently, the specimens were ground using sandpaper and polished to a smooth mirror surface using diamond paste. Then, the metallographic specimens were etched by reagent (5 vol.% HF + 15 vol.% HNO3 + 80 vol.% H2O).
Optical microscopy (OM, IM 300, China) was performed for the microstructure observation. A Field Emission Scanning Electron Microscope (SEM; FEI Nova Nano SEM450, Hillsboro, OR, USA) equipped with an Energy Dispersive X-ray Spectrometer (EDXS, INCA 250X-Max 50, Oxford, UK) was used for the microstructure observation and local composition analysis. X-ray diffraction (XRD, D8 ADVANCE, Brooke, Germany) was performed for the phase identification.
Mechanical Properties Measurements
Quasi-static tensile tests were carried out on the fabricated Ti/Ti-Al and SiCf-reinforced Ti/Ti-Al laminated composites at room temperature with a loading rate of 0.2 mm/min. Test specimens were machined along the SiC fiber direction using wire electrical discharge machining. Subsequently, their surfaces were polished. The gauge sections were 26 mm in length, 6 mm in width and 3 mm in thickness, as illustrated in Figure 5a. Strain gauges were affixed to both sides of the specimen, and the strain values were recorded. For the three-point bending test, the constant loading rate was 2 mm/min. The specimens were 80 mm in length, 10 mm in width and 3 mm in thickness, as shown in Figure 5b. The specimens of SiCf-reinforced Ti/Ti-Al laminated composites with two different loading modes are shown in Figure 5c. The supporting span was 40 mm. After the tests, the fracture surfaces of the tensile specimens were observed by SEM.
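For reference, flexural strength in three-point bending follows from the standard relation sigma_f = 3FL/(2bh^2). The short sketch below applies it to the specimen geometry given above; the peak load value is purely illustrative and is not a measured result from this work.

```python
# Hedged sketch of the standard three-point bending flexural-strength formula,
# sigma_f = 3*F*L / (2*b*h^2), with the reported specimen geometry.
def flexural_strength_mpa(peak_load_n: float,
                          span_mm: float = 40.0,
                          width_mm: float = 10.0,
                          thickness_mm: float = 3.0) -> float:
    # N / mm^2 == MPa, so working in mm and N gives MPa directly.
    return 3.0 * peak_load_n * span_mm / (2.0 * width_mm * thickness_mm ** 2)

print(flexural_strength_mpa(peak_load_n=1400.0))  # ~933 MPa for an illustrative load
```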
In situ tensile testing of specimens in SEM (MINI-MTS2000, Qiyue Technology Co., Ltd., Hangzhou, Zhejiang Province, China) was applied with a loading rate of 5 µm/s. In order to better observe the fracture and crack extension behavior of the SiCf-reinforced Ti/Ti-Al laminated composites, each surface was polished, especially the observation surface, i.e., the cross section of the specimen, which is 10 mm in gauge length (Figure 6). Figure 7a,b present the transverse section of the fabricated SiCf-reinforced Ti/Ti-Al laminated composite. The layers with dark colors are the titanium, while the layers with light color are the formed intermetallic, among which are the scattered SiC fibers (black dots). As described above, two different arrangements were designed in the preform: Ti-Al-SiCf-Al-Ti and Ti-SiCf-Al-SiCf-Ti. Figure 7b shows the structure illustration of the fabricated laminated composite. It can be found that the SiC fibers in Pattern A are almost at equal distances (5.0 ± 0.5 mm) along the laminate direction. This means that the fiber intervals can be guaranteed by using the fiber braid, as shown in Figure 2, while in Pattern B, two layers of fiber braid were placed between the Ti foils (Ti-SiCf-Al-SiCf-Ti). With the melting and reaction of the Al foils, the two layers of fiber braids were pressed together, resulting in SiC fiber aggregation. Both the actual volume fractions of the Ti layer in the Ti/Ti-Al and SiCf-reinforced Ti/Ti-Al laminated composites were approximately 60%, as measured. This is 5% higher than the theoretical volume fraction, which was mainly due to the reaction and mutual diffusion of aluminum and titanium atoms at high temperatures.
Microstructure Characterization
Previous studies [17,18] have demonstrated that, due to the different diffusion coefficients of Ti and Al atoms, Kirkendall holes will emerge when fabricating Ti/Ti-Al laminated composites. Figure 8a,b present microstructures of the Ti/Ti-Al laminated composite, while Figure 8c-f show the microstructures of the SiCf-reinforced Ti/Ti-Al laminated composite. In the Al/Ti diffusion couple, during the net movement of atoms from Al to Ti caused by the different diffusion coefficients, vacancies will form and diffuse from the Ti toward the Al layer [17,18]. It can be observed that obvious Kirkendall holes existed in the Ti/Ti-Al laminated composite (Figure 8b). By comparison, Kirkendall holes in the SiCf-reinforced Ti/Ti-Al laminated composites were much fewer in number (Figure 8d). This may be attributed to the larger exerted pressure (40 MPa) during the fabricating processes, compared to the 10 MPa pressure applied to the Ti/Ti-Al laminated composites. Fiber aggregation can be observed on the Pattern B side (Figure 8f). This means that the "Ti-Al-SiCf-Al-Ti" laminated structure (Pattern A) should be preferred to obtain an equal spacing of the fiber arrangement. Figure 9 shows the X-ray diffraction analysis of the laminated composites. The results suggest that the intermetallic phases contain Ti3Al, TiAl, TiAl2 and TiAl3. Additionally, Al phases were not found in the XRD patterns, which indicates that Al was completely consumed in the sintering process. The point-scanning results are listed in Table 2. These reveal that the corresponding phases are Ti, Ti3Al, TiAl2 and TiAl3, respectively. The results are consistent with previous studies [4,19]. It can also be observed that the concentrations of the elements Ti and Al present significant gradient changes in the SiCf-reinforced Ti/Ti-Al laminated composites (Figure 10b) in contrast with the Ti/Ti-Al laminated composites (Figure 10a). A conclusion can be made that a more stable TiAl3 phase can be obtained with an exerted pressure of 40 MPa in the sintering process. Figure 10c presents the interface of the SiC fiber and the intermetallic matrix. An intact C coating can be found on the SiC fiber surface, which prevents fiber damage in the laminated composites. Meanwhile, a clear thin gray layer (about 1 µm thick) was formed on the Ti-Al intermetallic side (Figure 10c). According to the results of the line scanning in Figure 10c and the point scanning in Table 3 (Spot 12), the reaction layer can be identified as TiC and Al4C3, as demonstrated in previous reports [4,12,20]. However, they were not detected in XRD (Figure 9) due to their low contents. Figure 10c shows holes with short strips around the SiC fibers, and Figure 10b shows that similar morphologies exist in the Ti-Al intermetallic compound layer. This is due to the difference between the Ti and Al atomic diffusion coefficients, resulting in the formation of Kirkendall holes. Table 4 shows the tensile strength and flexural strength of the fabricated laminated composites. Compared with the Ti/Ti-Al laminated composites, the tensile strength of the SiCf-reinforced Ti/Ti-Al laminated composite increases by 60%, while there is no significant increase in flexural strength. The three-point bending test of the SiCf-reinforced Ti/Ti-Al laminated composite shows that the bending strength on the Pattern B side is 40 MPa higher than that on the Pattern A side. This is due to the fact that the volume fraction of SiC fibers in Pattern B is twice that of the Pattern A side, despite there being fiber aggregation on the Pattern B side.
Figure 11 presents the stress-strain curves of the Ti/Ti-Al and SiCf-reinforced Ti/Ti-Al laminated composites in room-temperature uniaxial tension tests. The yield strength at 0.2% deformation of the Ti/Ti-Al laminated composite is 217 MPa and the elastic modulus is 61.17 GPa. The yield strength at 0.2% deformation and the elastic modulus of the SiCf-reinforced Ti/Ti-Al laminated composite are 339 MPa and 101.05 GPa, respectively. Compared to the Ti/Ti-Al laminated composite (with the same volume fraction, 60%), the yield strength at 0.2% deformation of the SiCf-reinforced Ti/Ti-Al laminated composite increases by 122 MPa, while the elastic modulus increases by 65%. This means that by introducing the SiC fibers, both the tensile strength and the resistance to deformation of the laminated composites are significantly improved. Meanwhile, the ultimate elongation of the SiCf-reinforced Ti/Ti-Al laminate composites reaches 1.6%, which is about 14% higher than that of the Ti/Ti-Al laminate composites. Figure 12 shows the force-displacement curves of the SiCf-reinforced Ti/Ti-Al laminated composite obtained through in situ tensile testing. The curve consists of five stages: the elastic deformation stage (I), yield stage (II), delamination stage (III), start of failure stage (IV) and fracture stage (V). The entire failure process of the in situ tensile testing was observed under SEM. Figure 13 describes the detailed deformation morphologies corresponding to different tension force values. Firstly, microcracks initiated in the Ti-Al intermetallic layer when the tensile load ranged from 1002 N to 1102 N, as shown in Figure 13a. As the load continued to increase, the microcracks propagated and merged in the Ti-Al intermetallic layers. Interlayer cracks appeared when the load reached 1109 N. The microcracks and interlayer cracks first appeared in the outer layer of the laminated composite, as shown in Figure 13a,b. When the load reached 1124 N, delamination occurred within the Ti-Al intermetallic layers (Figure 13c). Then the SiCf-reinforced Ti/Ti-Al laminated composite began to fail (Figure 13d). It can be observed that the outer layers failed first and then the inner layers (Figure 13e). Ultimately, the whole specimen fractured completely (Figure 13f). The fractures of the fabricated laminated composites are composed of ductile fracture (Ti layer) and brittle fracture (Ti-Al intermetallic layer). Figures 14 and 15 show the tensile fracture morphologies of the Ti/Ti-Al and SiCf-reinforced Ti/Ti-Al laminate composites. It can be seen that delamination was mainly generated in the Ti-Al intermetallic layer of the prepared laminated composites (Figures 14a and 15a). Both trans-granular and intergranular fractures were observed (Figure 14c). Additionally, secondary cracks (Figure 14b) emerged in the brittle Ti-Al intermetallic layer, while the Ti layer was closely combined with the Ti-Al intermetallic layer, and there is no delamination in the Ti layer (Figures 14e and 15e).
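The yield and modulus values above are the quantities conventionally extracted from a stress-strain record. A minimal sketch of such an extraction is given below, assuming strain and stress arrays and a linear elastic region below 0.2% strain; it is not the evaluation procedure used by the authors.

```python
# Minimal sketch (assumed data layout): elastic modulus from the initial linear
# portion of a stress-strain curve and the 0.2%-offset yield strength.
import numpy as np

def modulus_and_yield(strain: np.ndarray, stress_mpa: np.ndarray,
                      elastic_limit: float = 0.002, offset: float = 0.002):
    # Elastic modulus: least-squares slope over the assumed linear region.
    lin = strain <= elastic_limit
    E = np.polyfit(strain[lin], stress_mpa[lin], 1)[0]  # MPa per unit strain

    # 0.2% offset line: stress = E * (strain - offset); the yield strength is
    # taken at the first crossing of this line with the measured curve.
    offset_line = E * (strain - offset)
    idx = np.argmax(stress_mpa <= offset_line)  # index of the first crossing
    return E / 1000.0, stress_mpa[idx]          # (modulus in GPa, yield in MPa)
```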
Mechanical Properties
Fiber reinforcement consists of fiber pullout, fiber debonding and fiber fracture. In this experiment, it can be seen that fiber fracture was the main failure mode. Debonding of SiC fibers can also be observed at the interface between the fractured SiC fiber and the intermetallic matrix (Figure 15b). No pull-out failure mode was found at the fracture section, which can be ascribed to the high interface bonding strength. The excessive interface bonding strength between the fibers and the matrix may limit the enhancement effect of the fibers to some extent, consistent with the results in the literature [21,22]. There was no damage of the fibers in the tensile experiment (Figure 15c,f). Meanwhile, the interface between the Ti layer and the Ti-Al intermetallic layer was well integrated, which further verified that with the higher exerted pressure, a more compact structure with fewer Kirkendall holes can be obtained. The conclusion can be drawn that with the introduction of the fibers, the fracture resistance of the SiCf-reinforced Ti/Ti-Al laminated composites is further improved. Figure 18 shows the bending crack morphologies of the fabricated laminated composites. Firstly, the microcracks in the Ti-Al intermetallic layers initiated parallel to the loading direction (Figure 18b,e,g). Subsequently, the microcracks continued to spread to the Ti layer. Because of the obstruction of the ductile titanium layer, the crack tip blunted and did not continue to spread in the Ti layer (Stage I, Figure 18c,e,g). With the increase in load (Stage II), more microcracks were generated. With the microcracks' growth and merging, long cracks parallel to the laminate structure emerged. These cracks, propagating in the Ti-Al intermetallic layers, led to the delamination of the laminated composites. The propagation path of the cracks was long and zigzagged (Stage III, Figure 18a,d,f), which indicates better absorption of fracture energy.
Conclusions
Ti/Ti-Al and SiCf-reinforced Ti/Ti-Al laminated composites were fabricated through vacuum hot pressing, and the effects of the SiC fibers and the laminated structure on the properties of Ti-Al laminated composites were studied. The main conclusions are presented as follows:
1. Equal spacing of the fibers could be guaranteed for SiCf-reinforced Ti/Ti-Al laminated composites prepared by a novel method of fiber weaving. No other elements were introduced to contaminate the composites.
2. With the higher exerted pressure, a more compact structure with fewer Kirkendall holes could be obtained in SiCf-reinforced Ti/Ti-Al laminated composites.
3. SiCf-reinforced Ti/Ti-Al laminated composites had a tensile strength of 400 ± 10 MPa and a flexural strength of 900-950 MPa. Compared to the Ti/Ti-Al laminates, the tensile strength increased by 60%, while the ultimate elongation reached 1.6% (an increase of about 14%). The flexural strength did not change much (the Ti/Ti-Al laminate composites had a flexural strength of 923 ± 10 MPa). The tensile properties of the laminated composites could be effectively improved by introducing the SiC fibers, while the bending properties were not obviously influenced due to the small volume fraction of fibers.
4. The deformation behavior and fracture mechanisms of SiCf-reinforced Ti/Ti-Al laminated composites were obtained through in situ tensile tests. Microcracks first occurred in the Ti-Al intermetallic layer. With the growth and merging of microcracks, interlayer cracks formed in the Ti-Al intermetallic layer along the load direction.
Complete classification of algebras of level two
The main result of the paper is the classification of all (nonassociative) algebras of level two, i.e. algebras such that the maximal chains of nontrivial degenerations starting at them have length two. In the course of this classification we obtain an estimate of the level of an algebra in terms of its generation type, i.e. the maximal dimension of its one-generated subalgebras. We also describe all degenerations and the levels of algebras of generation type $1$ with a square zero ideal of codimension $1$.
INTRODUCTION
The paper is devoted to the classification of algebras of a given level. Algebras in this paper are not assumed to be associative. All algebra structures on a given linear space form an algebraic variety with a natural action of the general linear group. Orbits under this action correspond to isomorphism classes of algebras. Algebras satisfying some set of polynomial identities constitute a closed subvariety that is stable under the mentioned action. There are many papers considering the structure of such subvarieties. One of the main problems in this direction is the description of irreducible components. This problem is called the geometric classification of algebras. Examples of geometric classifications in some classes of algebras can be found, for example, in [1,2,4,11,14,16,17,21].
Another important notion that is used in the description of varieties of algebras is the degeneration. One algebra degenerates to another if the closure of the orbit of the first algebra contains the second one. The description of degenerations helps to describe the irreducible components. For example, if the variety has only a finite number of orbits, then any irreducible component is the closure of an orbit of a rigid algebra, and an algebra is rigid in this case iff there is no nontrivial degeneration to it. On the other hand, degenerations are interesting in themselves. There are some papers where the degeneration graph is constructed for some variety (see, for example, [1,2,4,11,15-18,21]). The notion of a degeneration is closely related to the notions of a contraction and of a deformation.
The notion of the level of an algebra was introduced in [8]. The algebra under consideration has level n if there is a chain of n nontrivial degenerations that starts at the given algebra and there is no such chain of length n + 1. Roughly speaking, the level estimates the complexity of the multiplication of the given algebra. For example, the unique algebra of level zero is the algebra with zero multiplication, and an algebra has level one if the closure of its orbit is formed by the zero algebra and the orbit itself. At this moment there are not many results about the levels of algebras. Anticommutative algebras of the first level were classified in [8], but the classification of all algebras of the first level presented there turned out to be incorrect. Later the algebras of the first level were classified in [20] (see also [13]). In [9] the author introduced the notion of the infinity level. The infinity level can be expressed in terms of the usual level, and hence the classification of algebras with a given infinity level is much easier than the classification of algebras with a given level. Anticommutative algebras of the second infinity level were classified in [9]. The author made an attempt to classify the anticommutative algebras of the third infinity level in the same paper, but the obtained classification is wrong and cannot be taken into account. Finally, associative, Lie, Jordan, Leibniz and nilpotent algebras of level two were classified in [5,19].
In the current paper we try to develop a way to classify algebras of small levels. Inspired by the paper [20], we first estimate the level of an algebra via its generation type, i.e. the maximal dimension of its one-generated subalgebras. We prove that the level of an algebra is not less than its generation type in the case where the generation type is greater than or equal to 3. This estimate is very rough, but it is enough for the classification of algebras of small levels. Further, we consider different classes of algebras of generation types one and two and estimate their levels with the help of standard Inönü-Wigner contractions.
The first type of algebras that we consider is formed by algebras of the generation type 1 with a square zero ideal of codimension 1. The anticommutative portion of such algebras was considered in [10], where they were called almost abelian Lie algebras. Some examples of degenerations between such algebras were given there. In the current paper we describe all degenerations between algebras of the generation type 1 with a square zero ideal of codimension 1 and give an explicit formula for the level of such an algebra. Algebras of this type of the first five levels are given in Tables 1-3.
Then we consider algebras of generation type 1 whose standard Inönü-Wigner contractions with respect to one-dimensional subalgebras have levels not greater than one. It turns out that it is not difficult to classify such algebras. Except for the Heisenberg Lie algebras and one algebra of level 2, all such algebras have a level not greater than 1 and, moreover, have a square zero ideal of codimension 1. This allows us to classify the algebras of generation type 1 having the second level. Note that all anticommutative algebras have generation type 1. Thus, we recover the valid part of the results of [9].
In the remaining part of the paper we consider algebras of generation type 2. We give a criterion for a trivial extension of a 2-dimensional algebra of generation type 2 to have generation type 2. Then we classify such trivial extensions of level 2. Finally, we consider algebras with an ideal isomorphic to the unique algebra of generation type 2 of the first level. All such algebras have degenerations to algebras of nonantisymmetric bilinear forms. We estimate the level of an algebra of a bilinear form, classify all algebras whose nonantisymmetric bilinear form degenerations all have level 1, and estimate the levels of these algebras. As a result, we obtain the classification of all algebras of level 2. In particular, we recover and correct the results of [19]. It is interesting that all the anticommutative algebras of the second level are Lie algebras and all the alternative algebras of the second level are associative.
PRELIMINARIES
In this section we introduce some notation and recall some well-known definitions and results that we will need in this work. Note that all the algebras used in this paper are defined in Section 6 at the end of the paper. In all multiplication tables given there we assume that all omitted products of basis elements are zero. We will be free to use the notation of Section 6 throughout the paper.
2.1. Degenerations. All vector spaces in this paper are over some fixed algebraically closed field k and we write simply $\dim$, $\mathrm{Hom}$ and $\otimes$ instead of $\dim_k$, $\mathrm{Hom}_k$ and $\otimes_k$. An algebra in this paper is simply a vector space with a bilinear binary operation. This operation does not have to be associative, unlike the usual notion of an algebra.
Let V be an n-dimensional space. Then the set of n-dimensional algebra structures on V is $A_n = \mathrm{Hom}(V \otimes V, V) \cong V^* \otimes V^* \otimes V$. Any n-dimensional algebra can be represented by some element of $A_n$. Two algebras are isomorphic iff they can be represented by the same structure. Moreover, sometimes we will identify a structure from $A_n$ and an algebra represented by it. The set $A_n$ has the structure of the affine variety $k^{n^3}$. There is a natural action of the group $GL(V)$ on $A_n$ defined by the equality $(g * \mu)(x \otimes y) = g\mu(g^{-1}x \otimes g^{-1}y)$ for $x, y \in V$, $\mu \in A_n$ and $g \in GL(V)$. Two structures represent the same algebra iff they belong to the same orbit.
Let A and B be n-dimensional algebras. Suppose that $\mu, \chi \in A_n$ represent A and B respectively. We say that A degenerates to B and write $A \to B$ if $\chi$ belongs to $\overline{O(\mu)}$. Here, as usual, $O(X)$ denotes the orbit of X and $\overline{X}$ denotes the closure of X. We also write $A \not\to B$ if $\chi \not\in \overline{O(\mu)}$. We say that the degeneration $A \to B$ is proper if $A \not\cong B$. Whenever an n-dimensional space named V appears in this paper, we assume that there is some fixed basis $e = (e_1, \dots, e_n)$ of V. In this case, for $\mu \in A_n$, we denote by $\mu_{i,j}^k$ ($1 \le i, j, k \le n$) the structure constants of $\mu$ in the basis e, i.e. the scalars from k such that $\mu(e_i, e_j) = \sum_{k=1}^{n} \mu_{i,j}^k e_k$. To prove degenerations and nondegenerations we will use the same technique that has already been used in [21] and [16-18]. In particular, we will be free to use [16, Lemma 1] and facts that easily follow from it. This lemma asserts the following fact. If $A \to B$, $\mu \in A_n$ represents A, and there is a closed subset $R \subset A_n$ invariant under lower triangular transformations of the basis $e_1, \dots, e_n$ such that $\mu \in R$, then there is a structure $\chi \in R$ representing B. Invariance under lower triangular transformations of the basis $e_1, \dots, e_n$ means that if $\omega \in R$ and $g \in GL(V)$ has a lower triangular matrix in the basis $e_1, \dots, e_n$, then $g * \omega \in R$ (see [16] for a more detailed discussion).
To prove degenerations, we will use the technique of contractions. Namely, let $\mu, \chi \in A_n$ represent A and B respectively. Suppose that there are some elements $E_i^t \in V$ ($1 \le i \le n$, $t \in k^*$) such that $E_1^t, \dots, E_n^t$ is a basis of V for any $t \in k^*$ and the structure constants of $\mu$ in this basis are $\mu_{i,j}^k(t)$ for some polynomials $\mu_{i,j}^k(t) \in k[t]$. If $\mu_{i,j}^k(0) = \chi_{i,j}^k$ for all $1 \le i, j, k \le n$, then $A \to B$. To emphasize that the parametrized basis $E^t = (E_1^t, \dots, E_n^t)$ gives a degeneration between algebras represented by the structures $\mu$ and $\chi$, we will write $\mu \xrightarrow{E^t} \chi$. Usually we will simply write down the parametrized basis explicitly above the arrow.
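As an illustration of this technique (not taken from the paper), the sketch below applies a parametrized basis to the 2-dimensional structure with $e_1e_1 = e_2$ and checks that the structure constants tend to zero as $t \to 0$, so the algebra degenerates to the algebra with zero multiplication.

```python
# Illustrative sketch: verifying a degeneration mu -> chi via a parametrized
# basis E_i^t, for the 2-dimensional algebra with e1*e1 = e2.
import sympy as sp

t = sp.symbols('t')
n = 2
# structure constants mu[i][j][k]: e_i e_j = sum_k mu[i][j][k] e_k
mu = [[[0, 1], [0, 0]], [[0, 0], [0, 0]]]    # e1*e1 = e2, all other products zero
g = sp.Matrix([[t, 0], [0, t]])              # column i holds E_i^t in the basis e; here E_i^t = t*e_i
g_inv = g.inv()

def transformed_constants(mu, g, g_inv):
    """Structure constants of mu in the parametrized basis E_1^t, ..., E_n^t."""
    new = [[[sp.S(0)] * n for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # product E_i^t * E_j^t expanded in the original basis ...
            prod = [sum(g[a, i] * g[b, j] * mu[a][b][k]
                        for a in range(n) for b in range(n)) for k in range(n)]
            # ... and rewritten in the parametrized basis
            for k in range(n):
                new[i][j][k] = sp.simplify(sum(g_inv[k, c] * prod[c] for c in range(n)))
    return new

limits = [[[sp.limit(c, t, 0) for c in row] for row in plane]
          for plane in transformed_constants(mu, g, g_inv)]
print(limits)   # all zero: the algebra degenerates to the zero multiplication
```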
An important role in this paper will be played by a particular case of a degeneration called a standard Inönü-Wigner contraction (see [12]). We will call it an IW contraction for short. Suppose that $A_0$ is an (n - m)-dimensional subalgebra of the n-dimensional algebra A and $\mu \in A_n$ is a structure representing A such that $A_0$ corresponds to the subspace $\langle e_{m+1}, \dots, e_n \rangle$ of V. Then $\mu \xrightarrow{(te_1, \dots, te_m, e_{m+1}, \dots, e_n)} \chi$ for some $\chi \in A_n$, and the algebra B represented by $\chi$ is called the IW contraction of A with respect to $A_0$. The isomorphism class of the resulting algebra does not depend on the choice of the structure $\mu$ satisfying the condition stated above. The algebra B always has an ideal $I \subset B$ and a subalgebra $B_0 \subset B$ such that $B = B_0 \oplus I$ as a vector space, $I^2 = 0$ and $B_0 \cong A_0$ as an algebra. We will call an algebra of such a form a trivial singular extension of $A_0$ by $k^m$.
2.2. 1-generated algebras. Let us now discuss some facts about subalgebras generated by one element of an algebra.
Definition 2.1. Let A be an n-dimensional algebra. For $a \in A$, we denote by $A(a)$ the subalgebra of A generated by a. The generation type of A is the dimension of a maximal 1-generated subalgebra of A, i.e. the number $G(A)$ defined by the equality $G(A) = \max_{a \in A} \dim A(a)$. Let us now choose some structure $\mu \in A_n$ representing A. It induces a $k[x_1, \dots, x_n]$-algebra structure. For two n-tuples of polynomials in n variables $f(x_1, \dots, x_n) = \bigl(f_1(x_1, \dots, x_n), \dots, f_n(x_1, \dots, x_n)\bigr) \in (k[x_1, \dots, x_n])^n$ and $g(x_1, \dots, x_n) = \bigl(g_1(x_1, \dots, x_n), \dots, g_n(x_1, \dots, x_n)\bigr) \in (k[x_1, \dots, x_n])^n$, we define the n-tuple $(f \star_\mu g)(x_1, \dots, x_n) = \bigl((f \star_\mu g)_1(x_1, \dots, x_n), \dots, (f \star_\mu g)_n(x_1, \dots, x_n)\bigr) \in (k[x_1, \dots, x_n])^n$ by the equality $(f \star_\mu g)_k = \sum_{1 \le i, j \le n} \mu_{i,j}^k f_i g_j$ for $1 \le k \le n$. Let us recall that the Catalan numbers are defined by the equality union of the sets $S_i$ ($i \ge 1$). Moreover, the formula above guarantees that there exists some bijection $F_i$. We fix a family of such bijections in this paper and, for $m \in S_i$, we denote by $l_m$ and $r_m$ the integers such that $F_i(m) = (l_m, r_m)$. Thus, for any $m > 1$ we have defined two integers $1 \le l_m, r_m < m$. Note that we can choose the bijections $F_i$ ($i \ge 0$) in such a way that $m \le l$ if $l_m \le l_l$ and $r_m \le r_l$. We will assume everywhere that the chosen maps $F_i$ satisfy this property.
Now, for a structure $\mu \in A_n$, we define the n-tuples $f^{\mu,i}(x_1, \dots, x_n) = \bigl(f^{\mu,i}_1(x_1, \dots, x_n), \dots, f^{\mu,i}_n(x_1, \dots, x_n)\bigr) \in (k[x_1, \dots, x_n])^n$ by induction on $i \ge 1$ in the following way. Firstly, we define $f^{\mu,1}_j(x_1, \dots, x_n) = x_j$ for all $1 \le j \le n$. If $i > 1$ and $f^{\mu,j}$ are defined for all $1 \le j \le i - 1$, then we set $f^{\mu,i} = f^{\mu,l_i} \star_\mu f^{\mu,r_i}$. Now, it is clear that the vector $v = \sum_{j=1}^{n} \alpha_j e_j \in V$ generates a subalgebra that is generated as a linear space by the vectors $\sum_{j=1}^{n} f^{\mu,i}_j(\alpha_1, \dots, \alpha_n)e_j$ for $i \ge 1$.
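A purely numerical way to see what $G(A)$ measures is sketched below: it closes the span of a single vector under the product. This is only a rough illustration over the real numbers with a dense structure-constant array as an assumed input, not the polynomial-tuple machinery used in the paper.

```python
# Rough numerical sketch: dim A(a), the dimension of the subalgebra generated
# by a vector a, obtained by closing the span of a under the product.
import numpy as np

def dim_generated_subalgebra(mu: np.ndarray, a: np.ndarray, tol: float = 1e-9) -> int:
    """mu[i, j, k] are the structure constants; a is a vector of the algebra."""
    basis = [a / np.linalg.norm(a)]
    changed = True
    while changed:
        changed = False
        for u in list(basis):
            for v in list(basis):
                w = np.einsum('i,j,ijk->k', u, v, mu)   # product u * v
                # project w onto the orthogonal complement of the current span
                for b in basis:
                    w = w - np.dot(w, b) * b
                if np.linalg.norm(w) > tol:
                    basis.append(w / np.linalg.norm(w))
                    changed = True
    return len(basis)

mu = np.zeros((2, 2, 2)); mu[0, 0, 1] = 1.0              # the algebra with e1*e1 = e2
print(dim_generated_subalgebra(mu, np.array([1.0, 0.0])))  # -> 2
```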
We call an n-dimensional algebra A standard 1-generated if it has a grading $A = \oplus_{i \ge 1} A_i$ such that $\dim A_1 = 1$, $A_iA_j \subset A_{i+j}$ for all integer numbers i and j, and A is generated by $A_1$ as an algebra. It is easy to see that $G(A) = \dim A$ for a standard 1-generated algebra A. Note that $A_3$ is the unique standard 1-generated 2-dimensional algebra structure.
2.3. Partitions. In Section 4 we will describe the degenerations of algebras with generation type 1 and a square zero ideal of codimension 1. For this purpose we need the notion of a partition and some facts about it. Note that the notion of a partition was already applied to the study of the variety of nilpotent matrices in [6]. For more detailed information on partitions we refer the reader to [3,7]. Let us recall that a partition of the integer number n of length l is a sequence $a_1, \dots, a_l$ such that $a_1 \ge a_2 \ge \dots \ge a_l > 0$ and $\sum_{i=1}^{l} a_i = n$. In this case we set $\operatorname{len}(a) = l$. We denote by $\mathrm{par}_n$ the set of all partitions of n. We also introduce $\mathrm{par}_* = \cup_{n \ge 1} \mathrm{par}_n$. For convenience, we set $a_i = 0$ for $i > \operatorname{len}(a)$. Let us define the so-called dominance order $\succ$ on the set $\mathrm{par}_n$.
It is easy to see that ≻ is a partial order on par n . Given a, b ∈ par n , we say that b is a preceding partition for a if a ≻ b and there is no c ∈ par n such that a ≻ c ≻ b. We denote by a − the set of all preceding partitions for a. The following lemma is proved in [3].
For $a \in \mathrm{par}_n$, the set $a^-$ is formed by partitions from the following two sets. For a partition $a \in \mathrm{par}_n$, we will denote by $\operatorname{lev}(a)$ the maximal number m such that there exist $a_0, \dots, a_{m-1} \in \mathrm{par}_n$ satisfying $a \succ a_{m-1} \succ a_{m-2} \succ \dots \succ a_0$. In other words, $\operatorname{lev}(a)$ can be defined by induction in the following way. If $a^- = \emptyset$, then $\operatorname{lev}(a) = 0$; in the opposite case $\operatorname{lev}(a) = 1 + \max_{b \in a^-} \operatorname{lev}(b)$. Also we will need the sum operation on the set $\mathrm{par}_*$. Given partitions $a \in \mathrm{par}_n$ and $b \in \mathrm{par}_m$, we define their sum $a + b \in \mathrm{par}_{n+m}$ by the equality $(a + b)_i = a_i + b_i$ for $i \ge 1$. As usual, the notion of a sum makes sense for any finite family of partitions.
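Assuming the standard characterization of the dominance order by partial sums, the recursion for $\operatorname{lev}(a)$ can be computed directly; the following sketch is only an illustration of the definition.

```python
# Sketch: dominance order via partial sums and lev(a) as the length of the
# longest strictly descending chain below a among partitions of the same n.
from functools import lru_cache

def dominates(a: tuple, b: tuple) -> bool:
    """a >= b in the dominance order on partitions of the same number n."""
    pa = pb = 0
    for i in range(max(len(a), len(b))):
        pa += a[i] if i < len(a) else 0
        pb += b[i] if i < len(b) else 0
        if pa < pb:
            return False
    return True

def partitions(n, largest=None):
    """All partitions of n as nonincreasing tuples."""
    largest = n if largest is None else largest
    if n == 0:
        yield ()
        return
    for first in range(min(n, largest), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

@lru_cache(maxsize=None)
def lev(a: tuple) -> int:
    n = sum(a)
    below = [b for b in partitions(n) if b != a and dominates(a, b)]
    return 0 if not below else 1 + max(lev(b) for b in below)

print(lev((4,)), lev((1, 1, 1, 1)))   # 4 and 0 for partitions of 4
```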
2.4. Matrices and their full specters. Let $\lambda_1, \dots, \lambda_l$ be all the distinct eigenvalues of the matrix $M \in M_n(k)$ and let $a^i_1, \dots, a^i_{\operatorname{len}(a^i)}$ be the nonincreasing sequence of the sizes of the Jordan blocks corresponding to $\lambda_i$ ($1 \le i \le l$) that contains each size as many times as M has blocks of the corresponding size. In other words, $a^i \in \mathrm{par}_{k_i}$, where $\sum_{t \ge 1} \min(a^i_t, p) = \dim \operatorname{Ker}(M - \lambda_i E)^p$ for any $p \ge 1$. Here E denotes the identity $n \times n$ matrix. Note also that $\sum_{i=1}^{l} k_i = n$ and $k_i \ge 1$ for any $1 \le i \le l$ by definition. We denote the set $\{(\lambda_i, a^i)\}_{i=1}^{l} \subset k \times \mathrm{par}_*$ by $FS(M)$ and call it the full specter of the matrix M. We denote the set $\{FS(M) \mid M \in M_n(k)\}$ of all possible full specters of $n \times n$ matrices by $FS_n$. The group $k^*$ acts on $FS_n$ by the equality $\alpha * \{(\lambda_i, a^i)\}_{i=1}^{l} = \{(\alpha\lambda_i, a^i)\}_{i=1}^{l}$. It is well known that there is a one-to-one correspondence between the set $FS_n$ and the set of conjugacy classes of $n \times n$ matrices. It is easy to see also that there is a one-to-one correspondence between the set $FS_n/k^*$ and the set $M_n(k)/\bigl(k^* \times GL_n(k)\bigr)$. The action of $k^* \times GL_n(k)$ on $M_n(k)$ is defined by the equality $(\alpha, U) * M = \alpha U M U^{-1}$ for $\alpha \in k^*$, $U \in GL_n(k)$, and $M \in M_n(k)$. Here and further, for a set X and a group G acting on it, X/G denotes the set of orbits under this action.
Let us introduce, for an integer l ≥ 1, an l-tuple (i 1 , . . . , i l ) of nonnegative integers, and l-tuple (λ 1 , . . . , λ l ) of elements of k, the matrix J i1,...,i l (λ 1 , . . . , λ l ) by the equality In other words, J i1,...,i l (λ 1 , . . . , λ l ) is the m i in the direct product k m . Note also that the group k * acts on k m by multiplications. These actions commute and both of them stabilize the zero point. We choose one representative in each orbit of k m \ (0, . . . , 0) under the action of S m1 × · · · × S m k × k * and form a set that we denote by K * m1,...,m k . Also we choose one representative in each orbit of k m under the action of S m1 × · · · × S m k and form a set that we denote by K m1,...,m k .
a i . It is not difficult to see that one can choose in a unique We define M (S) = M b,α1,...,α b 1 in this case. If at least one of the scalars λ i is nonzero, then we also can choose in a unique way α ∈ k * and α 1 , . . . , ..,b1−b2 . Then we define M (S) = M b,α1,...,α b 1 , whereS denotes the class of S in F S n /k * . Note that F S M (S) = S and the class of F S M (S) in F S n /k * equalsS.
GENERATION TYPE AND LEVEL
In this section we will show that the notions of the generation type and of the level are closely related, in the sense that the level of an algebra can be estimated using its generation type. Though the observations of this section are not very surprising, they play a crucial role in our approach to the classification of algebras of low levels. In fact, this approach is inspired by [20], where, as the first step of the proof, the authors consider the algebras of generation type greater than 1.
Proof. Let us fix some structure µ ∈ A n . It follows directly from our definitions that G(µ) ≤ m iff the rank of the matrix . . .
is less or equal to m for all α 1 , . . . , α n ∈ k and all l ≥ 0. It is clear that for a fixed number l this condition is equivalent to some system of polynomial equations in µ k i,j (1 ≤ i, j, k ≤ n). Really, the condition rank(M µ l ) ≤ m is equivalent to the fact that all minors of the dimension m are zero. This gives us a system of polynomial equations in α i and µ k i,j (1 ≤ i, j, k ≤ n). But, since the required equalities have to hold for all α 1 , . . . , α n ∈ k, we get polynomial equations in µ k i,j (1 ≤ i, j, k ≤ n). Thus, the set The level of the n-dimensional algebra A is the maximal number m such that there exists a sequence of nontrivial . The level of A is denoted by lev(A). Now we want to find the minimal value of the level of an algebra with a given generation type. The next lemma shows that standard 1-generated algebras play a significant role in this problem. Proof. Let us represent A by a structure µ such that e 1 , . . . , e m corresponds to a subalgebra generated by e 1 . It is clear that f µ,i l (1, 0, . . . , 0)e l for i ≥ 1. In particular, v 1 = e 1 . We have v i i≥1 = e 1 , . . . , e m . Then we can choose 1 = i 1 < i 2 < · · · < i m such that v i l (1 ≤ l ≤ m) are linearly independent and, for any 1 ≤ l < m and i l < i < i l+1 , the vector v i belongs to v i1 , . . . , v i l . Let us now choose d 1 , . . . , d m such that i l ∈ S d l for all 1 ≤ l ≤ m. Let us consider the parametrized basis defined by the equalities E t l = t d l v i l for 1 ≤ l ≤ m and E t l = t dm e l for m + 1 ≤ l ≤ n. It is clear from our definitions that, It is clear that χ(e i , e j ) = χ(e j , e i ) = 0 for 1 ≤ i ≤ n and m + 1 ≤ j ≤ n, and e 1 , . . . , e m is a subalgebra of χ. It remains to show that the restriction of χ to e 1 , . . . , e m represents an m-dimensional standard 1-generated algebra. Let us define the grading on U = e 1 , . . . , e m by the equality U d = e k | d k = d . It is clear from the formula above that χ(e k , e l ) = 1≤j≤m,dj =d k +d l α j s e j for s = F −1 d k +d l (i k , i l ), and hence χ respects the grading on U . It is clear that U 1 = e 1 is 1-dimensional, and thus it remains to show that e 1 generates U with respect to the structure χ. Let us show using induction on 1 ≤ l ≤ m that e 1 , . . . , e l lies in the subalgebra generated by e 1 with respect to χ. Suppose that the assertion is true for some l < m. It is clear that , i.e. e l+1 = χ(e p , e q ) belongs to the subalgebra generated by e 1 with respect to χ. Consequently, the lemma is proved.
The next result estimates the minimal possible level of a standard 1-generated algebra. This estimation is rough, but it is sufficient for the classification of algebras of low levels.
Proof. By our assumption, A has a grading A = ⊕ i≥1 A i such that dim A 1 = 1 and A is generated by A 1 as an algebra. Let us choose a homogeneous basis e 1 , . . . , e n of A such that the degree of e i is less or equal than the degree of e j if i < j. It is easy Thus, it is enough to prove the assertion of the lemma for n = 3. It is easy to show that any standard 1-generated algebra of dimension 3 can be represented
Lemmas 3.3 and 3.5 show that, for any algebra
Remark 3.6. One can show that lev(A) ≥ 5 for a standard 1-generated algebra of dimension 4. Thus, one can show, analogously to the proof of Lemma 3.5, that lev(A) ≥ n + 1 for a standard 1-generated algebra of dimension n ≥ 4. It would be interesting to obtain a good estimate for the level of a standard 1-generated algebra of dimension n; for example, it is interesting whether this estimate is linear or not.
THE VARIETY T n
In this section we introduce the variety T n and study its algebraic and geometric properties. This variety is formed by algebras with the generation type 1 and is important in the study of such algebras, because any IW contraction of an algebra with the generation type 1 with respect to a 1-generated subalgebra belongs to T n .
4.1. Definition and algebraic description of $T_n$. An n-dimensional algebra with a square zero ideal of codimension 1 is an algebra that can be presented by a structure $\mu \in A_n$ such that (1) $\mu(e_i, e_j) = 0$ and $\mu(e_1, e_i), \mu(e_i, e_1) \in \langle e_2, \dots, e_n \rangle$ for $2 \le i, j \le n$.
Proof. It is easy to see that µ if α ∈ k satisfies the conditions listed in the lemma. Suppose now that G(A) = 1. It is clear that µ(e 1 , e 1 ) = αe 1 for some α ∈ k in this case. Since e 1 + e i and µ(e 1 + e i , e 1 + e i ) = αe 1 + µ(e 1 , e i ) + µ(e i , e 1 ) have to be linearly dependent, we get µ(e 1 , e i ) + µ(e i , e 1 ) = αe i for any 2 ≤ i ≤ n. ✷ Let us denote by T n the subset of A n formed by structures representing algebras of the generating type 1 with a square zero ideal of codimension 1. It is well known and easy to see that the set of structures representing n-dimensional algebras with a square zero ideal of codimension 1 is a closed subset of A n . Thus, it follows from Lemma 3.1 that T n is a closed subset of A n .
Let M = (M i,j ) 2≤i,j≤n be an (n − 1) × (n − 1) matrix. We define T M α ∈ A n for α ∈ k in the following way: one of the following conditions holds: Proof. It follows from Lemma 4.1 that any structure from T n lies in O(T M α ) for some M ∈ M n−1 (k) and α ∈ k. If α = 0, then it is easy to see that the structure constants of T M α in the basis e1 α , e 2 , . . . , e n are the same as the structure constants of T M α 1 . Thus, the first assertion is proved.
Then it is easy to see that the matrix of g in the basis e 1 , . . . , e n has the form for some α 2 , . . . , α n ∈ k and U ∈ GL n−1 (k). Then it is easy to see that , then it is easy to see that the matrix of g in the basis e 1 , . . . , e n has the form respectively for S ∈ F S n−1 . The next corollary follows directly from Corollary 4.2.
is a presentation of the variety T n as a disjoint union of orbits under the action of GL n (V ).
Now we collect some facts about the variety T n . Namely, we describe its intersections with well known varieties of algebras.
• The orbits of structures of the form T S 0 (S ∈ F S n−1 ) correspond to solvable Lie algebras. Such an algebra is nilpotent iff S = {(0, a)} for some a ∈ par n−1 . The orbits of structures of the form T S 1 (S ∈ F S n−1 ) are nonsolvable and not anticommutative, and hence not Lie.
4.2. Degenerations in T n . The aim of this subsection is to describe all degenerations in the variety T n . At the end of the subsection we will also give some applications of this description.
Firstly, let us show that all the degenerations in T n are listed in the theorem.
Let us consider the case r = 1. Let us set It is easy to see that R is a closed subset of A n invariant under lower triangular transformations of the basis e 1 , . . . , e n . Since Let us consider two cases.
In this case, by Corollary 4.2, we have s = 0 and S = F S(U ) = {(0, b)} for some b ∈ par n−1 . It remains to prove that As before, we are going to prove the inequality In the case r = 0 we set It is easy to see that R is a closed subset of A n invariant under lower triangular transformations of the basis e 1 , . . . , e n . The rest of the proof in the case r = 0 is analogous to the case r = 1.
Thus, it remains to show that the degenerations listed in the theorem are valid. It suffices to prove only primary degenerations. According to Lemma 2.3 and the statement of the theorem, if T R r → T S s is a primary degeneration, then one of the following conditions holds: In the first case we have For the second case, let us introduce p = max 1≤i≤l len(a i ). We assume for simplicity that len(a 1 ) ≥ len(a 2 ) ≥ · · · ≥ len(a l ).
The next two corollaries follow directly from Theorem 4.4.
Now we can compute the level of an algebra from T n .
Proof. The first assertion follows directly from Corollary 4.5. Let us prove the second assertion using induction on Then, using Corollary 4.6 and the first assertion of this corollary, we get ✷ Example 4.8. Nilpotent algebras of low levels in the variety T n are classified in Table 1. All these algebras are of the form T for some a ∈ par n−1 . This variety is very similar to the variety of (n − 1) × (n − 1) nilpotent matrices described in [6]. In particular, Corollary 4.5 and the first part of Corollary 4.7 can be deduced from the just mentioned paper.
Based on this fact we give classifications of solvable nonnilpotent and nonsolvable algebras of low levels in T n in Tables 2 and 3. is an irreducible variety.
CLASSIFICATION OF ALGEBRAS OF LEVEL TWO
In this section we classify all the algebras of the level 2. Note that the described methods can be extended to the study of algebras of higher levels and the obtained results give a reasonable part of the classification of algebras of the level 3.
5.1. Algebras of low levels with generation type 1. The goal of this subsection is to classify all algebras of the level 2 with generation type 1. But first let us recall the classification of algebras of the level 1. For algebras over the field C this classification can be found in [20]. For algebras over infinite fields the same result can be proved completely analogously or can be found in [13]. However, we give a short proof here for the convenience of the reader.
Proposition 5.1. Let n ≥ 2 be an integer. Then any structure in A n corresponding to an algebra of the level 1 lies in the orbit of exactly one of the structures A 3 ⊕ k n−2 , n 3 ⊕ k n−3 , p − or ν α (α ∈ k).
Proof. It follows from Examples 4.8 and 4.9 that the union of the orbits of n 3 ⊕ k n−3 , p − and ν α (α ∈ k) is exactly the set of all algebras of the level 1 in T n . It is easy to see that R = {µ ∈ A n | µ k i,j = 0 for (i, j, k) = (1, 1, n)} is a closed subset of A n invariant under lower triangular transformations of the basis e 1 , . . . , e n . The structure A 3 ⊕ k n−2 has the level 1, because its orbit contains all the nontrivial structures from R.
Let now A be an n-dimensional algebra. If G(A) ≥ 2, then it follows from Lemmas 3.3 and 3.5 that A ∼ = B ⊕ k n−2 for some 2-dimensional standard 1-generated algebra B, i.e. A can be represented by A 3 ⊕ k n−2 . If G(A) = 1 and A has nontrivial multiplication, then there exists a ∈ A such that aA + Aa = 0. Then the IW contraction with respect to A(a) has to be an algebra of the level 1 isomorphic to A. Since this contraction belongs to T n , A can be represented by one of the structures n 3 ⊕ k n−3 , p − or ν α (α ∈ k).
✷
Let us now prove a lemma about the structure of algebras with the generation type 1.
Lemma 5.2. Let A be an n-dimensional algebra with G(A) = 1. Then A has an (n − 1)-dimensional subspace U such that a 2 = 0 for all a ∈ U .
Proof. Note that if a 2 = b 2 = 0 for a, b ∈ A, then ab + ba = 0. It is obvious if a and b are linearly dependent. If a and b are linearly independent, then considering (a + αb) 2 = α(ab + ba) we obtain that ab + ba ∈ a + αb for any α ∈ k * , and hence the equality ab + ba = 0 holds. If a, b ∈ A are linearly independent elements such that a 2 = 0 and b 2 = 0, then rescaling them we may assume that a 2 = a and b 2 = b. Now, considering (a+ αb) 2 = a+ α(ab + ba)+ α 2 b = (1 + α)(a+ αb)+ α(ab + ba− a− b) we obtain that ab + ba − a − b ∈ a + αb for any α ∈ k * , and hence ab + ba = a + b. Then (a − b) 2 = 0. Now it is easy to see that we can choose (n − 1) linearly independent square zero elements in A that generate a space U with the required properties.
✷ We are going to extend the method just used for the classification of algebras of the level 1 to classify algebras of the level 2. Thus, we will consider algebras of different generation types separately. In this subsection we consider the algebras of the generation type 1. The classification for this case is given in the proposition below. Note that the (2m + 1)-dimensional algebra structure η m from Table 6 is known as a nondegenerate Heisenberg Lie algebra. Note also that η 1 = n 3 and η 2 is isomorphic to the 5-dimensional Lie algebra structure g 1 that can be found in [11,16]. (4) If n ≥ 5, then lev(A) = 2 iff A can be represented by a structure from the set The rest of this subsection is devoted to the proof of Proposition 5.3. Firstly, let us prove that all the algebras mentioned in the proposition have the level 2. This assertion follows from Examples 4.8 and 4.9 for all the structures, except for the structure k n−2 ⋊ E 4 and η 2 ⊕ k n−5 . Thus, the next lemma finishes the first part of the proof. if µ 2 2,2 = 0 and µ 2 1,2 + µ 2 2,1 = 0, and µ ∈ O (p − ) if µ 2 2,2 = 0, µ 2 1,2 + µ 2 2,1 = 0 and µ 2 1,2 = 0.
On the other hand, a nilpotent Lie algebra A with dim A 2 = 1 can be represented by η l ⊕ k n−2l−1 for some 1 ≤ l < n 2 . Thus, O (η m ⊕ k n−2m−1 ) contains only the orbits listed in the statement of the lemma.
✷ It remains to prove that any algebra with the level 2 and the generation type 1 can be represented by a structure from Proposition 5.3. The main idea of the proof is the following. If the algebra A has generation type 1, then any IW contraction of A with respect to a 1-generated subalgebra is an algebra from T n . Thus, the main step of our proof is the classification of algebras A with lev(A) = 2 and G(A) = 1 such that any IW contraction of A with respect to a 1-dimensional subalgebra has a level not greater than 1. Suppose that A satisfies the just stated conditions. We say that a ∈ A \ {0} is of X-type, where X ∈ {n 3 ⊕ k n−3 , p − } ∪ {ν α } α∈k , if the IW contraction of A with respect to A(a) can be represented by the structure X. If the corresponding IW contraction is trivial, we say that a is of 0-type. We will also write simply n 3 -type instead of n 3 ⊕ k n−3 -type. By Proposition 5.1, any a ∈ A is of 0-type, n 3 -type, p − -type or ν α -type for some α ∈ k.
Lemma 5.5. Let A be an algebra with G(A) = 1 such that any element of A has 0-type, p − -type or ν α -type. Suppose that a, b ∈ A are two linearly independent elements. Then Proof. (1) Suppose that a and b are of p − -type. After a rescaling by nonzero scalars, we may assume that ac Since a and b − a are linearly independent, it is easy to see that b − a cannot have p − -type or ν α -type for some α ∈ k. Thus, b − a is of 0-type.
(2) After a rescaling by nonzero scalars, we may assume that ac = αc + f 1 (c)a, ca = (1 − α)c + f 2 (c)a, bc = c + g 1 (c)b, cb = −c + g 2 (c)b for any c ∈ A, where f 1 , f 2 , g 1 , g 2 ∈ Hom(A, k) satisfy the equalities f 1 (a) = 1 − α, f 2 (a) = α, −g 1 (b) = g 2 (b) = 1. In particular, a + (1 − α) ✷ Corollary 5.6. Suppose that G(A) = 1 and any IW contraction of the n-dimensional algebra A with respect to a 1-dimensional subalgebra has a level not greater than 1. If A does not have an element of n 3 -type, then it either has a level not greater than 1 or can be represented by the structure k n−2 ⋊ E 4 .
Proof. It easily follows from Lemmas 5.2 and 5.5 that A has a basis a 1 , . . . , a n such that a i is of 0-type for i ≥ 3 and either a 2 is of 0-type too or a 1 is of ν 1 -type and a 2 is of p − -type. If a 2 is of 0-type, then it is clear that A is either trivial or of the level 1. In the second case, after rescaling of the elements a 1 and a 2 , we have a 1 a 1 = a 1 , a 1 a 2 = −a 1 + a 2 , a 2 a 1 = a 1 , a 1 a i = a 2 a i = −a i a 2 = a i for 3 ≤ i ≤ n, and all the remaining products of basic elements equal to zero. Changing a 2 by a 1 − a 2 , one can see that A can be represented by k n−2 ⋊ E 4 . ✷ Lemma 5.7. Suppose that G(A) = 1 and any IW contraction of the algebra A with respect to a 1-dimensional subalgebra has a level not greater than 1. If A has an element of n 3 -type, then it can be represented by the structure η m ⊕ k n−2m−1 for some 1 ≤ m < n 2 . Proof. Let us represent A by a structure µ ∈ A n such that the IW contraction of µ with respect to the subalgebra generated by e 1 equals n 3 ⊕ k n−3 . This means that µ(e 1 , e 1 ) = 0, µ(e 1 , e 2 ) = µ 1 1,2 e 1 + e 3 , µ(e 2 , e 1 ) = µ 1 2,1 e 1 − e 3 , µ(e 1 , e i ) = µ 1 1,i e 1 and µ(e i , e 1 ) = µ 1 i,1 e 1 for 3 ≤ i ≤ n. Changing e 3 by e 3 + µ 1,2 e 1 we may assume that µ 1 1,2 = 0. Since µ(e 1 , e 2 ) = e 3 , the basic element e 2 cannot be of 0-type, p − -type or ν α -type for any α ∈ k. Thus, e 2 is of n 3 -type, and hence e 1 e 2 + e 2 e 1 ⊂ e 2 , i.e. µ 1 2,1 = µ 3 2,3 = µ 3 3,2 = 0. Subtracting µ 3 2,i e 1 from e i for i ≥ 4, we may assume that µ(e 1 , e 1 ) = µ(e 2 , e 2 ) = 0, µ(e 1 , e 2 ) = −µ(e 2 , e 1 ) = e 3 , µ(e 1 , e i ) = µ 1 1,i e 1 , µ(e i , e 1 ) = µ 1 i,1 e 1 , µ(e 2 , e i ) = µ 2 2,i e 2 , µ(e i , e 2 ) = µ 2 i,2 e 2 (3 ≤ i ≤ n). Suppose that e 3 is of p − -type. After rescaling e 1 and e 3 , we may assume that µ(e 3 , e 2 ) = e 2 . Then we have µ(e 1 + e 3 , e 2 ) = e 2 + e 3 and µ(e 1 + e 3 , e 3 ) = −(e 1 + e 3 ) + e 3 . Thus, the IW contraction of µ with respect to the subalgebra generated by e 1 + e 3 is not of the first level. The obtained contradiction shows that e 3 cannot be of p − -type. Analogously, e 3 cannot be of ν α -type for any α ∈ k. Then e 3 is either of 0-type or of n 3 -type and, in particular, µ(e 1 , e 3 ) = µ(e 3 , e 1 ) = µ(e 2 , e 3 ) = µ(e 3 , e 2 ) = 0. Now, for i ≥ 4, the argument as above shows that if e i is of p − -type or of ν α -type for some α ∈ k, then the IW contraction of µ with respect to the subalgebra generated by e 1 + e i is not of the first level. Thus, any element of A is either of 0-type or of n 3 -type, and thus µ 1 1,i = µ 1 i,1 = µ 2 2,i = µ 2 i,2 = 0 for any i ≥ 3. In particular, a 2 = 0 for any a ∈ A, i.e. A is anticommutative. Suppose that e 3 is of n 3 -type. Then we may assume that e 4 e 3 = 0. Since (e 1 + e 4 )e 2 = e 3 , the element e 1 + e 4 has to be of n 3 -type, and hence e 4 e 3 = (e 1 + e 4 )e 3 = α(e 1 + e 4 ) for some α ∈ k. Since e 3 is of n 3 -type, we have α = 0. The obtained contradiction shows that e 3 has to be of 0-type. Moreover, since any element of A is either of 0-type or of n 3 -type, we have µ(e 3 , e i ) = µ(e i , e 3 ) = 0 for all 1 ≤ i ≤ n. Suppose that there exist 4 ≤ i, j ≤ n such that µ(e i , e j ) ∈ e 3 . We may assume that i = 4 and j = 5. Since (e 1 + e 4 )e 2 = e 3 and e 1 + e 4 has to be of n 3 -type, we have e 4 e 5 = (e 1 + e 4 )e 5 = α 1 (e 1 + e 4 ) + β 1 e 3 for some α 1 ∈ k * and β 1 ∈ k. 
Analogously, considering e 2 + e 5 , we get e 4 e 5 = e 4 (e 2 + e 5 ) = α 2 (e 2 + e 5 ) + β 2 e 3 for some α 2 ∈ k * and β 2 ∈ k. This contradicts the linear independence of e 1 + e 4 , e 2 + e 5 and e 3 . Thus, µ(e i , e j ) ∈ ⟨e 3 ⟩ for any 1 ≤ i, j ≤ n. Thus, A is an anticommutative nilpotent algebra with dim A 2 = 1. The statement of the lemma easily follows from this fact. ✷ Proof of Proposition 5.3. It remains to prove that any algebra of the level 2 can be represented by a structure from the statement of the proposition. Suppose that the n-dimensional algebra A has the level 2. If there exists a ∈ A such that the IW contraction of A with respect to A(a) has a level greater than 1, then this contraction has the level 2 and it is isomorphic to A. In particular, A belongs to T n in this case and the required assertion follows from Examples 4.8 and 4.9. If any IW contraction of A with respect to a 1-dimensional subalgebra has a level not greater than 1, then the required assertion follows from Proposition 5.1, Corollary 5.6 and Lemma 5.7. ✷ Note that Corollary 5.6 and Lemmas 5.4 and 5.7 give the following interesting result that can be useful for the classification of algebras of levels higher than 2.
Corollary 5.8. If lev(A) = m > 2, G(A) = 1 and any IW contraction of the algebra A with respect to a 1-dimensional subalgebra has a level not greater than 1, then n ≥ 2m + 1 and A can be represented by the structure η m ⊕ k n−2m−1 .
Since any anticommutative algebra by definition has the generation type 1, we get the following classification of anticommutative algebras of the level 2. In particular, all anticommutative algebras of the level 2 are Lie algebras. [19] used a wrong version of the description of the degenerations of 3-dimensional Lie algebras and gave a wrong classification for the case n = 3. Also, there is a misprint in [19] excluding the algebra isomorphic to T 2,0,1 0 from the classification in the case n ≥ 5.
5.2. Extensions of 2-dimensional algebras with generation type 2. In studying the levels of algebras with generation type 2, we are going to use the same tool as in the case of generation type 1, i.e. IW contractions. In the case of generation type 2, we are going to apply them with respect to 2-dimensional 1-generated algebras. This subsection is devoted to the algebras that can be obtained as a result, i.e., to trivial singular extensions with generation type 2 of 2-dimensional algebras with generation type 2.
Let C be a 2-dimensional algebra with G(C) = 2. A trivial singular extension of C is an n-dimensional algebra A that has an ideal I ⊂ A and an injective algebra homomorphism φ : C → A such that I 2 = 0 and A = φ(C) ⊕ I as a vector space. It follows from the results of [18] that C has an element a such that a and a 2 are linearly dependent. Then it is easy to show that C can be represented by a structure χ ∈ A 2 such that χ 2 1,1 = 1 and χ(e 2 , e 2 ) = χ 2 2,2 e 2 , where χ 2 2,2 ∈ {0, 1}. We will denote the set of such structures byà 2 . Suppose that A, I and φ : C → A are as above. Let us represent the algebra A by a structure µ ∈ A n such that e 3 , . . . , e n corresponds to the ideal I, e 1 , e 2 corresponds to the subalgebra φ(C) and moreover µ k i,j = χ k i,j for 1 ≤ i, j, k ≤ 2. The structure µ ∈ A n is fully determined by the structure χ ∈ A 2 and four matrices L 1 , R 1 , L 2 , R 2 ∈ M n−2 (k) such that µ k i,j = (L i ) kj and µ k j,i = (R i ) kj for i = 1, 2 and 3 ≤ j, k ≤ n. Here, for the convenience, we enumerate the rows and the columns of all the (n − 2) × (n − 2) matrices under consideration by the numbers from 3 to n. Moreover, we identify these matrices with the corresponding linear transformations of e 3 , . . . , e n . We will denote by k n−2 ⋊ (L1,R1,L2,R2) χ the structure determined by the structure χ and the matrices L 1 , R 1 , L 2 , R 2 . Let E denote the (n − 2) × (n − 2) identity matrix and S denote the matrix L 1 + R 1 − χ 1 1,1 E.
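In coordinates, the correspondence just described between µ and the data (χ, L 1 , R 1 , L 2 , R 2 ) can be unpacked as follows; the display only restates the index conventions introduced above.
\[
\mu(e_i,e_j)=\sum_{k=3}^{n}(L_i)_{kj}\,e_k,
\qquad
\mu(e_j,e_i)=\sum_{k=3}^{n}(R_i)_{kj}\,e_k
\qquad(i\in\{1,2\},\ 3\le j\le n),
\]
while the products of e 1 and e 2 with each other are given by χ, and µ(e i , e j ) = 0 for 3 ≤ i, j ≤ n.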
Proposition 5.11. If µ = k n−2 ⋊ (L1,R1,L2,R2) χ for some χ ∈Ã 2 and L 1 , R 1 , L 2 , R 2 ∈ M n−2 (k), then G(µ) = 2 iff Proof. It is clear that µ represents an algebra with the generating type 2 iff, for any . . , e n , we have to check the required condition for elements of the form e 1 + te 2 + v and e 2 + v, where t ∈ k and v ∈ e 3 , . . . , e n . Let us introduce α t = 1 + (χ 2 1,2 + χ 2 2,1 − χ 1 1,1 )t + (χ 2 2,2 − χ 1 1,2 − χ 1 2,1 )t 2 and M t = S + t L 2 + R 2 − (χ 1 1,2 + χ 1 2,1 )E . For u t = e 1 + te 2 + v, direct calculations show that w t = µ(u t , u t )− χ 1 1,1 +(χ 1 1,2 +χ 1 2,1 )t u t = α t e 2 +M t v. Now we have to check that µ(u t , w t ), µ(w t , u t ), µ(w t , w t ) ∈ u t , w t . We have It is clear that the obtained vector lies in u t , w t for any t ∈ k and any v ∈ e 3 , . . . , e n iff Considering the coefficients at the zero, first and second degrees of t, we get the equalities Thus, the formula for R 2 stated in the proposition has to be satisfied. Analogously, the formula for L 2 follows from µ(w t , u t ) ∈ u t , w t . Substituting the obtained values of L 2 and R 2 in the second of the obtained equalities, we get the last required equality. On the other hand, if all the required equalities are satisfied, then direct calculations show that Analogously, if the required conditions are satisfied, then µ(w t , u t ) ∈ u t , w t . We have also if the required conditions are satisfied.
It is easy to see that R is a closed subset of A n invariant under lower triangular transformations of the basis e 1 , . . . , e n . It is also not difficult to see that k n , A 3 ⊕ k n−2 , n 3 ⊕ k n−3 , T 2,0,1 0 , F 1,−1 ⊕ k n−3 , and k n−2 ⋊ A 3 are all the structures, whose orbits intersect R. Thus, lev k n−2 ⋊ A 3 = 3.
✷ The next corollary is one of the results stated in [19].
✷ It follows from the results of [18] that the algebras presented in Table 4 with the exception of E 4 are exactly all the 2-dimensional 1-generated algebra structures. In this table we unite the series E 1 , E 2 and E 3 of the paper [18] into one series called E 1 . We also omit the conditions required for the uniqueness modulo isomorphism, i.e. we allow some of the structures from Table 4 to represent the same algebra. Nevertheless, the structures that we will actually consider, namely, A α 1 , A 2 , B α 2 and D α,β 2 , where α, β ∈ k, α + β = 1, represent pairwise nonisomorphic algebras. Note that some structures in Table 4 do not satisfy the conditions of Proposition 5.11. To apply this proposition we first have to apply a suitable linear transformation to the basis of V . On the other hand, any trivial singular extension is still determined by a structure of a 2-dimensional 1-generated algebra and four (n − 2) × (n − 2) matrices. We are going to classify the trivial singular extensions of 2-dimensional 1-generated algebras having the level 2. For this reason, the next lemma allows us to exclude 2-dimensional algebras of the level 3 from our consideration. Its proof is a direct calculation that we leave for the reader. Thus, it is enough to consider the trivial singular extensions of the structures A α 1 , A 2 , B α 2 and D α,β 2 for α, β ∈ k, α + β = 1. Moreover, due to Lemmas 5.13 and 5.15, in each case we may assume that L 1 = αE and R 1 = βE for some α, β ∈ k.
Definition 5.23. An algebra of a bilinear form is an algebra that can be represented by a structure µ ∈ A n such that µ k i,j have nonzero values only for 1 ≤ i, j ≤ n − 1 and k = n. If at the same time µ(v, v) = 0 for all v ∈ V , then we call the corresponding algebra an algebra of an antisymmetric bilinear form. In the opposite case we call it an algebra of a nonantisymmetric bilinear form.
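Equivalently, in the notation of Definition 5.23, such a structure is completely determined by a single bilinear form; we record this reformulation because it is the form in which the notion is used below:
\[
\mu(x,y)=B(\bar{x},\bar{y})\,e_n ,\qquad
B(e_i,e_j)=\mu^{\,n}_{i,j}\quad(1\le i,j\le n-1),
\]
where x̄ and ȳ denote the projections of x and y to ⟨e 1 , . . . , e n−1 ⟩. The algebra is an algebra of an antisymmetric bilinear form in the sense above precisely when B(w, w) = 0 for all w, and an algebra of a nonantisymmetric bilinear form otherwise.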
It follows from the classification of antisymmetric bilinear forms that any algebra of an antisymmetric bilinear form is either trivial or can be represented by η m ⊕ k n−2m−1 for some 1 ≤ m < n/2. It is also easy to see that an algebra of a nonantisymmetric bilinear form contains an ideal isomorphic to A 3 as an algebra. We give here an estimate of the level of an algebra of a bilinear form. Let us recall that, for an algebra A, the annihilator of A is the ideal Ann(A) = {a ∈ A | ab = ba = 0 for all b ∈ A}. ✷ Suppose that the algebra A has an ideal isomorphic to A 3 as an algebra. Let us represent A by a structure µ ∈ A n such that µ(e n−1 , e n−1 ) = e n , µ(e n−1 , e n ) = µ(e n , e n−1 ) = µ(e n , e n ) = 0 and µ(e i , ⟨e n−1 , e n ⟩), µ(⟨e n−1 , e n ⟩, e i ) ⊂ ⟨e n−1 , e n ⟩ for any 1 ≤ i ≤ n − 2. Then there is a degeneration A → B corresponding to the parametrized basis defined by the equalities E t i = te i for 1 ≤ i ≤ n − 1 and E t n = t 2 e n . It is easy to see that B is an algebra of a nonantisymmetric bilinear form. We will call B an A 3 -bilinear form contraction of A. Our next goal is to describe all the algebras with an ideal isomorphic to A 3 , whose A 3 -bilinear form contractions are of the level 1. All of these algebras except A 3 ⊕ k n−2 can be found in Table 6.
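To see concretely why the limit B only remembers a bilinear form, one can track the products in the parametrized basis just introduced; the routine computation below uses only the equalities stated above and the usual convention that the degeneration is obtained as the limit of the structure constants for t → 0. For 1 ≤ i, j ≤ n − 1,
\[
\mu\bigl(E^{t}_{i},E^{t}_{j}\bigr)=t^{2}\mu(e_i,e_j)
=\sum_{k=1}^{n-1} t\,\mu^{k}_{i,j}\,E^{t}_{k}+\mu^{n}_{i,j}\,E^{t}_{n},
\]
so as t → 0 only the coefficient µ n i,j of E t n survives; products involving E t n carry at least one extra factor of t and vanish in the limit. The limiting structure is therefore an algebra of a bilinear form, and it is nonantisymmetric because µ n n−1,n−1 = 1.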
Lemma 5.25. Suppose that A has an ideal isomorphic to A 3 as an algebra. If all the A 3 -bilinear form contractions of A can be represented by A 3 ⊕ k n−2 , then A can be represented by a structure from the set Proof. It is enough to consider the case n ≥ 3. Let us represent A by a structure µ that is described just before this proposition. Suppose that all the A 3 -bilinear form contraction of A can be represented by A 3 . Let us now replace e i by e i − µ n i,n−1 e n−1 for all 1 ≤ i ≤ n − 2. After this replacement we may assume that µ n i,n−1 = 0 for 1 ≤ i ≤ n − 2. It is easy to see that if µ n i,j = 0 or µ n n−1,i = 0 for some 1 ≤ i, j ≤ n − 2, then the A 3 -bilinear form contraction of A corresponding to the structure µ has the dimension of the annihilator less than n − 1, i.e. cannot be represented by A 3 ⊕ k n−2 . Thus, µ n i,j = µ n n−1,i = 0 for all 1 ≤ i, j ≤ n − 2 Let 1 ≤ m ≤ n − 2 be some integer andμ k i,j (1 ≤ i, j, k ≤ n) be the structure constants of µ in the basis e 1 , . . . , e m−1 , e m − e n , e m+1 , . . . , e n of V . Then it is easy to see thatμ n i,j = µ m i,j for all 1 ≤ i, j ≤ n − 2, i, j = m. As it was mentioned above, it follows that µ k i,j = 0 for all 1 ≤ i, j, k ≤ n − 2 such that i, j = k. This condition has to be also satisfied after any nondegenerate linear transformation of the elements e 1 , . . . , e n−2 . In other words, ab ∈ a, b + A 3 for any a, b ∈ A, where the ideal A 3 corresponds to the subspace e n−1 , e n of V after going from A to µ. In particular, G(A/A 3 ) = 1. Let us take some element a ∈ A/A 3 and consider the IW contraction A/A 3 → B with respect to (A/A 3 )(a). Due to the results of Section 4, the algebra B can be represented by some T M r ∈ A n−2 , where r ∈ {0, 1} and M ∈ M n−3 (k). Since bc has to lie in b, c for any b, c ∈ B, the matrix M has to be diagonal in any basis. Thus, the algebra A/A 3 satisfies the conditions of Corollary 5.6, and hence either is trivial or can be represented by a structure from the set {p − , ν α , k n−2 ⋊ E 4 } α∈k .
(1) Suppose that A/A 3 is trivial. Let us consider firstly the case n > 3. Let us prove that µ n i,n = µ n n,i = 0 for any 1 ≤ i ≤ n − 2. Let us choose some 1 ≤ m ≤ n − 2 such that m = i and consider the basis e 1 , . . . , e m−1 , e m + e n , e m+1 , . . . , e n of V . If µ n i,n = 0 or µ n n,i = 0, then it is easy to see that the structure constants in this basis do not satisfy all the conditions obtained above since µ(e i , e m + e n ) = µ n i,n e n and µ(e m + e n , e i ) = µ n n,i e n . Thus, A can be represented by A 3 ⊕ k n−2 if n > 3. If n = 3, then considering the basis e 1 + e 3 , e 2 , e 3 we get that µ 3 1,3 + µ 3 3,1 = 0. Then it is easy to see that A can be represented either by the structure A 3 ⊕ k or by the structure A 3 ⋊ p − .
(3) Suppose that A/A 3 can be represented by ν α for some α ∈ k. Then we may assume that µ(e 1 , e 1 ) = e 1 , µ(e 1 , e i ) = αe i , µ(e i , e 1 ) = (1 − α)e i and µ(e i , e j ) = 0 for 2 ≤ i, j ≤ n − 2. As in the previous case, considering the basis e 1 + e n , e 2 , e 3 , . . . , e n of V , we get µ(e i , e n−1 ) = µ(e n−1 , e i ) = µ(e i , e n ) = µ(e n , e i ) = 0 for 2 ≤ i ≤ n − 2 and µ n 1,n + µ n n,1 = 1. If n = 3, then in fact the algebra A/A 3 does not depend on α and we can set α = µ n 1,n . If n > 3, then, considering the basis e 1 , e 2 + e n , e 3 , . . . , e n of V , we get µ n 1,n = α. In any case, we have µ n 1,n = α, µ n n,1 = 1 − α and, taking in account the equalities proved above, µ n−1 1,n−1 = α, µ n−1 n−1,1 = 1 − α. Thus, µ = A 3 ⋊ ν α . (4) Suppose that A/A 3 can be represented by k n−2 ⋊ E 4 . Then, analogously to the previous case, we get that µ = ✷ Now it is not very difficult to describe all the algebras of the level 2 with an ideal isomorphic to A 3 . It is also possible to calculate the exact values of the levels of A 3 ⋊ p − , A 3 ⋊ ν α and (A 3 ⊕ k n−4 ) ⋊ E 4 , but in this paper we will show only that they all have levels not less than 3. This fact follows from the next technical lemma, whose proof we leave for the reader.
Lemma 5.26. For any α ∈ k we have degenerations (e1+en−1,en,te2,...,ten−1) Suppose that A has an ideal isomorphic to A 3 as an algebra. Then it has level 2 iff it can be represented by some structure from the set {F α,β ⊕ k n−3 } (α,β)∈K * 2 . Proof. By Lemmas 5.25 and 5.26, any algebra with an ideal isomorphic to A 3 of the level 2 can be contracted to an algebra of a nonantisymmetric bilinear form with level greater or equal to 2. Thus, A can have the level 2 only if it is an algebra of a nonantisymmetric bilinear form. Moreover, by Proposition 5.24, we have dim Ann(A) ≥ n − 2. It is easy to see that if dim Ann(A) > n − 2, then A either is trivial or can be represented by A 3 ⊕ k n−2 . On the other hand, any algebra of a nonantisymmetric bilinear form with annihilator of codimension 2 can be represented by the structure F α,β ⊕ k n−3 for some (α, β) ∈ K * 2 , whose level is equal to 2 by Corollary 5.14. ✷ Note that, for k = C, the algebras F α,β ⊕k ((α, β) ∈ K * 2 ) form the same set as the algebras N C 2 (β) (β ∈ C) and N C 3 from [17] and the algebras F α,β ⊕ k n−3 ((α, β) ∈ K * 2 ) form the same set as the algebras A 5 (α) (α ∈ C \ {−1}) and A 6 from [19]. 5.4. Classification of algebras of the level 2. In this subsection we apply the results of previous sections to get a classification of the algebras of the level 2. As a corollary, in the end of this subsection we will give the same classification in some certain varieties. In particular, we will recover the results of [19] and will generalize some results of [9]. Thus, the main result of this subsection and one of the main results of the present paper is the next theorem.
Theorem 5.28. Let A be an n-dimensional algebra.
(1) If n = 2, then lev(A) = 2 iff A can be represented by some structure from the set (2) If n = 3, then lev(A) = 2 iff A can be represented by a structure from the set (3) If n = 4, then lev(A) = 2 iff A can be represented by a structure from the set (4) If n ≥ 5, then lev(A) = 2 iff A can be represented by a structure from the set ✷ Let us recall that due to [9] the ∞-level of an n-dimensional algebra A is lev ∞ A = lim m→∞ lev m (A ⊕ k m−n ). We say that the n-dimensional algebra A is stably isomorphic to the m-dimensional algebra B if A ⊕ k max(n,m)−n ∼ = B ⊕ k max(n,m)−m . It is clear that the ∞-level of an algebra is invariant under stable isomorphisms. The next corollary gives the classification of algebras with the ∞-level 2 modulo stable isomorphism, and thus recovers partially the results of [9], where the anticommutative algebras of the ∞-levels 2 and 3 were classified. On the other hand, the classification of anticommutative algebras of the ∞-level 3 given in [9] is absolutely wrong, and hence, in fact, we recover all the valid results of this paper. Note also that the classification of algebras with a given ∞-level is a much easier problem than the classification of n-dimensional algebras with a given level. Some specific methods for this classification are presented in [9]. Note that it follows from Proposition 5.1 that lev ∞ A = 1 iff A is stably isomorphic to an algebra represented by either A 3 or n 3 .
Corollary 5.29. The algebra A has the ∞-level 2 iff it is stably isomorphic to an algebra represented by some structure from the set T 2,1,0 0 , T 2,2 0 , η 2 ∪ {F α,β } (α,β)∈K * 2 . Finally, at the end of our paper we present corollaries that give the classification of algebras of the level 2 in some varieties. All of them follow directly from Theorem 5.28. In particular, we recover the results of [19] for Jordan algebras and correct the results of the same paper for associative algebras.
Corollary 5.30. Suppose that char k ≠ 2. Let A be a commutative n-dimensional algebra.
(1) If n = 2, then lev(A) = 2 iff A can be represented by some structure from the set (2) If n ≥ 3, then lev(A) = 2 iff A can be represented by a structure from the set In particular, the set of n-dimensional Jordan algebra structures of the level 2 is formed by the structures D 0,0 2 and D 1,1 2 if n = 2 and by the structures k n−2 ⋊ t 0 D 0,0 2 , k n−2 ⋊ t 0 D 1,1 2 and F 1,1 ⊕ k n−3 if n ≥ 3.
Let us recall that the algebra A is called left alternative if (aa)b = a(ab) for all a, b ∈ A. It is clear that an associative algebra is always left alternative.
Corollary 5.31. Let A be a left alternative n-dimensional algebra.
(1) If n = 2, then lev(A) = 2 iff A can be represented either by the structure D 0,0 2 or by the structure D 1,1 2 .
Stereodynamical control of product branching in multi-channel barrierless hydrogen abstraction of CH3OH by F†
† Electronic supplementary information (ESI) available. See DOI: 10.1039/c9sc02445j
Comprehensive dynamical simulations of a prototypical multi-channel reaction on a globally accurate potential energy surface show that the non-statistical product branching is dictated by unique stereodynamics in the entrance channels.
I. Introduction
A main goal of reaction dynamics is to gain a microscopic understanding of chemical transformation by investigating quantum state resolved reactivity in the gas phase. 1 Over the past few decades, our knowledge of how chemical reactions take place has reached an unprecedentedly high level, both theoretically and experimentally. Detailed measurements and sophisticated theoretical investigations have led to a thorough understanding of the dynamics for many prototypical reactions involving three or four atoms, such as the H/F/Cl/O/N/C + H 2 and H/F/O/Cl + H 2 O reactions. 2–7 Recent focus has been shifted to more complex ones, such as the H/F/Cl/O/OH + CH 4 (the simplest hydrocarbon) reactions. 8–11 These studies have played a pivotal role in advancing our understanding of fundamental mechanisms and dynamics in chemical reactions, and have shed valuable light on a wide array of important dynamical issues such as tunneling, resonance, mode specificity and bond selectivity, steric effects, and nonadiabatic effects.
However, the aforementioned reactions with only a single type of reaction channel are not representative of most chemical reactions of larger molecules. For instance, many reactions involving organic molecules often have multiple reaction pathways, and one product may be highly desired. It is thus of great importance to understand product selectivity among competing reaction pathways. 12–16 Theoretically, it is challenging to study the reaction dynamics of these systems because of the increased dimensionality, expensive electronic structure calculations, considerably more complex potential energy surfaces (PESs), and costly theoretical treatments of the nuclear motions, especially when quantum effects are present. Indeed, detailed dynamical studies of multi-channel reactions are still lacking except for a few well-known systems. 12,15 The investment of substantial efforts to meet the challenges in multi-channel reactions is undoubtedly worthwhile because of the concomitant gains in understanding the rich reaction dynamics.
In this work, we examine the hydrogen abstraction from methanol by fluorine atoms, which has two competing hydrogen abstraction pathways, namely from the hydroxyl group of methanol to form methoxy radicals, CH 3 O, and from the methyl group to form hydroxymethyl radicals, CH 2 OH:
F( 2 P) + CH 3 OH → HF + CH 3 O, (R1)
F( 2 P) + CH 3 OH → HF + CH 2 OH. (R2)
Methanol is widely used as a laboratory/industrial solvent and a promising alternative fuel. 28 It is the simplest oxygenated polyatomic organic molecule with two functional groups, which makes it an ideal candidate for studying branching ratios and stereodynamics in reactions with atomic radicals. 12 Its reaction with F has been used to generate CH 3 O or CH 2 OH radicals, which are important intermediates in combustion of hydrocarbon fuels, atmospheric chemistry, surface science, and interstellar chemistry. 29–31 Therefore, a large number of investigations, in particular experimental ones, have been carried out on the kinetics, branching ratios of the two channels, and dynamics for reactions of F atoms with CH 3 OH and the various deuterated isotopologues, CD 3 OH, CH 3 OD, and CD 3 OD. 17–27,29,32–42 Considerable attention has also been directed toward measuring product-state or energy distributions. Internal state distributions of the nascent HF product have been measured by infra-red chemiluminescence and laser-induced fluorescence and found to be inverted in both product channels. 33–35,37–39 Selective deuteration, employed to disentangle these results, showed that the HF product formed by hydrogen abstraction from the methyl group (R2) possesses greater internal energy than that formed by the abstraction from the hydroxyl group (R1), consistent with the exothermicities of the two channels as discussed above. On the other hand, only a small amount of energy is partitioned into the methoxy radical, with ca. 2% of the available energy in the C-O stretching mode, suggesting the spectator nature of the CH 3 O moiety, for the (R1) channel. 37,38 The reaction channel (R1) has also been probed by the photodetachment of the F⁻(HOCH 3 ) anion, whose geometry resembles TS1 for this neutral reaction. 40,42,43 The photoelectron spectrum provides a direct probe of the vibrational structure and metastable resonances that are characteristic of the PES of the neutral reaction. In particular, the experiment revealed spectral features associated with manifold vibrational Feshbach resonances and bound states in the exit channel well. 43 In sharp contrast to the numerous experimental investigations on the title reaction, relevant theoretical research is scarce, especially on the reaction dynamics. Based on the information of the stationary points computed with Møller-Plesset perturbation theory 17,18 and at the G2 level of theory, 18 the kinetics of the two reaction channels were investigated in an attempt to explain the observed anomalously large k R1 /(k R1 + k R2 ) branching ratio. However, the agreement with experiments was quite poor. In 2015, Schaefer and co-workers reinvestigated this reaction at the level of coupled cluster theory with single, double, and perturbative triple excitations (CCSD(T)) associated with the augmented double, triple, and quadruple-zeta basis sets (AVDZ, AVTZ, and AVQZ), 19 which revealed that the electronic structure theories used in the previous calculations were not sufficiently accurate.
More recently, we reported a global PES of the title reaction with all 15 internal degrees of freedom (DOFs) by fitting 121 000 points calculated at the explicitly correlated (F12a) version of CCSD(T) with the AVDZ basis set and core electrons frozen. 43 This level of electronic structure theory was found to yield results comparable to the benchmark ab initio results obtained by Schaefer and co-workers. 19 The chemically accurate fitting was performed with the permutation invariant polynomial-neural network (PIP-NN) method, 44,45 which has been successfully applied to several reactive systems for high-fidelity fitting of their PESs. 46 Using this PES, the kinetics and the associated thermal branching ratio of the title reaction have been studied with the quasi-classical trajectory (QCT) method. 47 The calculated canonical rate coefficients were in good agreement with experiments, both showing a slightly negative temperature dependence. In addition, the calculated thermal branching ratios of 0.40-0.43 at 200-1000 K are in good agreement with measurements. 47 These results further confirmed the accuracy of the PES.
In this work, we report an extensive theoretical investigation on the reaction dynamics of this multi-channel system, focusing on the reaction mechanism, as well as stereodynamics and its impact on the microcanonical branching ratio. These calculations were carried out on the globally accurate PES, which has been used successfully to simulate the photoelectron spectrum 43 and photoelectron-photoion coincidence spectrum of the F⁻(HOCH 3 ) anion 48 and to reproduce the experimental canonical rate coefficients and branching ratios well. 47 Here, integral and differential cross sections are computed for both product channels using QCT, which shed light on the mechanism of this multi-channel reaction. It is found that the coexistence of capture and direct mechanisms at low collision energies gives way to an exclusively direct mechanism at high collision energies. Perhaps most interestingly, detailed analysis of the stereodynamics of the reaction revealed the origin of the non-statistical branching to the two product channels. These results shed valuable light on the dynamics of this multi-channel barrierless reaction prototype.
II. Results and discussion
The QCT method used to investigate the dynamics is well established and the details of the calculations can be found in the ESI. † Briefly, trajectories are calculated at collision energies of 1.0, 2.0, 3.0, 5.0, 10.0, 15.0, 20.0, 25.0, and 30.0 kcal mol⁻¹ on the PIP-NN PES 43 for the title reaction interfaced to the VENUS chemical dynamics program. 49
II-A. Reaction mechanisms
Excitation functions, namely the dependence of the integral cross sections (ICSs) on the collision energy (E c ), have been calculated for the ground ro-vibrational state of methanol and are shown in Fig. 2(a). It can be seen that the reactivity of both channels is quite large at low collision energies and all ICSs show no threshold in energy, consistent with the barrierless nature and a complex-forming mechanism for the two channels. 5 The ICS of the (R1) (HF + CH 3 O) channel decreases monotonically with increasing collision energy. As shown in Fig. 1, (R1) features a barrierless entrance channel leading to a potential well (RC1) with a significant depth. At low collision energies, this feature of the PES is expected to capture the collision partners and guide them towards TS1. Indeed, the unique RC1 complex is similar to the reactant complex between F and H 2 O, 50,51 and both are stabilized by a two-center-three-electron covalent bond formed between the unpaired electron of the F atom and a lone pair of the O atom. 52 As the energy increases, faster collision partners are more difficult to capture and the reactivity decreases.
For the (R2) (HF + CH 2 OH) channel, the ICS is much larger than for (R1), due partly to the availability of three possible H atoms in the methyl group. It also decreases sharply with the collision energy and then becomes essentially flat at high collision energies, qualitatively similar to that for the (R1) reaction discussed above. This can be explained by the barrierless energetics along the (R2) channel and the weak complex RC2, which also enables capture at low collision energies. Fig. 3 shows opacity functions of the two channels at different collision energies. At low collision energies, e.g., E c = 1.0 kcal mol⁻¹, both reaction channels are dominated by very large impact parameters, signifying significant capture. At higher collision energies, as discussed above, capture at large impact parameters becomes ineffective due to faster relative speed of the collision partners, and reactive trajectories can only be found at relatively small impact parameters. The dramatic change of the opacity functions suggests a change of the reaction mechanism from low energies to higher ones.
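Extracting an opacity function and the corresponding integral cross section from a batch of trajectories is a routine post-processing step; a minimal sketch is given below. It assumes the standard QCT conventions (impact parameters sampled with density proportional to b up to a maximum value b_max, so that σ = πb_max²·N_reactive/N_total); the arrays and the simple exponential "reactivity model" are placeholders of ours, not data or code from this work or from VENUS.

import numpy as np

# Hypothetical per-trajectory output: impact parameter b (Angstrom) and a flag
# marking whether the trajectory ended in a given product channel. In a real
# analysis these arrays would be read from the trajectory files.
rng = np.random.default_rng(0)
b_max = 8.0
b = b_max * np.sqrt(rng.random(200_000))       # sampling density proportional to b
reactive = rng.random(b.size) < np.exp(-b)     # placeholder reactivity model

# Opacity function P(b): fraction of reactive trajectories in each impact-parameter bin.
edges = np.linspace(0.0, b_max, 41)
centers = 0.5 * (edges[1:] + edges[:-1])
n_tot, _ = np.histogram(b, bins=edges)
n_rx, _ = np.histogram(b[reactive], bins=edges)
p_b = np.divide(n_rx, n_tot, out=np.zeros(len(centers)), where=n_tot > 0)

# Integral cross section in two equivalent ways: sigma = 2*pi * integral of P(b)*b db
# (rectangle rule over the bins), and the usual Monte Carlo estimator.
db = edges[1] - edges[0]
sigma_from_opacity = 2.0 * np.pi * np.sum(p_b * centers) * db
sigma_mc = np.pi * b_max**2 * reactive.mean()
print(sigma_from_opacity, sigma_mc)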
The attractive PES topography at large reactant separation (R) underscores the universal capture at low collision energies. At high energies, however, the PES is dominated by repulsive walls with narrow cones of acceptance, which can only be accessed with small impact parameters and correct approaching angles. 53 To illustrate this quantitatively, the cones of acceptance are determined as follows. At the reaction bottleneck, which is defined as the point where the forming bond distance F⋯H is 1.3 Å, the polar angle ∠FHO and the dihedral angle ∠FHOC (for the (R1) channel) or the polar angle ∠FHC and the dihedral angle ∠FHCO (for the (R2) channel) are calculated for reactive trajectories at collision energies of 1.0, 15.0, and 30.0 kcal mol⁻¹. As shown in Fig. 4(a) and (b), the productive trajectories have a relatively narrow distribution in either ∠FHO or ∠FHC, although this distribution broadens somewhat at higher collision energies. On the other hand, the distributions of the dihedral angles, displayed in Fig. 4(c) and (d), show different characteristics for the two channels: for the (R1) channel, the distributions are quite narrow, peaking at around ±90°, which is quite close to the −82° of TS1, while for the (R2) channel, they are quite broad. The picture obtained from this analysis underscores the steric effect in this reaction: only trajectories that have the correct approach are productive in the reaction at high collision energies.
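For completeness, the two geometric quantities used in this analysis (a bond angle and a dihedral angle) can be computed from Cartesian coordinates with a few lines of code. The sketch below is ours: the helper functions are generic vector geometry, and the coordinates are made-up placeholders rather than points from the actual trajectories; for the (R2) channel one would pass the carbon-centred atoms instead.

import numpy as np

def polar_angle(a, b, c):
    """Angle a-b-c in degrees, with b at the vertex."""
    u, v = a - b, c - b
    cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

def dihedral_angle(a, b, c, d):
    """Signed dihedral angle a-b-c-d in degrees."""
    b1, b2, b3 = b - a, c - b, d - c
    n1, n2 = np.cross(b1, b2), np.cross(b2, b3)
    m1 = np.cross(n1, b2 / np.linalg.norm(b2))
    return np.degrees(np.arctan2(np.dot(m1, n2), np.dot(n1, n2)))

# Hypothetical Cartesian coordinates (Angstrom) of F, H, O and C taken from a
# trajectory frame at the bottleneck (forming F...H distance of about 1.3 Angstrom).
F = np.array([1.30, 0.00, 0.00])
H = np.array([0.00, 0.00, 0.00])
O = np.array([-0.65, 0.75, 0.00])
C = np.array([-1.90, 0.55, 0.60])

print(polar_angle(F, H, O))        # angle F-H-O, used for the (R1) channel
print(dihedral_angle(F, H, O, C))  # dihedral F-H-O-C, used for the (R1) channel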
Relative differential cross sections (DCSs) for the two channels with the methanol in its ground ro-vibrational state are plotted in Fig. 5. It is clear that at collision energies between 1.0 and 30.0 kcal mol⁻¹, the DCSs in both channels are biased in the forward direction, although scattering occurs at all angles. This is quite different from the backward scattering dominated DCSs in F + H 2 O, 50,54 which has a small barrier of about 2 kcal mol⁻¹. As shown in Fig. 3, the isotropic capture contribution is only important at low collision energies. The dominant direct mechanism contains both backward and forward scattering, although the latter is favored.
To further understand the details of the DCSs, the correlation between the impact parameter and the scattering angle is shown in Fig. 6 for both reaction channels at E c = 1.0, 15.0, and 30.0 kcal mol⁻¹. From the figures, it is clear that there are two types of scattering, particularly in the (R1) channel. One features large impact parameter scattering with near isotropic scattering angles, which points to a complex-forming mechanism with a lifetime longer than the rotational period. 5 The other shows a strong correlation between the impact parameter and scattering angle. This latter mechanism is a direct one, in which small impact parameter collisions lead to backward scattering (rebound) while large impact parameter collisions result in forward scattering (stripping). It is clear that the former is significant at low collision energies but disappears at high collision energies. These observations are consistent with the opacity functions in Fig. 3, reinforcing the notion about the mechanistic transition from low collision energies to higher ones.
II-B. Branching ratio
In our previous work, the calculated thermal branching ratio of the HF + CH 3 O channel (R1) and its temperature dependence reproduced the experimental values well. 47 In this work, the branching ratio of the HF + CH 3 O channel, which is defined as σ R1 /(σ R1 + σ R2 ), is calculated using the QCT method as a function of collision energy with the reactants in their ro-vibrational ground states. As shown in Fig. 2(b), the (R1) branching ratio decreases monotonically with increasing collision energy except for E c = 1 kcal mol⁻¹. This trend differs from our previous results for the thermal branching ratio. 47,48 As discussed in the ESI, † this difference is due to the fact that rotational excitation of methanol significantly increases the reactivity of the (R1) reaction. A detailed analysis of mode specificity in this reaction will be discussed in a future publication.
It is noted from Fig. 2(b) that the (R1) branching ratio is larger than the statistical value of 0.25 at low collision energies, but this trend is reversed for E c > 5 kcal mol⁻¹. In other words, this channel has a lower reactivity than the statistical limit at high collision energies. As discussed above, the reaction is dominated by capture at low collision energies. Due to the deeper RC1 well, the (R1) channel has a larger capture radius than the (R2) channel, as evidenced by the opacity functions shown in Fig. 3. As a result, it is not surprising that the (R1) channel has a branching ratio larger than the statistical value at low collision energies.
As the collision energy increases, the capture mechanism gives way to a direct one. As discussed above, only those trajectories that enter the cones of acceptance are reactive. However, this is true for both the (R1) and (R2) channels and it follows that these two channels are thus expected to approach the statistical limit of a 1 : 3 ratio. This is apparently not the case from Fig. 2(b)! To understand the lower branching ratio for the (R1) channel, we examine the stereodynamics associated with the floppy nature of the OH pseudo-rotation around the C-O axis of methanol. As shown in Fig. 7(a), the potential along the ∠H-O-C-H angle is quite flat when F is far away from methanol, due apparently to the near free rotation of the OH moiety. However, as F approaches methanol, the potential along this coordinate becomes highly anisotropic, featuring two equivalent entrance channels (∠H-O-C-H = 80° and 135°) corresponding to the TS1 geometry. As a result, only those OH moieties around these regions are likely to be reactive, as shown clearly in Fig. 8. For those OH configurations that happen to be away from these angles, the potential is quite repulsive, leading to non-productive collisions. The situation here is very different from the photodetachment of F⁻(HOCH 3 ), where the OH group is locked to the F⁻ anion in the precursor, and is thus not free to rotate. 43 It should probably be noted that the stereodynamics described above is much more pronounced at high collision energies because of the fast relative collision velocity. At low collision energies, the slow-moving F allows the OH rotor to adiabatically adjust to the anisotropic potential. As a result, the steric effect is relatively minor.
The situation in the (R2) channel is completely different. The flat potential in the ∠H-C-O-H angle when F is far away becomes only slightly anisotropic, as shown in Fig. 7(b). This is because of the relative rigidity of the CH 3 moiety, except for a three-fold internal rotation which is clearly seen in the figure. As a result, approaches of F in a wide range of the ∠H-C-O-H angle are productive, as shown by the distributions of reactive trajectories in Fig. 8.
The overall result is that the (R1) channel becomes less reactive at higher collision energies, leading to a branching ratio that is lower than the statistical limit. This strong stereodynamics in this channel can be considered as an example of the entropic effect, in which the nearly free OH internal rotation significantly reduces the reaction rate by a pre-exponential steric factor (x < 1). 1 In other words, only a fraction of the OH orientations relative to the approaching F is reactive. This steric effect is absent in the (R2) channel.
We note in passing that stereodynamical control of product branching has been observed before. 55–58 However, the previous examples are all restricted to branching between intrinsically equivalent product channels, made distinguishable by isotopic substitutions. For example, the branching between the HCl and DCl channels in the Cl + HD reaction was found to be influenced by a van der Waals well in the reactant channel. 55 There, the two product channels (HCl vs. DCl) are chemically identical. However, the stereodynamical control in the system discussed here is for two chemically distinct product channels, which is much more relevant to real chemistry.
IV. Conclusions
The availability of accurate high-dimensional potential energy surfaces for complex reactive systems has ushered in an era in which complex reaction dynamics can be investigated in great detail. As demonstrated in this work, deep insights have been gained through theoretical scrutiny of the dynamics of a multichannel reaction between F and CH 3 OH, which leads to two different product channels. Such a detailed investigation of reaction dynamics would be very difficult without the global potential energy surface.
While both product channels are exothermic and barrierless, it is shown that dynamics play an indispensable role in the reaction. In particular, a complex-forming mechanism is favored at low collision energies, while a direct mechanism becomes increasingly dominant as the collision energy increases. This change of the reaction mechanism manifests in measurable attributes such as the differential cross sections. It is also our hope that the current work will stimulate future experimental investigations on this reaction.
More importantly, the branching ratio between the two product channels is energy dependent and non-statistical. Detailed analysis suggests that the non-statistical branching ratio at high collision energies can be attributed to stereodynamics, particularly in the (R1) channel. This is due to the floppy nature of the OH internal rotation, which significantly reduces the reactivity in this channel. Such a steric factor is not as pronounced for the (R2) channel, thanks to the relative rigidity of the CH 3 moiety.
Fig. 7 (a) The potential along one ∠H-O-C-H angle at different r FO distances when F approaches the HO moiety of CH 3 OH. Other coordinates are fixed at TS1, whose r FO is equal to 2.04 Å. The two cross symbols indicate the TS1-like configurations. (b) The potential along one ∠H-C-O-H angle at different r FC distances when F approaches the CH 3 moiety of CH 3 OH. Other coordinates are fixed at TS2, whose r FC is equal to 2.95 Å.
Since both OH and CH 3 moieties are quite common in organic molecules, the insights gained from this study can have significant implications concerning the product selectivity in organic reactions.
Conflicts of interest
There are no conflicts of interest to declare.
Rationalization, Quantal Response Equilibrium, and Robust Outcomes in Large Populations
This paper provides a robust epistemic foundation for predicting and implementing collective actions when only the proportions that take specific actions in the population matter. We apply ∆-rationalizability to analyze the strategic sophistication entailed in (structural) quantal response equilibrium (QRE); the former is called ∆(p)-rationalization to emphasize that the only requirement on first-order beliefs is that they be consistent with the transparent knowledge of the distributions of errors in the population. We show that each QRE is a ∆(p)-rationalizable outcome. We also give a condition under which the converse holds, and prove that this condition is almost never satisfied in generic games. This implies that QRE may be too demanding as a predictor in general, and ∆(p)-rationalizable outcomes can be a robust benchmark to start from.
Introduction
Policy-making needs prediction and implementation of collective actions. Sometimes, they concern only the proportion in a population instead of choices in the individual level; the circumstance may be uncommon so that no extant data are directly applicable, for example, the voluntary vaccination rate in the outbreak of a new pandemic. A model thrivingly used in the empirical literature is proposed by McKelvey and Palfrey [19] (referred as MP in the following). There, the population is decomposed into groups, each having representative payoffs. An individual has her idiosyncrasy, or payoff type, which influences her payoffs and is known to her only; the distributions of the idiosyncrasies are publicly known. MP introduced a solution concept called quantal response equilibrium (QRE), which is a probabilistic summary based on the commonly known idiosyncrasies distributions of pure-strategic optimal action under each type given the distribution of actions among other groups. 1 However, QRE may not necessarily fit the problem here. In general, achieving an equilibrium requires players' common correct beliefs about each other (Tan and Werlang [29], Aumann and Brandenburger [2], Polak [25], Battigalli and Siniscalchi [5]), yet correct beliefs are hardly guaranteed, especially in an unprecedented circumstance. An alternative is rationalizability (Bernheim [7], Pearce [24]): when a player is ignorant about others' behaviors, she can only rely on her individual rationality, i.e., "making a choice which is justifiable by an internally consistent system of beliefs" (Bernheim [7], p.1007). Battigalli and Siniscalchi [5] generalize this idea into games with incomplete information. Their ∆-rationalization is a framework to study behavioral consequences under some explicit restrictions on the commonly known content of first-order beliefs without constraining the possible epistemic types à la Harsanyi [13]; in other words, it characterizes robustness in the sense of Bergemann and Morris [6].
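As a reminder (the notation here is ours), when each payoff disturbance is an independent draw from an extreme value (Gumbel) distribution with parameter λ, the structural QRE of MP reduces to the familiar logit fixed point
\[
\pi_i(a)=\frac{\exp\bigl(\lambda\,u_i(a,\pi_{-i})\bigr)}{\sum_{b\in A_i}\exp\bigl(\lambda\,u_i(b,\pi_{-i})\bigr)}
\qquad\text{for every group }i\text{ and every action }a\in A_i,
\]
where u_i(a, π_{−i}) denotes group i's representative expected payoff from a given the action distribution π_{−i} of the other groups. The vaccination example below uses exactly this specification with λ = 0.5.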
In this paper, we apply ∆-rationalization into MP's model. Instead of focusing on the classic rationalizable actions/strategies, we turn to rationalizable outcomes, i.e., distributions over actions resulting from distributions over errors and the rationalization procedure. By doing this, we make explicitly the epistemic structure behind the individual reasoning procedure that QRE entails. Since the model assumes that the distributions of the idiosyncrasies are publicly known, the only restriction on an individual's initial belief about her opponents is that the marginal distribution on their types should coincide with it; there is no conditions on people' beliefs on others' behavior or the correlation between types and choices. Yet if the type spaces are large enough, that restriction leads to determinate choices under some types. Given the distribution over types, the consequence is an infimum of the proportion that some action is used. This updates the restriction on beliefs, and leads to a new infimum, etc. Finally, the iterative procedure of ∆-rationalization results in a limit of the infimums of the proportion that each action is adopted, which can be a benchmark for the estimation aforementioned.
As an illustration, suppose that a policymaker is considering whether the share of the population that will voluntarily get vaccinated in a community is above the herd-immunity threshold. The situation is described by the game in Table 1: the population is separated into two groups, and the row group is more vulnerable than the column group (for example, they correspond to senior and younger citizens, or to health-care workers and people outside the medical system, respectively).
Table 1. The vaccination game

                 Not vaccinated   Vaccinated
Not vaccinated   0, 1             7, 2
Vaccinated       1, 16            3, 4

Due to the cost (e.g., risk of side effects and the time spent on administrative processes), a representative player would prefer others to get vaccinated and reduce the chance of the virus spreading. Yet because of the difference in vulnerability, the benefits of "free-riding" are asymmetric. Each individual has some idiosyncrasies (e.g., an underlying health condition which makes her eager to get vaccinated, or distrust of vaccination due to some personal trauma), described as a real-valued random variable for each action.
Suppose that the idiosyncrasies are independent and each has an extreme value distribution with parameter λ = 0.5, i.e., the cdf is F_i(θ_{ik}) = exp(−exp(−0.5 θ_{ik})) for each i ∈ {row, column} and k ∈ {Not vaccinated, Vaccinated}. The infimum-updating process is illustrated in Figure 1. Numerically, q^n_{row,Not} → 0.396, q^n_{row,Vac} → 0.604, q^n_{col,Not} → 0.968, and q^n_{col,Vac} → 0.032 as n → ∞, which coincides with the unique QRE in this game. In other words, given that the distributions of types and people's rationality are commonly known, intrapersonal reasoning alone leads to the QRE outcome. This provides a robust epistemic foundation for using QRE as a predictor in this case. This paper discusses the general relationship between QRE and this reasoning structure; the latter is called the ∆(p)-rationalization procedure to emphasize p, the distribution of idiosyncrasies, which is the only restriction on people's beliefs, beyond classic rationality, in the framework of ∆-rationalization. Theorem 1 shows that every QRE is a ∆(p)-rationalizable outcome: that is, given a QRE π, among the type-action pairs surviving the ∆(p)-rationalization procedure, each type can be associated with an action optimal at it under a belief consistent with the transparent knowledge of p, the distribution of types, and rationality, such that, based on p, the induced distribution on actions coincides with π.
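For readers who want to replay this computation, the following Python sketch is a minimal illustration of the infimum-updating step, not the authors' code: it assumes the logit choice probabilities implied by independent extreme-value idiosyncrasies with parameter λ, and it approximates the infimum over the feasible opponent beliefs by a grid search over the one-dimensional set of mixtures compatible with the current lower bounds. The function names and the grid size are our own choices.

```python
import numpy as np

# Payoffs of the vaccination game in Table 1 (rows: Not vaccinated, Vaccinated).
U_row = np.array([[0.0, 7.0],
                  [1.0, 3.0]])   # row population's payoffs against the column's action
U_col = np.array([[1.0, 2.0],
                  [16.0, 4.0]])  # column population's payoffs (first index: row's action)
lam = 0.5                        # extreme-value parameter from the example

def logit(expected, lam):
    """Choice probabilities implied by i.i.d. extreme-value idiosyncrasies."""
    z = np.exp(lam * (expected - expected.max()))
    return z / z.sum()

def infimum_step(U_own, low_opp, lam, grid=2001):
    """Infimum of each own-action probability over all opponent mixtures whose
    action probabilities respect the current lower bounds low_opp."""
    lo, hi = low_opp[0], 1.0 - low_opp[1]   # feasible probability of the opponent's first action
    worst = np.ones(2)
    for q in np.linspace(lo, hi, grid):
        worst = np.minimum(worst, logit(U_own @ np.array([q, 1.0 - q]), lam))
    return worst

low_row, low_col = np.zeros(2), np.zeros(2)   # step 0: beliefs are unrestricted
for _ in range(60):
    new_row = infimum_step(U_row, low_col, lam)
    new_col = infimum_step(U_col.T, low_row, lam)   # transpose: the column player picks a column
    low_row, low_col = new_row, new_col

print(low_row)   # -> approximately [0.396, 0.604]
print(low_col)   # -> approximately [0.968, 0.032]
```

Under these assumptions the lower bounds tighten monotonically and, as reported above, approach the unique QRE of the game.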
However, not every ∆(p)-rationalization procedure converges to a QRE as it does in the above example. Theorem 2 gives sufficient conditions on payoff structures for the convergence; further, they are necessary when there are multiple QREs. In 2 × 2 cases, they characterize a subset of games that have been intensively studied in the literature. However, they are almost never satisfied in general cases. Therefore, QRE may be too demanding as a predictor in general, and the limit infimums generated by the ∆(p)-rationalization procedure can be a robust benchmark to start from.
Our results may be seen as providing an epistemic foundation for applying QRE in empirical research. They also provide a method to test the hierarchical belief assumption in epistemic game theory. By generating a distribution of a real-valued error for each action in addition to a game (with representative payoff values), at each level in the belief hierarchy a point in a player's error space is associated with a subset of "consistent" actions. Graphs depicting the change of this association are shown in Figure 3. Comparison between the theoretical prediction and the actual behavior in the laboratory may help to determine the depths of reasoning in a population and their relationship with the numerical values of the error, i.e., the payoff types. The QRE-inspired setting provides a baseline (i.e., common knowledge of the distributions of errors) for reasoning about strategic uncertainty, which differs from Kosenkova [16]'s nonparametric inference used to quantify strategic sophistication (k-rationalizability) in first-price auctions.
Literature
As mentioned in Footnote 1, there are two versions of QRE in the literature. The original one is given in MP and is renamed structural QRE (sQRE) in Goeree et al. [11]. MP generalizes McFadden [18]'s qualitative choice behavior model to quantal (i.e., discrete) choices in a game-theoretic framework for estimation using field and experimental data. MP adopts a large-population (or Nash's mass-action) scenario with private information. Each player is interpreted as a large population; the payoffs reflect "representative" or "typical" preferences of each population, while an individual may have some "idiosyncrasies" for each action, 2 and those idiosyncrasies follow a fixed (joint) distribution in the population. A sQRE is a profile of probability measures over actions generated by some Bayesian-Nash equilibrium with a common prior on the idiosyncrasies.
Another version of QRE is introduced in McKelvey and Palfrey [20]. They use an axiomatic method to define quantal response functions, which describe players' disturbed reactions to others' (mixed) strategies, and the equilibrium is a fixed-point in the system. Later, it is renamed as regular QRE (rQRE) in Goeree et al [11].
McKelvey and Palfrey [20] claimed that the two definitions are equivalent and that structural QRE is the foundation of regular QRE. Since then, research applying QRE to interpret observed behavior in various fields, without distinguishing the two versions, has flourished (see Goeree et al. [12] for a survey). However, Haile et al. [15] questioned the empirical content of (structural) QRE by showing that sQRE is not falsifiable in any static game. This forces researchers to differentiate the two QREs explicitly. One solution is provided by Goeree et al. [11], who redefined rQRE by putting additional restrictions on the quantal response functions and showed that some rQRE cannot be modeled as a sQRE and that rQRE has empirical content. 3 From the decision-theoretic viewpoint, the two QREs correspond to the two models interpreting the phenomenon that, in a population, the subjects' responses over the set A of alternatives to the same choice situation are governed by a probability mechanism π (see Section 5 in Luce and Suppes [17] for a survey). Structural QRE corresponds to the random utility model, where the utility function is selected according to some probability mechanism p, i.e., π(a_k) = p[U_k ≥ U_t for each t ≠ k]. Regular QRE corresponds to the constant utility model, where the utility function is fixed and the response probability is a function of it; formally, there is a fixed utility profile u = (u(a))_{a∈A} ∈ ℝ^A and a function R from ℝ^A to the set of all probability measures over A such that π = R(u).
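As a quick numerical illustration of this distinction (ours, not the paper's), the snippet below compares the two models for an arbitrary fixed utility profile. It assumes i.i.d. standard Gumbel (extreme-value) errors, under which the random-utility choice probabilities have the familiar logit closed form, so the Monte Carlo frequencies and the constant-utility response function should approximately agree; the utility values are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
u = np.array([1.0, 0.3, -0.5])   # a fixed "representative" utility profile over three alternatives

def random_utility_probs(u, n_draws=200_000):
    """Random utility model: perturb utilities with i.i.d. Gumbel errors, pick the argmax."""
    eps = rng.gumbel(size=(n_draws, u.size))
    choices = np.argmax(u + eps, axis=1)
    return np.bincount(choices, minlength=u.size) / n_draws

def constant_utility_probs(u):
    """Constant utility model with a logit response function R(u)."""
    z = np.exp(u - u.max())
    return z / z.sum()

print(random_utility_probs(u))    # Monte Carlo estimate, roughly [0.58, 0.29, 0.13]
print(constant_utility_probs(u))  # closed-form logit probabilities, about [0.58, 0.29, 0.13]
```

The agreement is special to the Gumbel case; with other error distributions the random-utility probabilities generally do not admit such a simple closed-form response function.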
The two models are based on different assumptions on each decision-maker's rationality. The random utility model assumes that an individual's behavior is optimal to her belief (i.e.,"rational"). In contrast, the constant utility model allows bounded rationality (e.g., "trembling hands" in the sense of Selten [28]). This distinction is not emphasized sufficiently in the literature, which may lead to conceptual problems.
As an example, consider Goeree and Holt [10]'s noisy introspection (NI). NI can be taken as a quantal-response version of rationalization procedure; roughly speaking, it is a hierarchical belief structure such that the higher the order is, the more random the response is supposed to be. With the assumption that the idiosyncrasies have extreme value distribution with some parameter λ, NI is formulated as a sequence (λ n ) n∈N with lim n→∞ λ n = 0, where each λ n represents the randomness of the response in the n-th order belief.
NI is about rQRE, not sQRE: if it were about the latter, the distribution of idiosyncrasies would be transparent and commonly known (as implied in the definition given in MP), and therefore λ would have no reason to vary along the belief hierarchy. The sequence (λ_n)_{n∈N} can only be interpreted as reflecting individuals' bounded rationality: as the reasoning goes deeper, it is easier to make mistakes and harder to accurately predict others' behavior. In this vein, our research can be seen as an exploration of rationalization based on sQRE.
The rest of the paper is organized as follows. Section 2 provides preliminaries about sQRE. Section 3 introduces the ∆(p)-rationalization procedure; we use some examples to show how it works. Section 4 contains the main results. Section 5 concludes the paper.
Quantal response equilibrium
In this paper, we focus on structural quantal response equilibrium (sQRE). In the following, for simplicity, we omit "structural" and call it QRE when no confusion is caused. We start from a definition more general than the one given in McKelvey and Palfrey [19] (referred to as MP in the following).
Let G = ⟨I, (A_j, u_j)_{j∈I}⟩ be a static game, where I is the finite set of players and, for each i ∈ I, A_i = {a_{i1}, ..., a_{iK_i}} with K_i ≥ 2 is the set of actions and u_i : A (:= ∏_{j∈I} A_j) → ℝ is the von Neumann-Morgenstern payoff function of player i. We adopt a large-population scenario, where each player i ∈ I is interpreted as a population and u_i represents the "representative" preferences of that population; in addition, there is a random variable reflecting individuals' idiosyncrasies toward each action. Formally, let Θ_i = ℝ^{A_i} for each i ∈ I, Θ = ∏_{j∈I} Θ_j, and let p be a probability distribution on Θ. For each π = (π_j)_{j∈I} ∈ ∏_{j∈I} ∆(A_j), each i ∈ I, and each k ∈ {1, ..., K_i}, we define
E_{ik}(π) = {θ_i ∈ Θ_i : u_i(a_{ik}, π_{−i}) + θ_{ik} ≥ u_i(a_{it}, π_{−i}) + θ_{it} for each t ∈ {1, ..., K_i}},
where u_i(a_{ik}, π_{−i}) denotes i's expected payoff from a_{ik} against π_{−i}; that is, E_{ik}(π) is the set of realizations of θ_i under which a_{ik} is a best response. The quantal response is then π*_i(a_{ik}) = p_i(E_{ik}(π)) for each i and k, and a QRE is a π such that π_i(a_{ik}) = p_i(E_{ik}(π)) for each i ∈ I and k ∈ {1, ..., K_i}.
Since π*_i is a probability distribution, the following observation holds.

Observation 1. If π is a QRE under ⟨G, (Θ_j)_{j∈I}, p⟩, then for each i ∈ I and all distinct k, t ∈ {1, ..., K_i}, p_i(E_{ik}(π) ∩ E_{it}(π)) = 0.

Observation 1 suggests that, under our definition, given a static game and p, a QRE may not exist. The following example gives an illustration. Example 1. Consider the two-person game in Table 2. Let p_i be the Dirac measure on {(0, 0)}, i = 1, 2. This game has no QRE. Indeed, for each π_2 ∈ ∆(A_2), p_1(E_{1H}(π) ∩ E_{1T}(π)) = 1 > 0, which violates Observation 1. To guarantee the existence of a QRE, some restrictions are needed. MP requires p to be admissible, i.e., for each i, p_i has a density function f_i such that the marginal distribution of f_i exists for each θ_{ik}, k = 1, ..., K_i, and E_{f_i}(θ_i) = 0. In our paper, some results (e.g., Theorem 2) require f_i to be continuous and have full support (i.e., supp f_i = Θ_i) while dispensing with restrictions on E_{f_i}(θ_i). By Brouwer's fixed-point theorem, a QRE exists under either set of conditions. Example 2. When p has an extreme value distribution with parameter λ ≥ 0, for each π = (π_i)_{i∈I} ∈ ∏_{i∈I} ∆(A_i), each i ∈ I, and each k ∈ {1, ..., K_i},
π*_i(a_{ik}) = exp(λ u_i(a_{ik}, π_{−i})) / ∑_{t=1}^{K_i} exp(λ u_i(a_{it}, π_{−i})).
This is a logistic quantal response function, one of the most popular models in the literature. MP shows that when λ → ∞, the corresponding QREs converge to a Nash equilibrium of G.
Definitions
We can take Θ as the set of payoff types. By integrating it into the "representative" model, we obtain a game with incomplete information. Formally, given a static game G, we consider the incomplete-information game in which, for each θ = (θ_j)_{j∈I} ∈ Θ, a = (a_{jk_j})_{j∈I} ∈ A, and i ∈ I, U_i(θ, a) = u_i(a) + θ_{ik_i}. Since player i's payoff is only influenced by her own payoff type θ_i, this is a game with private values. Let p be a probability measure on Θ. We now have to incorporate p into this framework. At first glance, it looks as if no one needs to know p. Indeed, for each i ∈ I, an individual in population i does not even need to know p_i; if she knows π*_{−i}, she just chooses a best response in A_i based on the realization of her payoff type. However, achieving an equilibrium requires individuals' common correct beliefs about others' behavior (Tan and Werlang [29], Aumann and Brandenburger [2], Polak [25]), and justification of π*_{−i} needs knowledge of the distribution of p on Θ_{−i} conditional on each θ_i ∈ Θ_i. This is a transparent restriction on each individual's first-order beliefs.

Definition 1. For each i ∈ I and θ_i ∈ Θ_i, the set of beliefs at θ_i consistent with p is
∆_{θ_i}(p) = {µ_i ∈ ∆(Θ_{−i} × A_{−i}) : marg_{Θ_{−i}} µ_i = p_{−i}}.    (1)

Here, µ_i is a belief about the types and actions of i's opponents under payoff type θ_i. There is no restriction on the joint distribution; the only requirement is that its marginal on Θ_{−i} coincides with p_{−i}. Definition 1 suggests that p_{−i} should not vary along θ_i, otherwise the opponents' distributions may not be as stable as π*_{−i}. Hence we focus on cases with independent (p_j)_{j∈I}.

Definition 2. Consider the following procedure, called the ∆(p)-rationalization procedure:
Step 0. For each i ∈ I, let Σ^0_{i,∆(p)} = Θ_i × A_i.
Step n ≥ 1. For each i ∈ I, (θ_i, a_i) ∈ Σ^n_{i,∆(p)} if and only if there is some µ_i ∈ ∆_{θ_i}(p) such that
1. a_i is a best response to µ_i under θ_i, and
2. µ_i(Σ^{n−1}_{−i,∆(p)}) = 1, where Σ^{n−1}_{−i,∆(p)} = ∏_{j≠i} Σ^{n−1}_{j,∆(p)}.
Finally, let Σ^∞_{i,∆(p)} = ∩_{n≥0} Σ^n_{i,∆(p)}.

The ∆(p)-rationalization procedure is a special case of Battigalli and Siniscalchi [5]'s ∆-rationalization procedure; the p emphasizes that the restriction on the first-order belief is given by p as defined in (1). This procedure iteratively removes (θ_i, a_i) where a_i cannot be rationalized under θ_i by any belief in ∆_{θ_i}(p) supported by the outcomes of the previous stage. Finally, every (θ_i, a_i) ∈ Σ^∞_{i,∆(p)} is rationalizable based on common knowledge of rationality and the transparent restriction in (1). Figure 2 illustrates how this procedure works on a game with A_i = {T, U} for some player i ∈ I. Suppose that the p_j's are independent and the square (a subset of ℝ^{A_i}) is the support of p_i. Since Σ^0_{i,∆(p)} = Θ_i × A_i, the support of others' beliefs about i can be arbitrary; on the left-hand side of Figure 2 we list four candidates. However, under some type θ_i ∈ Θ_i, whatever i's belief µ_i is, the best response can only be T (or U). Definition 2 implies that Σ^1_{i,∆(p)} may look like the set on the right-hand side of Figure 2, which imposes restrictions on other players' second-order beliefs about i (the white area is still "free", that is, under each payoff type in it the best response can be both T and U, depending on the belief). Since the marginal distribution of those beliefs on Θ_i should coincide with p_i, the first step of the ∆(p)-rationalization procedure generates infimums for the probabilities that T and U are used. Repeating this argument, we can see that the ∆(p)-rationalization procedure generates sequences for the infimums of T and U, respectively. The next subsection gives several numerical examples to show how those sequences behave.
The literature on rationalization has typically focused on rationalizable (pure) actions (see Battigalli and Bonanno [3], Perea [23], Dekel and Siniscalchi [9], and Battigalli et al. [4] for surveys). In the literature on QRE, it is usually assumed that each p_i has full support. Under this assumption, every action is rationalizable, which makes rationalizable actions less attractive. Instead, our focus here is on rationalizable outcomes, i.e., profiles of distributions on the A_i's such that each is supportable by Σ^∞_{i,∆(p)}. The formal definition is as follows. Definition 3. A distribution π_i ∈ ∆(A_i) is a ∆(p)-rationalizable distribution iff there is a measurable s_i : Θ_i → A_i such that (θ_i, s_i(θ_i)) ∈ Σ^∞_{i,∆(p)} for each θ_i ∈ Θ_i and p_i(s_i^{−1}(a_i)) = π_i(a_i) for each a_i ∈ A_i; a profile (π_j)_{j∈I} of ∆(p)-rationalizable distributions is a ∆(p)-rationalizable outcome.
Our examples in the next subsections will illustrate what ∆(p)-rationalizable distributions and outcomes look like and how they are connected with the aforementioned sequences of infimums.
Therefore, the types in the green area in Figure 3 (1) can only be associated with H in each player's second-order belief; similarly, the types in the red area in Figure 3 (1) can only be associated with T. Every type in between them (i.e., in the white area) can be associated with both H and T. Therefore, the measures of the green and red areas provide the infimums of the probabilities that each action is used in players' second-order beliefs. To be more specific, note that the probability measure of each area is 9/32; in other words, the value of π in inequality (2) is between 9/32 and 23/32. Under this new restriction, inequality (2) determines new green and red areas, shown in Figure 3 (2). Now π gains new restrictions and its range becomes narrower. In the same vein, the range of π derived from Σ^2_{i,∆(p)} is shown in Figure 3 (3). In general, let π_n and π̄_n be the infimum and the supremum of the probability of using H at step n (i.e., the greatest lower bound and the least upper bound of the range of π in inequality (2)). By mathematical induction, we can show that (a) π_n + π̄_n = 1, (b) π_n ≤ 1/2 ≤ π̄_n, and (c) (π_n)_n is non-decreasing and (π̄_n)_n is non-increasing. The measure of the area in Θ_i which is only associated with H is (3 + 2π_n)²/32. Since, by the inductive hypothesis, π_n + π̄_n = 1, it follows that (3 + 2π_n)²/32 = (5 − 2π̄_n)²/32, and hence (a)–(c) hold. Due to (b) and (c), both (π_n) and (π̄_n) converge and lim_{n→∞} π_n ≤ 1/2 ≤ lim_{n→∞} π̄_n. Further, (a) implies that lim_{n→∞} π_n = lim_{n→∞} π̄_n = 1/2. Since the QRE in Example 3 coincides with the Nash equilibrium, one may conjecture that the ∆(p)-rationalization procedure converges to the latter as well. The following example shows that this is not the case.
Example 4. Consider the game in Table 4. 4 We assume that each θ_{ik} has the extreme value distribution with λ = 10. Note that the game has a continuum of Nash equilibria, indexed by α, β ∈ [0, 1]; the only perfect equilibrium is (D, R). In contrast, the unique QRE is approximately (0.5U + 0.5D, 0.5L + 0.5R). We show that the ∆(p)-rationalization procedure converges to the QRE. Let player 1's belief about player 2's behavior be π_2 = p_2 L + q_2 C + (1 − p_2 − q_2) R. The probabilities that player 1's best response is U, M, or D are given by expressions (3)–(5); all of these functions depend on q_2 only. Since the game is symmetric, given that player 2's belief about player 1 is π_1 = p_1 U + q_1 M + (1 − p_1 − q_1) D, the probabilities that player 2's best response is L, C, or R are obtained from (3)–(5) by replacing q_2 with q_1, respectively. Therefore, q_1 = q_2 in the QRE, or, equivalently, q = q_1 = q_2 is a fixed point of (4). So we only need to study the behavior of q_1 and q_2 in the ∆(p)-rationalization procedure. We rephrase (4) as a function f and consider the ∆(p)-rationalization procedure. We use q^n_i and q̄^n_i to denote the infimum and the supremum of q_i, i = 1, 2. In the first step, each q_i can be any number in [0, 1]; the same argument as before then yields, for each n ∈ N, the recursion in (6). Since |f′(x)| < 1 on [0, 1], f is a contraction mapping on a compact set in ℝ. Hence the process in (6) converges to the unique fixed point of f on [0, 1], i.e., the probability of using M and C in the QRE. Therefore, the ∆(p)-rationalization procedure converges to the QRE.
The following example differs from the previous two since, as the parameter varies, there may be multiple QREs. We apply the ∆(p)-rationalization procedure on the game with different parameter values and see what kind of distributions are ∆(p)-rationalizable. It will provide some hint for the main results in the next section.
Example 5.
Consider the vaccination game in Section 1, whose game matrix is reposted in Table 5. From the viewpoint of QRE (and of the ∆(p)-rationalization procedure) this game is equivalent to the asymmetric chicken game studied in Goeree et al. [12] (pp. 25–26). Here, as in Section 1, we follow them and assume that each Θ_i = ℝ², and that the θ_{ik}'s are independent and each has an extreme value distribution with parameter λ. The relationship between QREs and the value of λ is summarized in Figure 4, which is a copy of Figure 2.4 in Goeree et al. [12], p. 25. When λ is small, e.g., λ = 1/2 (i.e., λ/(1 + λ) = 1/3), there is a unique QRE; for large values of λ, e.g., λ = 4 (i.e., λ/(1 + λ) = 0.8), there are multiple QREs. We choose these two values of λ and see what outcomes the ∆(p)-rationalization procedure generates.
We use q^n_{ik} to denote the infimum of the probability that player i uses action k at round n. As in the previous examples, q^0_{ik} = 0 for each i and k. For n ≥ 0, the recursive relations are analogous to those in the previous examples, with the payoffs taken from Table 5:

Table 5. The vaccination game of Section 1 (actions relabeled T and S)

      T        S
T     0, 1     7, 2
S     1, 16    3, 4

When λ = 1/2, they converge to the unique QRE. The convergence process has been shown in Figure 1. The speed is relatively fast: in fewer than 15 steps, all q^n_{ik}'s are quite close to the limit. When λ = 4, q^n_{1T} → 0.018, q^n_{1S} → 1.613 × 10^{−7}, q^n_{2T} → 3.63 × 10^{−21}, and q^n_{2S} → 0.018 as n → ∞; by looking at Figure 4, one may notice that when λ = 4 the game has multiple QREs, and each q^n_{ik} converges to the smallest probability that action k of player i is used across all QREs.
One may infer from the above examples that the ∆(p)-rationalization procedure "converges" to QRE. To be specific, the infimum of the probability that an action is used across all ∆(p)-rationalizable outcomes seems to coincide with the infimum of the probability that the action is used across all QREs; in particular, when the QRE is unique, the ∆(p)-rationalization procedure seems to converge to the QRE. This conjecture is rejected by the following example.
Example 6.
Consider the asymmetric Matching-Pennies style game in Table 6. Here, p = ∏_{i=1,2; k=H,T} p_{ik}, where each p_{ik} is the extreme value distribution with λ = 5.
This game has only one QRE (see Goeree et al. [12], Chapter 2.2).

Table 6. An asymmetric Matching-Pennies style game

      H       T
H     9, 0    0, 1
T     0, 1    1, 0

The recursive functions, given in (7), are analogous to those in the previous examples. Through some calculation, it can be seen that the limit of the procedure is not a QRE, since the sum of the two limiting infimums for each player is strictly less than 1. Actually, this limit is a fixed point of the system of equations in (7) near zero. 5 The relationship between the ∆(p)-rationalization procedure and QRE will be intensively investigated in the next section.
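As a numerical cross-check (ours, not the authors'), the same kind of grid-based infimum-updating sketch can be rerun on the game in Table 6 with λ = 5; consistent with the discussion above, the limiting lower bounds for each player should sum to strictly less than 1, so they cannot come from any QRE.

```python
import numpy as np

# Illustrative sketch only: infimum-updating for the game in Table 6, assuming
# logit responses implied by i.i.d. extreme-value idiosyncrasies with lambda = 5.
U_row = np.array([[9.0, 0.0],
                  [0.0, 1.0]])   # row player's payoffs (rows: H, T; columns: opponent's H, T)
U_col = np.array([[0.0, 1.0],
                  [1.0, 0.0]])   # column player's payoffs (first index: row's action)
lam = 5.0

def logit(expected, lam):
    z = np.exp(lam * (expected - expected.max()))
    return z / z.sum()

def infimum_step(U_own, low_opp, lam, grid=2001):
    lo, hi = low_opp[0], 1.0 - low_opp[1]    # feasible probability of the opponent's first action
    worst = np.ones(2)
    for q in np.linspace(lo, hi, grid):
        worst = np.minimum(worst, logit(U_own @ np.array([q, 1.0 - q]), lam))
    return worst

low_row, low_col = np.zeros(2), np.zeros(2)
for _ in range(200):
    low_row, low_col = infimum_step(U_row, low_col, lam), infimum_step(U_col.T, low_row, lam)

# Each player's pair of limiting lower bounds sums to strictly less than 1,
# so the limit cannot be a QRE, in line with Example 6.
print(low_row, low_row.sum())
print(low_col, low_col.sum())
```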
QRE → ∆(p)-rationalizability
Our first result is that every QRE is a ∆(p)-rationalizable outcome, or, equivalently, each QRE mixed strategy is a ∆(p)-rationalizable distribution. This can be seen as the parallel of the classic result that every Nash equilibrium action is rationalizable (see Bernheim [7]). Theorem 1. Consider a static game G = ⟨I, (A_j, u_j)_{j∈I}⟩ and ⟨(Θ_j)_{j∈I}, p⟩ with the p_j's independent. Each QRE is a ∆(p)-rationalizable outcome.
Proof. Let π = (π_j)_{j∈I} be a QRE under ⟨G, (Θ_j)_{j∈I}, p⟩. We will construct a profile of random variables (s_j : Θ_j → A_j)_{j∈I} such that, for each i ∈ I, p_i(s_i^{−1}(a_i)) = π_i(a_i) for each a_i ∈ A_i and (θ_i, s_i(θ_i)) ∈ Σ^∞_{i,∆(p)} for each θ_i ∈ Θ_i. Recall that for each i ∈ I and k ∈ {1, ..., K_i}, E_{ik}(π) is the set of θ_i's under which a_{ik} is a best response to π_{−i}. For each i ∈ I and θ_i ∈ Θ_i, we define A^π_i(θ_i) = {a_{ik} ∈ A_i : θ_i ∈ E_{ik}(π)}, i.e., the set of best responses of player i against π_{−i} under θ_i. We consider a mapping s_i : Θ_i → A_i such that s_i(θ_i) ∈ A^π_i(θ_i). Note that for a "boundary" θ_i, i.e., one with |A^π_i(θ_i)| > 1, s_i(θ_i) can be any action in A^π_i(θ_i); this does not cause any problem since, due to Observation 1, those "boundary" states form a null set with respect to p_i.
Since π is a QRE, for each i ∈ I and a_{ik} ∈ A_i, p_i(s_i^{−1}(a_{ik})) = p_i(E_{ik}(π)) = π_i(a_{ik}). We then have to show that (θ_i, s_i(θ_i)) ∈ Σ^∞_{i,∆(p)} for each i ∈ I and θ_i ∈ Θ_i. First, by definition, (θ_i, s_i(θ_i)) ∈ Σ^0_{i,∆(p)} for each i ∈ I and θ_i ∈ Θ_i. Suppose the statement holds for Σ^n for some n ≥ 0. For each i ∈ I, let µ_i be the distribution over Θ_{−i} × A_{−i} induced by p_{−i} and (s_j)_{j≠i}. Then marg_{Θ_{−i}} µ_i = p_{−i}, so µ_i ∈ ∆_{θ_i}(p) for every θ_i, and by the inductive hypothesis µ_i(Σ^n_{−i,∆(p)}) = 1; moreover, the distribution over A_{−i} induced by µ_i is π_{−i}, so for each θ_i, s_i(θ_i) is a best response to µ_i under θ_i. Hence (θ_i, s_i(θ_i)) ∈ Σ^{n+1}_{i,∆(p)} for each θ_i ∈ Θ_i. By induction, we have shown that (θ_i, s_i(θ_i)) ∈ Σ^∞_{i,∆(p)} for each i ∈ I and θ_i ∈ Θ_i.
∆(p)-rationalizability → QRE
Examples in Section 3.2 suggest that the converse of Theorem 1 may hold under some conditions. As remarked after Example 5, it seems that the infimum of the probability that an action is used across all ∆(p)-rationalizable distributions coincides with the infimum of the probability that the action is used across all QREs. This subsection is devoted to exploring this conjecture.
We first have to formulate the conjecture. Consider a static game G = ⟨I, (A_j, u_j)_{j∈I}⟩ and ⟨(Θ_j)_{j∈I}, p⟩. In this subsection, we assume that the p_j's are independent and that, for each i ∈ I, p_i has a continuous density function f_i with full support (i.e., supp f_i = ℝ^{A_i}). We use Q(G, p) to denote the set of QREs. Brouwer's fixed point theorem guarantees that Q(G, p) ≠ ∅. For each i ∈ I and k ∈ {1, ..., K_i}, we define q_{ik} = inf{π_i(a_{ik}) : π ∈ Q(G, p)}, the smallest probability with which a_{ik} is used across all QREs. For each i ∈ I, we use S_i to denote the set of all random variables s_i : Θ_i → A_i. For each n ≥ 0, let S^n_i = {s_i ∈ S_i : (θ_i, s_i(θ_i)) ∈ Σ^n_{i,∆(p)} for each θ_i ∈ Θ_i}, i.e., the "restriction" of S_i to the n-th order of ∆(p)-rationality. For each k ∈ {1, ..., K_i}, we define q^n_{ik} = inf_{s_i ∈ S^n_i} p_i(s_i^{−1}(a_{ik})). As in Section 3.2, q^n_{ik} is interpreted as the infimum of the probability that i uses action a_{ik} at step n of the ∆(p)-rationalization process. It is clear that q^0_{ik} = 0, since Σ^0_{i,∆(p)} = Θ_i × A_i and the random variable s_i which assigns some a_{it} (t ≠ k) to each θ_i is in S^0_i. The investigation is decomposed into the following three steps: (a) We show that for each i ∈ I and k ∈ {1, ..., K_i}, the sequence (q^n_{ik})_n is bounded and non-decreasing. Hence, there is some q*_{ik} such that lim_{n→∞} q^n_{ik} = q*_{ik}. Also, for each i ∈ I and n ∈ N, ∑_{k=1}^{K_i} q^n_{ik} ≤ 1, and consequently ∑_{k=1}^{K_i} q*_{ik} ≤ 1. (b) We show that for each i ∈ I and k ∈ {1, ..., K_i}, q*_{ik} ≤ q_{ik}.
(c) We give a sufficient condition under which, for each i ∈ I and k ∈ {1, ..., K_i}, q*_{ik} is the probability that action a_{ik} is used in some QRE. Note that for t ≠ s, q*_{it} and q*_{is} may correspond to different QREs. In addition, we show that when |Q(G, p)| > 1, the condition is also necessary.
Properties of the infimum sequences
Proposition 1. For each i ∈ I and k ∈ {1, ..., K_i}, the sequence (q^n_{ik})_n is bounded and non-decreasing. Hence, there is some q*_{ik} such that lim_{n→∞} q^n_{ik} = q*_{ik}. Also, for each i ∈ I and n ∈ N, ∑_{k=1}^{K_i} q^n_{ik} ≤ 1. Proof. We first show the statement for 2 × 2 games and then generalize the idea to arbitrary cases. The quantities introduced below are the pillars for determining the area in Θ_i that is "definitely" associated with a given action in each step of the ∆(p)-rationalization procedure.
Consider the game in Table 5. Without loss of generality, assume that H We take player 1's viewpoint. For player 1's belief qL + (1 − q)R, q ∈ [0, 1], about player 2, at each θ 1 = (θ 1T , θ 1U ) satisfying player 1 should choose T. We use E 1 TU to denote the set of θ 1 's satisfying (9). Therefore, for every interior θ 1 ∈ E 1 TU (i.e., the strict inequality holds for θ 1 in (9)) (θ 1 , U) / ∈ ∑ 1 1,∆(p) . Note that since p 1 is absolutely continuous, for those boundary θ 1 's, though (θ 1 , U) still in ∑ 1 1,∆(p) , they have no essential influence to the outcome. This argument implies that, for any s 1 : . Also, it is easy to see that for all θ 1 / ∈ E 1 TU , (θ 1 , U) ∈ ∑ 1 1,∆(p) because they can be supported by some belief µ 1 . Therefore, In the same vein, we can see that . At the second round, each player has to update her belief based on the outcome of the first round. Again, we take the viewpoint of player 1. Still, player 1 chooses T at any ]. Yet now the set of available q becomes smaller: for each µ 1 ∈ ∆(Θ 2 × A 2 ) with µ 1 (∑ 1 2,∆(p) ) = 1 and marg Θ 2 µ 1 = p 2 , the q generated by µ 1 is in the interval [q 1 with q 0 ik = 0 for each i and k. Since each q n ik ∈ [0, 1], it is clear that they are bounded. We show the monotonicity part, i.e., q n+1 ik ≥ q n ik for each i, k and n, inductively. Since f 1 is continuous and has the full support, it is clear that Similarly, it can be seen that q 1 ik ≥ q 0 ik for all i and k. Suppose that for some n > 0, q n ik ≥ q n−1 ik for all i and k. Now we show that it also holds for n + 1. Since by the inductive hypothesis, i.e., q n 2R ≥ q n−1 1T . Similarly, we can see that q n ik ≤ q n+1 ik for all i and k. By induction, we have shown the first statement. Hence, there is some q * ik such that lim n→∞ q n ik = q * ik . It is easy to see that each q n 1k (q n 2k ) is the measure of the area under or above some line on the θ 1T -θ 1U (θ 2L -θ 2R ) space, as shown in Figure 2. Geometrically, the first statement implies that the intercepts of those boundary lines converge to a point "in the middle". 6 Note that since ( . This outcome will be generalized in the following. Next, we show that q n 1T + q n 1U ≤ 1 and q n 2L + q n 2R ≤ 1 for each n ∈ N. The statement holds for n = 1 since H TU 1 ≤ H TU 1 and H LR 2 ≤ H LR 2 . Suppose it holds for some n ∈ N. Since q n 2L + q n 2R ≤ 1, i.e., q n 2L ≤ 1 − q n 2R , it follows from (10) and (11) that q n+1 1T + q n+1 1U ≤ 1. Similarly, q n+1 2L + q n+1 2R ≤ 1. Here we have shown the first statement for 2 × 2 games. Now we show how to generalize the method. Consider a general game G, (Θ j ) j∈I , p .
For each i ∈ I and k, t ∈ {1, ..., where the maximum is taken over the intervals determined by q n−1 jt 's. The set E kt i,n contains all θ i 's where player i should choose a ik against a it (again, the boundary does not matter). The maximum exists since the product of intervals is compact. Define E k i,n = ∩ K i s=1 E ks i,n . It can be seen that q n ik = p i (E k i,n ). By the inductive hypothesis, the feasible region for (q j (a j )) a j ∈A j (j ∈ I) is non-increasing from n − 1 to n, hence the maximum value is nonincreasing. Therefore, E kt i,n ⊆ E kt i,n+1 , and consequently q n ik ≤ q n+1 ik . Since p(E k i,n ∩ E t i,n ) = 0, ∑ K i k=1 q n ik ≤ 1 for each n ≥ 0. Therefore, the statement holds for general cases. Proposition 2. q * ik ≤ q ik for each i ∈ I and k ∈ {1, ..., K i }.
Proof. Suppose that q*_{ik} > q_{ik} for some i ∈ I and k ∈ {1, ..., K_i}. Then every ∆(p)-rationalizable distribution for player i assigns probability at least q*_{ik} > q_{ik} to a_{ik}. Yet, by the definition of q_{ik}, there is a QRE in which a_{ik} is used with probability q_{ik} (or arbitrarily close to it), and by Theorem 1 every QRE is a ∆(p)-rationalizable outcome, which is a contradiction.
Conditions for ∆(p)-rationalizability → QRE
Propositions 1 and 2 imply that for each i ∈ I and k ∈ {1, ..., K i }, q * ik , the infimum of the probability that action a ik is used in the outcome of the ∆(p)-rationalization process, is no larger than q ik , the smallest probability that the action is used in all QREs. Our purpose now is to find conditions under which q * ik = q ik , or, equivalently, the following statement holds.
Statement 1.
For each i ∈ I and k ∈ {1, ..., K i }, q * ik represents the probability that action a ik is used in some QRE.
Example 6 in the previous section shows that Statement 1 does not hold unconditionally. We use the game in Table 7 and discuss two cases where the conjecture holds and does not, respectively. The discussion will give an intuition about the conditions under which the statement holds.
Case 1 (Statement 1 holds). As in the proof of Proposition 1, we assume that H . Since (θ i , a i ) has not been eliminated in the ∆(p)-rationalization procedure, a i is optimal under θ i against on a belief (q j ) j =i ∈ ∏ j =i ∆(A j ) of the distributions of others' choices such that for each j = i and k ∈ {1, ..., K j }, q j (a kj ) ≥ q * jk . In other words, if we use B = (B i ) i∈I to denote the operator on subsets E ⊆ Θ × A such that B i (E) is the set of (θ i , a i ) ∈ E i which can be supported by some belief µ i in Definition 2, then B(∏ i∈I ∑ ∞ i,∆(p) ) = ∏ i∈I ∑ ∞ i,∆(p) . This is called the fixed-point property of the outcome of a rationalization procedure (see Pearce [24], Battigalli and Bonanno [3]).
Return to the game in Table 7. The fixed-point property implies that at the limit, Is there any QRE where player 1 using T with probability q * 1T ? The answer is yes. Con- This is a QRE. Indeed, note that when q 2 (L) = 1 − q * 2R , at each , player 2 chooses L, and it follows from equation (18) In a similar manner, we can show that each q * ik is the probability that player i uses the action k in some QRE.
Case 2 (Statement 1 does not hold). Assume that
At the limit, it should be Is there any QRE where player 1 using T with probability q * 1T ? Not necessarily so. To be specific, when q * 2L + q * 2R = 1 (which implies that q * 1T + q * 1U = 1), the answer is yes.
Yet if q * 2L + q * 2R < 1 (which implies that q * 1T + q * 1U < 1), as illustrated in Figure 5, the answer is definitely no. To see this, without loss of generality, suppose that in some QRE q = (q 1 , q 2 ), player 1 uses T with probability q 1 (T) = q * 1T . To fulfill this, by equation (19), q 2 (L) = q * 2L , which implies that q 1 (U) = q * 1U , and consequently it leads to q 2 (R) = q * 2R . However, by assumption, p 2 (L) + p 2 (R) = q * 2L + q * 2R < 1, not even a probability distribution. One may notice that the difference is at the pattern of the influence relations between actions. Indeed, for each i and k, the ∆(p)-rationalization procedure is based on recursively determining the area in Θ i where θ ik − θ it (t = k) is bigger than the maximum value generated by the infimum of q −i,s 's in the previous stage. To find the maximum value, one needs to make as small as possible the probabilities of the opponent's actions which is associated with a value strictly less than H kt i . Those actions of the opponent are the "marginal" ones determining the value of q n ik for each n. If we want one action a ik to be used by the probability q * ik , then every action a jt directly or indirectly marginal for it should be used by q * jt . Since all those are infimums, whether they are in some QRE depends on whether the remaining actions can decompose the residue and form an equilibrium.
We now formalize the idea in 2-person games. The condition will be formalized in Theorem 2. The generalization will be discussed in Section 4.2.3.
Consider a 2-person game G = A 1 , A 2 , u 1 , u 2 . For each i = 1, 2, we define a corre- Suppose that φ i (a ik , a it ) = ∅. Note that φ i (a ik , a it ) can never be A −i , since there is some it ) for each i ∈ I and a ik ∈ A i . Informally, Φ i (a ik ) is the set of "marginal" actions of player −i for a ik since the infimums of the probabilities that those actions are used determines the area in Θ i where a ik is optimal. 7 Φ i defines a relational structure on A i . An action a ik ∈ A i is called non-serial iff Φ(a ik ) = ∅; it is called indirectly non-serial iff for some b −i ∈ Φ(a ik ), Φ(b −i ) = ∅. 8 We have the following result.
Lemma 2.
If for some i ∈ {1, 2}, a ik ∈ A i is non-serial, then each action in A i is non-serial, and consequently every action in the game is eventually non-serial.
Proof. If a ik is non-serial, then for each a i ∈ A i and each Hence Φ i (a i ) = ∅ for each a i ∈ A i , and consequently every action in the game is eventually non-serial.
We call a game serial iff no action is non-serial. It can be seen that (Φ 1 , Φ 2 ) defines a directed graph (⇒, A 1 ∪ A 2 ) such that for each a ik , a jt ∈ A 1 ∪ A 2 , a ik ⇒ a jt iff a jt ∈ Φ i (a ik ). For example, in Figure 6 (1) it is the directed graph for the game in Example 5, and in Figure 6 (2) is that for Example 6. For each a ik , a jt ∈ A 1 ∪ A 2 , a jt is reachable from a ik iff there are a i 0 ,k 0 , a i 1 ,k 1 , ..., Note that a ik is reachable from itself (i.e., when N = 0). We use C(a ik ) to denote the set of all actions reachable from a ik .
Also, we define an operator L on 2 A 1 ∪A 2 as follows: for each B ⊆ A 1 ∪ A 2 , L(B) := {a j ∈ A 1 ∪ A 2 : Φ j (a j ) ⊆ B}. Informally, L(B) is the set of actions whose marginal influencers are in B. We can repeatedly apply L to a set, and we define L ∞ (B) = ∪ ∞ n=0 L n (B); here, we stipulate that L 0 (B) = B. L ∞ (B) is the set of all actions directly and indirectly being influenced by and influencing actions in B.
Theorem 2. Let a ik ∈ A i . If one of the following condition is satisfied, then q * ik = q ik : (1) a ik is eventually non-serial, or (2) Proof. First, suppose that a ik is non-serial. It follows that for each t ∈ {1, ..., . Also, it is clear that in every QRE, a ik is used with probability p i (∩ K i s=1 E kt 1 ). Hence q * ik = q ik . Similarly, if a ik is indirectly nonserial, since the probabilities of their marginal actions will be fixed from the first round, q n ik is fixed for each n ≥ 2, and in every QRE a ik is used by q 2 ik . Hence we still have q * ik = q ik . Now suppose that a ik is not eventually non-serial and L ∞ (C(a ik )) = A 1 ∪ A 2 . We define the following symbols: for each j ∈ {1, 2}, It is clear that for each j ∈ I, A o j and A j form a partition of A j . We have the following observation.
Observation 2.
When a ik is not eventually non-serial, L ∞ (C(a ik )) = A 1 ∪ A 2 implies that A j = ∅ for each j = 1, 2.
To see this, suppose that A j = ∅ for some j ∈ {1, 2}. Then ). Yet since no a −j ∈ A −j is non-serial, there should be some b j ∈ A j marginal to some a −j ∈ A −j which is not in L ∞ (C(a ik )), otherwise by definition A −j = ∅. Yet since A j = ∅, b j ∈ L ∞ (C(a ik )), and consequently a −j ∈ L ∞ (C(a ik )), which implies that A −j = ∅, a contradiction. Therefore, A j = ∅ for both j = 1, 2.
Now we return to the proof of Theorem 2. To show q * ik = q ik , we show that Statement 1 holds here, i.e., there is some QRE where a ik is used by probability q * ik . Combining it with Propositions 1 and 2 we obtain q * ik = q ik . Consider q = (q 1 , q 2 ) ∈ ∆(A 1 ) × ∆(A 2 ) defined as follows: (a) For each a jt ∈ A o jt for each j ∈ {1, 2}, let q j (a jt ) = q * jt . (b) Since we have shown above that A j = ∅ for both j = 1, 2, we can define A j : ∑ a js ∈A j r j (a js ) = 1 − ∑ t:a jt ∈A o j q * jt and r j (a js ) ≥ q * js for each s with a js ∈ A j } By Proposition 1, each B j is well defined. It is clear that each B j is compact and convex, so is B := B 1 × B 2 . Now consider g : B → B such that for each j ∈ {1, 2}, r ∈ B, and s such that a js ∈ A j , g js (r) By our assumptions about p j , j = 1, 2, g is continuous. It follows from Brouwer's fixed point theorem that g has a fixed point r * . Then for each a js ∈ A js , we let q j (a js ) = r * js . It can be seen that q is a QRE in which a ik is used by probability q * ik .
The condition provided in Theorem 2 is sufficient. It is not necessary when the QRE is unique. To see this, consider games in Examples 3 and 6 in Section 3.2. 9 For each game and each a ik , L ∞ (C(a ik )) = A 1 ∪ A 2 . However, the ∆(p)-rationalization procedure converges to the QRE in Example 3 but fails to do so in Example 6. However, when there are multiple QREs, the conditions in Theorem 2 is also necessary. We have the following result.
Proof. Since no action is eventually non-serial and L ∞ (C(a ik )) = A 1 ∪ A 2 , it follows that for each distinct π, π ∈ Q(G, p), each j ∈ I and t ∈ {1, ..., K i }, π j (a jt ) = π j (a jt ). Therefore, ∑ s∈{1,...,K j } q js < 1 for each j ∈ I. Applying the fixed-point property as in Case 2 (Statement 1 does not hold) above, it follows that if q * ik = q ik for some i ∈ I and k ∈ {1, ..., K i }, it follows from L ∞ (C(a ik )) = A 1 ∪ A 2 that all q * jt = q jt , which does not form a QRE. Hence q * ik < q ik for each i ∈ I and k ∈ {1, ..., K i }.
The strictness of the conditions and a full convergence
Theorem 2 gives a sufficient condition for a ∆(p)-rationalization process "locally" converging to some QRE (locally means that we only focus on an individual action a_{ik}); by Proposition 3, it is also necessary when the QRE is not unique. Now we face a problem: How "large" is the set of games satisfying condition (1) or (2) in Theorem 2? Or, how "special" can such a game be? This problem is also relevant to generalizing Theorem 2 to n-person games. It is clear that condition (1) is quite strict and does not hold for generic games. Indeed, as noted in Lemma 2, one action a_{ik}'s non-seriality implies that the payoff matrix of player i has order 1. Condition (2) seems more general. In a 2 × 2 game, it implies a directed graph as in Figure 6 (1). Many games intensively studied in the literature satisfy the condition, for example, the asymmetric game of chicken (Goeree et al. [12], pp. 25–26), coordination games (Goeree et al. [12], pp. 29–30; Anderson et al. [1]; Turocy [30]), and many (but not all) dominance-solvable games. However, it is impossible for a Matching-Pennies style game (MP, Ochs [22]) to satisfy condition (2). For a general 2-person game, we have the following result. Lemma 3. Suppose that a_{ik} ∈ A_i is not eventually non-serial. Then if it satisfies condition (2) in Theorem 2, it satisfies the following two conditions: Proof. For (A), since a_{ik} is serial, |Φ_i(a_{ik})| ≥ 1. Suppose that |Φ_i(a_{ik})| > 1; then A_i ⊆ L^∞(C(a_{ik})), and by Observation 2 it follows that L^∞(C(a_{ik})) = A_1 ∪ A_2, a contradiction. Hence, |Φ_i(a_{ik})| = 1.
Lemma 3 shows that condition (2) in Theorem 2 can be quite strict in general 2-person games. It actually implies that most games do not satisfy the condition. We have the following result. Proposition 4. Consider a 2-person serial game with |A_i| ≥ 2 for each i = 1, 2 in which at least one player has more than two actions. Then no action satisfies condition (2) in Theorem 2.
Proof. By Lemma 1, {b_1, c_1} ⊆ Φ_2(b_2) for each b_2 ∈ A_2 with b_2 ≠ a_2 (since |A_2| ≥ 2, such a b_2 exists), and consequently b_2 ∈ L^2(C(a_1)). Hence A_2 ∩ L^∞(C(a_1)) = A_2. By Observation 2, L^∞(C(a_1)) = A_1 ∪ A_2, a contradiction. Proposition 4 implies that condition (2) in Theorem 2 cannot be satisfied in a generic n-person game (n > 2). Even if each player has only two actions, since Φ_i(a_{ik}) now contains profiles of actions in A_{−i} = ∏_{j≠i} A_j and |A_{−i}| > 2, through an argument similar to the proof of Proposition 4, it can be seen that no action satisfies condition (2) in Theorem 2.
Conclusion
In this paper, we define the ∆(p)-rationalization procedure, a special case of Battigalli and Siniscalchi [5]'s ∆-rationalization procedure, to characterize robust outcomes in large populations, and we investigate the relationship between ∆(p)-rationalizable outcomes and MP's QREs. Our results have two implications. First, in a non-trivial class of 2 × 2 games, characterized in Theorem 2 and Proposition 3, QREs are informative for determining robust outcomes. Second, however, in general, when the QRE is not unique, the set of robust outcomes derived from the ∆(p)-rationalization procedure can be larger than the set of QREs, which makes the former a better benchmark for estimating robust outcomes in large populations. 10
Eosinophilic Granulomatosis with Polyangiitis Diagnosed by Gallbladder Tissue
In eosinophilic granulomatosis with polyangiitis (EGPA), the incidence of gastrointestinal involvement is reported to range from 17% to 59% [1]. Gallbladder involvement is a rare comorbid condition in EGPA [2]. We present an atypical case of EGPA diagnosed on the basis of histological findings of the gallbladder after cholecystectomy. The study was approved by the Institutional Review Board of the Jeju National University Hospital (IRB no. 2018-07-009). A 47-year-old man visited the hospital with progressive weakness and sensory deterioration in both lower legs for 9 days. He had been diagnosed with asthma 6 months previously and had a history of surgery for sinusitis 5 months previously. Physical examination showed decreased muscle strength and right-sided foot drop. On blood testing, leukocytosis with a marked increase in eosinophils was observed (white blood cell count 21,300/μL; segmented neutrophils 33.7%, lymphocytes 7.5%, monocytes 2.1%, eosinophils 56.5%). Further laboratory examination revealed an increase in C-reactive protein to 4.68 mg/dL, an erythrocyte sedimentation rate of 45 mm/hr, and an immunoglobulin E level of 2,500.0 IU/mL. In addition, rheumatoid factor (27 IU/mL) and myeloperoxidase antibody (150.5 IU/mL) were positive.
Muscle weakness progressed gradually, with left-sided foot drop developing on the night of admission, followed by right wrist drop, which presented the following day. On the third day, a nerve conduction study was performed, which showed multiple mononeuropathy. A 3 cm long segment of the sural nerve was biopsied from the lateral aspect of the left ankle, and high-dose corticosteroid treatment (1 mg/kg prednisolone) was initiated immediately. Abdominal and pelvic computed tomography (APCT) and chest CT were performed to rule out the possibility of peripheral neuropathy associated with malignancy. Diffuse irregular gallbladder wall thickening was seen on APCT (Figure 1). However, positron emission tomography-computed tomography revealed no findings suspicious for gallbladder cancer. Cholecystectomy was performed as recommended by the surgeon, in order to rule out malignancy. Nerve biopsy results showed no inflammatory cell infiltration or vasculitis. However, eosinophilic granulomatosis with polyangiitis was diagnosed from the gallbladder tissue due to the presence of chronic active inflammation with granulomatous vasculitis and eosinophilic infiltration (Figures 2 and 3). The patient was started on cyclophosphamide and high-dose corticosteroid treatment, after which muscle strength gradually improved.
Histopathologic analysis still remains the gold standard for the diagnosis of antineutrophil cytoplasmic antibody-associated vasculitis. Most cases are diagnosed by performing a biopsy at a symptomatic site, but characteristic EGPA findings may be seen in a biopsy performed at a symptom-free site, as observed in this patient.
Figure 1. Abdominal and pelvic computed tomography scan showing diffuse wall thickening of the gallbladder with some irregularity.
Departments of Internal Medicine and Pathology, Jeju National University School of Medicine, Jeju, Korea
Figure 2. Granuloma surrounds the blood vessel (blue arrows) and granulomatous inflammation is also noted in and around the blood vessels (H&E, ×40).
Figure 3. The vessel wall (yellow arrows) is destroyed by inflammatory infiltrates in the right upper area of the vessel (vasculitis with fibrinoid necrosis, red arrow). Many eosinophils, which have bright red cytoplasm, infiltrate into and around the vessel (black arrows) (H&E, ×200).
A rare case of transient left ventricular apical ballooning syndrome following living donor liver transplantation: A case report and literature review
Highlights • Transient Left Ventricular Apical Ballooning Syndrome (LV-ABS) is an acute dysfunction of the left ventricle. • Its clinical features are similar to those of myocardial infarction. • The echocardiogram shows characteristic findings that differ from those of myocardial infarction. • LV-ABS is rare but should be considered after liver transplantation. • Proper diagnosis is important to restore cardiac function, although it usually improves with conservative treatment alone.
Introduction
Transient left ventricular apical ballooning syndrome (LV-ABS) is an acute dysfunction of the left ventricle of unknown cause, which develops after emotionally or physically stressful events such as those in the perioperative setting after minor or major surgical procedures, including deceased donor liver transplantation [1][2][3]. The clinical features mimic an acute myocardial infarction, although the syndrome shows unique echocardiographic features [4][5][6]. Characteristically, hypokinesis or akinesis occurs in the middle and apical segments of the left ventricle in the absence of epicardial coronary lesions, which results in ballooning of the apical wall with sparing of basal systolic function. The prognosis of this disease is relatively good, and the cardiac function of most patients improves to normal levels with proper conservative management [4,5]. Here, we report a case of transient LV-ABS in a liver transplantation recipient several days after LDLT, with a brief literature review. Although the patient temporarily required intensive care, her cardiac function recovered to a normal level after conservative treatment. Our work has been reported in line with the SCARE criteria [7].
Presentation of case
A 68-year-old female patient had been diagnosed with hepatitis C cirrhosis along with hepatocellular carcinoma (HCC), and the HCC was treated with radiofrequency ablation (RFA) therapy. After RFA therapy, her liver function deteriorated and she frequently developed hepatic encephalopathy. She was referred to our hospital as a candidate for living donor liver transplantation (LDLT). The preoperative Model for End-stage Liver Disease (MELD) score was 14, and there was no evidence of recurrent HCC.
Her past medical history was unremarkable, and there was no evidence for underlying cardiac or pulmonary disease. Preoperative echocardiography demonstrated normal left ventricular function (EF 65%) with normal left ventricular size and motion and a 12-lead electrocardiogram (ECG) showed no abnormal findings including ischemic change.
She underwent LDLT using a left lobe graft from her son. The total operating time was 13.0 h and the amount of bleeding was 4700 ml. Despite a high-volume transfusion (packed red blood cells 14 units, fresh frozen plasma 5 units, platelet cells 10 units) during transplantation, the intraoperative course was uneventful. The patient required vasopressors during the procedure but had no cardiac event or hemodynamic instability. After the operation, epinephrine was administered to the patient (8.62 γ).
The patient showed a good post-operative course and was extubated on post-operative day (POD) 2. She was discharged from the ICU on POD 3. On POD 4, she experienced dyspnea and tachycardia. Her blood pressure and oxygenation level had decreased. She needed re-intubation and ventilation support due to sustained hypotension and hypoxemia. The chest X-ray examination showed an expanded cardiothoracic ratio (CTR) and bilateral lung field opacities in a butterfly distribution, suggesting pulmonary edema due to congestive heart failure (Fig. 1a). Computed tomography revealed a diffuse ground-glass appearance in both lungs with pleural effusion (Fig. 1b). Echocardiogram showed a severe apical wall motion abnormality with normal basilar wall motion and severely impaired left ventricular function with an EF of 40% (Fig. 2). A 12-lead ECG demonstrated sinus tachycardia, ST-elevation in all leads, and T-wave inversion. Blood tests showed that the peak creatine kinase MB (CK-MB) level was 11.0 IU/L (normal 0-7 IU/L) and the troponin I level was 0.33 ng/mL (normal <0.03 ng/mL). NT-pro BNP, an index of heart failure, was greatly elevated at 6699 pg/mL (normal <125 pg/mL). Based on these clinical and laboratory findings, especially the characteristic echocardiographic features, a diagnosis of transient LV-ABS (also referred to as "stress induced cardiomyopathy" or "takotsubo cardiomyopathy") was made. Because the patient was hemodynamically unstable and had a bleeding diathesis, we withheld coronary catheterization although myocardial infarction could not be excluded completely. We started to administer diuretics and human atrial natriuretic peptide (hANP). The patient's cardiac and respiratory functions gradually improved. Echocardiography revealed that the EF had returned to 62%, and she was extubated on POD 9. As cardiac function improved, epinephrine was gradually tapered. She was discharged from the Intensive Care Unit (ICU) on POD 10. After discharge from the ICU, the patient's subsequent course was unremarkable during the period of hospitalization. NT-pro BNP gradually decreased to 3099 pg/mL, 2655 pg/mL, 2017 pg/mL, and 228 pg/mL on POD 8, 11, 25, and 39, respectively. On POD 44, echocardiogram showed normal left ventricular function (EF 60%) with no wall motion abnormalities (Fig. 3). She was discharged on POD 50 in good condition, with no signs of cardiac insufficiency and good function of the liver graft.
Discussion
A transient LV-ABS (also known as "stress induced cardiomyopathy" or "takotsubo cardiomyopathy") is a reversible acute dysfunction of the left ventricle. This disease typically occurs in postmenopausal women (88%), following an emotionally or physically stressful event [8]. Its clinical features are similar to those of an acute myocardial infarction. The clinical diagnostic criteria are i) transient akinesis or dyskinesis of the LV apical and mid-ventricular segments with regional wall motion abnormalities extending beyond a single epicardial vascular distribution, ii) absence of obstructive coronary disease or angiographic evidence of acute plaque rupture, iii) new ECG abnormalities such as ST-segment elevation or T-wave inversion, and iv) the absence of recent head trauma, intracranial bleeding, pheochromocytoma, obstructive epicardial coronary artery disease, myocarditis, or hypertrophic cardiomyopathy [6]. Since coronary catheterization was not performed due to hemodynamic instability and bleeding diathesis, we could not completely exclude obstructive coronary disease in this case. However, the other findings fit the rest of the diagnostic criteria outlined above. In particular, the echocardiogram showed the characteristic findings of this syndrome, and there was no obvious evidence of ischemic heart disease on various other examinations. The pathophysiology of this disease is unknown, but it is suggested to result from myocardial stunning secondary to high levels of circulating catecholamines and stress-related neuropeptides, multi-vessel epicardial spasm, and myocarditis [1,4,6,9]. The most plausible mechanism is that a surge of catecholamines following a stressful event impairs myocardial perfusion, resulting in cardiac myocyte injury [4]. In our case, the patient had undergone stressful liver transplantation, which required a large volume of transfusion and a long operation time.
There is a hypothesis that a decreased level of estrogen, which is reported to protect the myocardium from the stress of the sympathetic nervous system, leads to transient LV-ABS [9,10]. It has also been reported that administration of estrogen was able to prevent the onset of transient LV-ABS in postmenopausal women [9]. These findings suggest that postmenopausal women, such as our patient, have a high risk of transient LV-ABS because their estrogen levels have decreased. Of note, transient LV-ABS after liver transplantation occurred not only during or immediately after surgery but also several days after the operation (Table 1).
When a patient, especially a postmenopausal woman, complains of chest pain or dyspnea after liver transplantation, the clinician should consider transient LV-ABS although it is relatively rare [2,11,12]. Furthermore, because this cardiac complication does not occur only in the early period after surgery, careful observation is necessary during the week before and after surgery.
As for treatment, although optimal management has not been established, it is desirable to avoid or taper catecholamine use if possible and to treat heart failure with diuretics, hANP, vasodilators, and β-blockers to reduce the pre- and after-load. In serious cases, use of intra-aortic balloon pumping (IABP) can also be considered until the cardiac function improves, and the hemodynamic state should be managed strictly [13]. If cardiac function is maintained successfully with conservative management, it will return to a normal level without specific treatment, and the prognosis is good in most cases. It has been reported that there is no difference in the 4-year survival rate between patients with transient LV-ABS and age- and gender-matched general populations. It has also been reported that in-hospital mortality is <2%, and the recurrence rate is generally <10% [4,8]. In the literature, there are 10 reported cases of transient LV-ABS after liver transplantation besides our case (Table 1). Interestingly, cardiac function recovered to the normal range in all cases. Therefore, once transient LV-ABS is diagnosed, it seems important to manage cardiac function properly and strictly until it improves.
Conclusion
In conclusion, we report a case of transient LV-ABS in an LDLT recipient. The stress of invasive surgery such as liver transplantation may be associated with myocardial stunning. It could result in acute heart failure and become a fatal complication after liver transplantation without proper diagnosis and treatment. Transient LV-ABS should be considered as a cause of cardiac dysfunction in recipients of liver transplantation.
Conflicts of interest
We declare no conflict of interest.
Sources of funding
This work was supported by the Japan Agency for Medical Research and Development (AMED;JP15fk0210016 h003 and JP16fk0210107).
Ethical approval
According to our institution guideline, approval to publish this case report was waived by the institution.
Consent
Written informed consent for publication of this case report and accompanying images was obtained from the patient.
Author's contribution
Asuka Tanaka and Takashi Onoe acquired the data, wrote an original draft of the manuscript, revised the draft and approved the final manuscript. Kohei Ishiyama, Kentaro Ide, Hirotaka Tashiro and Hideki Ohdan acquired the data, read and revised the draft and approved the final manuscript.
Registration of research studies
Not applicable.
Provenance and peer review
Not commissioned, externally peer-reviewed.
Preparation and Characterization of Loess/Polyacrylamide Composites
Using loess soil (LS), one of the most abundant soils in the world, as a natural silicate mineral material, a novel low-cost inorganic/synthetic polymer composite, the loess/polyacrylamide composite (LS/PAM), was prepared by in-situ polymerization of acrylamide. The obtained LS/PAM composite was characterized by infrared spectroscopy (IR), thermogravimetric analysis (TG) and scanning electron microscopy (SEM). The results indicate that polyacrylamide (PAM) was composited uniformly and successfully with the loess particles.
Introduction
Loess soil (LS), a typically loose and porous silicate mineral through which water percolates easily [1,2], is one of the most abundant soils in the world; major deposits are found in the USA, Argentina, China and several countries in Asia and Europe. In China, the Loess Plateau covers almost all of the northwestern region and part of the northeast, so loess has good prospects for industrial use. In past decades, the geologic origin, distribution, composition, and properties of loess have been investigated by geologists, pedologists and environmental scientists [3]. Loess has high pH values, carbonate content, porosity and permeability, and low organic matter content. Its main constituents are silica, aluminosilicates, CaCO3, iron oxide particles, and carbonaceous particles, and the main clay minerals in natural loess are illite, kaolinite, chlorite, and montmorillonite [4]. The chemical composition of loess soil is mainly SiO2, followed by Al2O3, CaO, and small amounts of Fe2O3, MgO, K2O, Na2O, TiO2, and MnO. It has been found that loess can be used to remove pollutants from aqueous solution [5,6], but its adsorption capacity is not high compared with other natural or synthetic polymer adsorbents [7]. Nevertheless, we found that polymer modification is an effective method for improving its adsorption capability. For instance, a loess-based poly(acrylic acid-2-hydroxyethyl methacrylate) complex and a loess-based polymethylacrylic acid-g-chitosan complex were synthesized successfully and applied to remove contaminants from wastewater [8,9]. In this paper, a loess soil/synthetic polymer composite, the loess/polyacrylamide composite (LS/PAM), was prepared by in-situ polymerization and characterized by IR, TG and SEM.
Materials
Loess soil (LS) was obtained free of charge from a local hill in Longnan, China; it was collected 50-100 cm below the surface. Acrylamide (AM) and ammonium persulfate (APS) were supplied by Tianjin Baishi Chemical Co., Ltd. N,N-methylene-bis-acrylamide (MBA) was purchased from Tianjin Institute of Chemical Reagents. Ethanol was purchased from Tianjin Fuyu Fine Chemical Co., Ltd. All reagents were of analytical grade and were used without further purification.
Preparation of LS/PAM
First, the raw loess was ground and passed through a 100-mesh sieve, which afforded the loess soil (LS) powder.
Second, the loess/polyacrylamide composite (LS/PAM) was prepared by in-situ polymerization of acrylamide: 12 g of loess soil (LS) was dispersed uniformly in 30 mL of distilled water with stirring for 30 min at room temperature. Then, 4 g of acrylamide (AM) and 0.03 g of N,N-methylene-bis-acrylamide (MBA) were added and stirred for 30 min at room temperature. The mixture was heated to 45 °C under stirring. After adding 0.05 g of ammonium persulfate (APS), the temperature was raised to 75 °C and the mixture was held for 60 min under continuous stirring, which afforded a gel-like solid. To remove unreacted monomers, the gel-like solid was washed with H2O and then with EtOH. Finally, the product was cut into small pieces and vacuum dried at 50 °C for 10 h, which afforded the loess/polyacrylamide composite (LS/PAM).
Characterization
The following equipment was used to characterize the obtained particles: FT-IR (DIGILAB FTS-3000 spectrophotometer), thermogravimetric (TG) and differential thermal analysis (DTA) (PerkinElmer Pyris Diamond), and scanning electron microscopy (SEM) (ULTRA Plus, at 5 kV, Germany). FT-IR spectra were recorded between 4000 and 400 cm−1 by the KBr method with the FTS-3000 spectrophotometer. The morphology of the loess particles was observed with the SEM microscope.
Results and discussion
As is well known, polyacrylamide (PAM) is a relatively cheap and commercially important cationic polymer used mainly for water treatment because of its high efficiency and rapid dissolution, and it shows excellent application prospects in sewage treatment [10], for example as an adsorbent [11], a green flocculant [12,13], or a catalyst [14]. One of the largest uses of polyacrylamide is to flocculate solids in a liquid; this process applies to water treatment and to processes such as paper making and screen printing. Polyacrylamide can be supplied in powder or liquid form, with the liquid form being subcategorized as solution and emulsion polymer. Polyacrylamide can adsorb solid particles suspended in polluted water, making the particles aggregate and form precipitates. Therefore, it can accelerate the settlement of particles in suspension and markedly speed up solution clarification and filtration [15]. When composited with natural polymers [16] or with synthetic inorganic and mineral materials [17], its activity or selectivity can be improved [18]. For example, amino-functionalized polyacrylamide was grafted onto magnetite nanoparticles via surface-initiated atom transfer radical polymerization (ATRP) [19]. Liu et al. [20] reported that Al(OH)3-polyacrylamide chemically modified with dithiocarbamates was synthesized using formaldehyde, diethylenetriamine, carbon disulfide, and sodium hydroxide for rapid and efficient removal of Cu2+ and Pb2+. Ramirez-Muniz et al. [21] prepared a goethite-PAM composite by immobilizing goethite (iron oxide) on a PAM hydrogel, the hydrogel being prepared via in-situ free radical polymerization using MBA as the crosslinking agent. They also found that the immobilization of goethite on the PAM hydrogel reduces the specific surface area of the composite compared with the powdered goethite, which slightly affects the arsenic adsorption capacity. Wang et al. reported the synthesis of polyacrylamide-intercalated molybdenum disulfide composites (PAM/MoS2) by a simple hydrothermal method for the efficient elimination of Cr(VI) from aqueous solutions [22]. They found that Cr(VI) ions were first adsorbed onto the surfaces of the composites by electrostatic attraction, and then entered the interlayer and combined with the amide groups at the interlamination of the composites. Saad et al. [23] reported that ordered mesoporous silica (MCM-41) could be grafted with polyacrylamide (PAM) through an in-situ radical photopolymerization process. The obtained composite was then employed for the uptake of Hg(II) from aqueous solutions, and its maximum adsorption capacity reached 177 mg/g at 25 °C and pH 5.2.
Functional materials can be prepared by modifying or compositing PAM; however, the cost is usually high. Therefore, one promising direction is to prepare efficient and inexpensive functional polymer materials. Loess soil, a yellow soil with uniform particle size and high porosity, is an abundant, green natural material. In China, the Loess Plateau covers almost all of the northwestern region and parts of other regions. Although its adsorption activity has been investigated [24], there are few reports on loess composites. In the present study, in order to obtain a novel low-cost polymer adsorbent, a loess soil/polymer composite, the loess/polyacrylamide composite (LS/PAM), was prepared by in-situ polymerization and characterized by IR, TG and SEM.
FT-IR spectra: The loess/polyacrylamide composite (LS/PAM) and its raw material (LS) were characterized by FT-IR spectroscopy, and the results are shown in Figure 1. In the spectrum of LS, a broad absorption peak at 1037 cm−1 is assigned to the Si-O-Si stretching vibration, and the sharp peak at 797 cm−1 corresponds to quartz [25]. In the spectrum of LS/PAM, the characteristic peaks of LS remain near 1037 cm−1 and 797 cm−1, and characteristic peaks of polyacrylamide also appear. The peak at 1685 cm−1 is attributed to the stretching vibration of -C=O, and the peaks at 1645 cm−1 and 3184 cm−1 correspond to the deformation vibration and the characteristic absorption of -NH2, respectively [26]. These results suggest that polyacrylamide was composited with loess successfully. TG analysis: LS/PAM and LS were characterized by thermogravimetric (TG) and differential thermal analysis (DTA), and the results are shown in Figure 2. In the TG curve of LS, the physically adsorbed water escaped quickly below 100 °C; the interlayer water and bound water were lost slowly above 100 °C, and 5% of the weight was lost in total. In the TG curve of LS/PAM, there are two stages of mass loss. The initial 5% of the weight was lost below 100 °C, which is attributed to the escape of physically adsorbed water. The second weight loss, at 250-350 °C, is related to the decomposition of the carboxyl groups, and the last weight loss is associated with the breakage of the polymer chains. DTA showed two well-differentiated exothermic peaks at 300-350 °C, which coincide with the pyrolysis of the organic matter.
SEM images: The micromorphology of the loess/polyacrylamide composite (LS/PAM) and its raw material (LS) was characterized by SEM (Figure 3). The loess soil appears to be made up of aggregated small particles. In the SEM image of LS/PAM, the surfaces of the LS particles are covered and linked by a PAM polymer film, indicating that the loess was composited uniformly with polyacrylamide. Based on the above characterization, it is concluded that polyacrylamide was composited with loess successfully. In addition, the polyacrylamide molecular chain contains a large number of amide groups, which confer good water solubility, excellent flocculation performance and adsorption properties. It can form hydrogen bonds with many substances through affinity or adsorption. Therefore, it shows excellent application prospects in sewage treatment.
The adsorption behavior of LS/PAM was investigated by removing lead ions from aqueous solutions. The results showed that the removal rate of lead ions was more than 98% under optimal conditions, meaning that a novel low-cost loess-based polymer composite adsorbent was successfully prepared.
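For reference, the removal rate quoted above is the standard removal-efficiency ratio computed from the initial and equilibrium solution concentrations. The short sketch below illustrates the calculation; the function name and the Pb(II) concentrations are hypothetical placeholders, since the actual measured concentrations are not given in the text.

```python
def removal_rate(c0_mg_per_l: float, ce_mg_per_l: float) -> float:
    """Removal efficiency (%) from initial (c0) and equilibrium (ce) concentrations."""
    return (c0_mg_per_l - ce_mg_per_l) / c0_mg_per_l * 100.0

# Hypothetical Pb(II) concentrations, for illustration only
print(round(removal_rate(50.0, 0.8), 1))  # -> 98.4 (% removed)
```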
Conclusions
Using loess soil (LS), one of the most abundant soils in the world, as the raw material and acrylamide as the functional monomer, a novel low-cost inorganic mineral/polymer composite, the loess/polyacrylamide composite (LS/PAM), was prepared by in-situ polymerization and was characterized by infrared spectroscopy, thermogravimetric analysis and scanning electron microscopy. The results indicate that polyacrylamide was composited uniformly and successfully with loess, affording a novel low-cost loess-based polymer composite adsorbent.
Analysis of Mathematical Ability of High School Students based on Item Identification of National Examination Set
This research aimed to compare the mathematical abilities of high school students in Indonesia, which include mathematical connection ability, mathematical reasoning ability and mathematical communication ability, based on gender and school status in the 2016 National Examination. The research sample was 1870 high school students of the Natural Science Departments and 2000 high school students of the Social Science Departments, taken with random sampling. The research data were the scores of Jakarta high school students' work in the 2016 National Examination. The results showed that (1) the mathematical connection ability and mathematical reasoning ability of Natural Science department students from private high schools are higher than those from public high schools, and (2) the mathematical connection ability and mathematical reasoning ability of Social Science department students from private high schools are higher than those from public high schools.
I. INTRODUCTION
Mathematics is a science that cannot be separated from other sciences because mathematics is used to help solve problems in other disciplines and in everyday life. For example, determining the population growth of an area or the growth of bacteria requires the concepts of differential and integral calculus or of geometric series.
The concepts, principles, and procedures of mathematics conveyed in mathematical learning aim to develop five mathematical abilities: mathematical connection ability, mathematical communication ability, reasoning ability, mathematical representation ability and problem-solving ability [1]. Mathematical connection ability is students' ability to connect the knowledge they already have with the knowledge to be learned. It includes the relationships between topics within mathematics, between mathematical topics and topics in other disciplines, and between mathematics and real life.
Mathematical communication is one of the important aspects of mathematics learning. The process of mathematical communication can help build students' understanding through the representation of mathematical ideas in verbal and written form, so that students unwittingly build an understanding of the mathematical concepts they hold and are able to express and develop their ideas [1], [2], [3]. Indicators of mathematical communication ability are: 1) representing mathematical problems using pictures, tables and graphs, 2) communicating mathematical thinking verbally and in writing, 3) using mathematical language to represent ideas, 4) interpreting and evaluating mathematical ideas verbally and in writing, and 5) using terms, symbols, and structures to model mathematical situations or problems [1], [2], [4], [5].
Mathematical reasoning ability is the ability to reach a logical conclusion based on relevant facts and sources, applying transformations in a certain order to draw conclusions [6]. Mathematical reasoning is a dynamic conceptualization of students' mathematical power and a dynamic activity that involves a variety of modes of thinking [1]. Reasoning ability is a major component of thinking that involves forming generalizations and drawing valid conclusions about ideas and how those ideas are connected [7]. Indicators of mathematical reasoning ability are (1) making observations and identification processes using mathematical relationships, (2) drawing conclusions based on data trends, and (3) developing evidence from various mathematical arguments [8], [9], [10], [11], [12].
Almost every year the Education Assessment Center of the Ministry of Education and Culture conducts research related to the results of the National Examination (UN). The results of this research are consistently surprising: Indonesian students have not mastered mathematical material requiring Higher Order Thinking Skills (HOTS), which include application, analysis, synthesis and evaluation. That research only discusses the results of the UN and TIMSS for the data collected in that year. The results of the 2016 UN analysis were analyzed only by cognitive domain according to Bloom's taxonomy (comprehension, application, analysis and synthesis); they have not been analyzed in terms of mathematical connection ability, mathematical communication ability and mathematical reasoning ability, nor by gender and school status. Males and females have distinct thinking abilities: male students tend to use language skills in the left brain or use the right brain for spatial and mathematical skills, while females use both [13].
Gender is seen as one of the aspects that can influence a person's cognitive level. This is caused by psychobiosocial factors that influence gender differences [14]. These biological factors can produce differences in spatial ability, higher order thinking, and other cognitive aspects, thus giving rise to differences in mathematical learning achievement [15]. Males are seen as having a tendency toward spatial aspects, while females have a tendency toward verbal abilities [15].
Experimental research, development research, and classroom action research relating to mathematical connection, mathematical communication, mathematical reasoning and HOTS abilities have also been widely conducted in Indonesia, including Kramarski & Mevarech [16], who developed the mathematical reasoning ability and mathematical connections of high school students through problem-based learning; Naufal et al. [17], who conducted problem-based learning research to improve the mathematical reasoning abilities of junior high school students; Qohar [18], who conducted research related to the development of mathematical communication instruments; and Djamilah and Widjayanti [19], who conducted qualitative research on the mathematical problem-solving ability of prospective mathematics teachers: what it is and how to develop it. These studies, however, each measured only one ability in one group of subjects.
Mathematical learning continues to be carried out by every teacher who has attended training to improve the quality of learning. Therefore, from the learning outcomes achieved by teachers, the question arises of how the mathematical connection ability, mathematical communication ability and mathematical reasoning ability of Indonesian high school students appear, based on the identification of items in the 2016 National Examination.
II. LITERATURE REVIEW
Mathematical connection ability is the ability of students to connect ideas learned in mathematics with other contexts as well as with everyday life, so that students can better understand what they learn. Mathematical connection ability is developed with several indicators: (1) connecting mathematical concepts/rules, (2) connecting mathematical concepts/rules with other fields of study, (3) connecting mathematical concepts/rules with applications in real life, and (4) looking for relationships between various concept representations and procedures [4], [20], [21], [22].
Mathematical communication ability corresponds to the communication and representation process standards in the principles and standards for school mathematics, which help students develop mathematical literacy [23]. Mathematical communication ability is related to various elements: critical thinking, systems thinking, problem solving, analysis, and judgment [24]. Communication is central to mathematics, to its teaching and learning, and to the continuous professional development of mathematics teachers. Effective communication occurs in the classroom when it takes the truly critical aspects of student learning as its starting point [25].
Mathematical reasoning ability is one of the mathematical abilities used to reach a conclusion or solution to mathematical problems through a logical process and proof [11]. The process of mathematical reasoning is divided into three stages: classifying material into different classes, finding order within each class, and finding relationships between two or more classes [26]. It is also explained that to succeed in studying mathematics, students need to master these three stages of the reasoning process. In other words, reasoning ability is a very important skill for students.
III. METHOD
The independent variables are school status and gender, and the dependent variables are mathematical abilities, namely mathematical communication ability, mathematical connection ability, and mathematical reasoning ability. The research samples were 1870 high school students from the Natural Science Department, consisting of 863 male and 1007 female students, and 2000 high school students from the Social Science Department, consisting of 1064 male and 936 female students, selected by random sampling. The data on mathematical abilities, including mathematical communication, connection, and reasoning ability, are the scores of Jakarta high school students' work in the 2016 National Examination. The scores used were derived from the students' responses to the 2016 National Examination in Mathematics for the High School Natural Science Department and the High School Social Science Department.
Indicators of mathematical communication ability are: 1) representing mathematical problems using pictures, tables and graphs, 2) communicating mathematical thinking in verbal and written way, 3) using mathematical language to represent ideas, 4) interpreting and evaluating mathematical ideas in verbal and written way, and 5) using terms, symbols, and structures to model mathematical situations or problems. Indicators of mathematical reasoning ability are (1) making observations and identification processes using mathematical relationships, (2) drawing conclusions based on data trends, (3) developing evidence from various mathematical arguments [8], [9], [10], [11], [12]. Indicators of mathematical connection ability are (1) connecting concepts/rules of mathematics, (2) connecting mathematical concepts/rules with other fields of study, (3) connecting mathematical concepts/rules with applications in real life, and (4) looking for relationships between various concept representations and procedures [4], [20], [21], [22].
The procedures of this research were: (1) the 2016 National Examination questions were grouped according to the indicators of mathematical connection ability, mathematical communication ability and mathematical reasoning ability; (2) the mathematical ability of students was estimated with the Rasch model; and (3) the significance of differences based on school status (public vs. private schools) and gender was tested.
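The paper does not show the computation for step (3), so the following is only a minimal sketch of such a significance test, using SciPy's two-sample t-test on hypothetical ability scores (standing in for Rasch person measures). The group sizes, means and randomly generated scores are placeholders, not the study's actual National Examination data.

```python
import numpy as np
from scipy import stats

# Placeholder ability scores standing in for Rasch person measures;
# group sizes roughly mirror a private/public split of the sample.
rng = np.random.default_rng(seed=0)
private_scores = rng.normal(loc=0.3, scale=1.0, size=900)
public_scores = rng.normal(loc=0.0, scale=1.0, size=970)

# Welch's two-sample t-test for a difference in mean ability by school status
t_stat, p_value = stats.ttest_ind(private_scores, public_scores, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.5f}")
# A p-value below the chosen alpha (e.g., 0.05) would indicate a significant
# difference, as reported for connection and reasoning ability in this study.
```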
A. Mathematical Abilities of High School Students from Natural Science Department
The comparison of the average mathematical connection ability of natural science department students from public and private high schools yielded a significance value of 0.00000, so the mathematical connection ability of natural science students from private high schools is higher than that of students from public high schools. The comparison of the average mathematical connection ability of male and female students in the natural science department yielded a significance value of 0.59400, so the mathematical connection ability does not differ between male and female students.
The comparison of the average mathematical communication ability of natural science department students from public and private high schools yielded a significance value of 0.98900, so it is concluded that the mathematical communication ability in private and public high schools does not differ. The comparison of the average mathematical communication ability of male and female students in the natural science department yielded a significance value of 0.05700, so it is concluded that the mathematical communication ability of male and female students of the natural science department does not differ.
The comparison of the average mathematical reasoning ability of natural science department students from public and private high schools yielded a significance value of 0.00000, so it is concluded that the mathematical reasoning ability of students in private high schools is higher than that of students in public high schools. The comparison of the average mathematical reasoning ability of male and female students in the natural science department yielded a significance value of 0.74400, so it is concluded that the mathematical reasoning ability does not differ between male and female students.
B. Mathematical Abilities of High School Students from Natural Science Department in 2016 National Examination
The results showed that the mathematical communication ability of natural science department students from public and private high schools does not differ, while the mathematical connection ability and mathematical reasoning ability of natural science department students from private high schools are higher than those from public high schools.
In algebraic material, the difference in the average proportion of private and public high school students who demonstrated mathematical connection ability is only 6%; in geometry material it is 10%, in calculus material 9.5% and in trigonometric material 7.5%. Descriptively, the difference between private and public high school students with mathematical connection ability is at most 10%. This difference is very small, which is consistent with the inferential statistical test results. The difference in the average proportion of natural science students from private and public high schools on items measuring mathematical connection ability is smallest in algebraic material and largest in geometric material. Overall, only a small percentage of natural science high school students demonstrated mathematical connection ability.
In the statistical material, the average proportion of private high school students who correctly answered the questions measuring mathematical communication ability through representing mathematical problems with pictures, tables, and graphs is 58%, while that of public high school students is only 28%. In calculus material, the average proportion of public high school students who answered correctly is 28%, the same as that of private high school students. The difference between private and public natural science students on items measuring mathematical communication ability is smallest in calculus and largest in statistical material. Overall, natural science students from both public and private high schools demonstrated mathematical communication ability at an average below 50%.
The 2016 National Examination items that measured mathematical reasoning ability in this research covered only algebraic material. The average proportion of private high school students who correctly answered the questions measuring reasoning ability is 50.5%, versus 44.5% for public high school students. The difference between private and public natural science students on reasoning items is quite small (6%), which descriptively supports the inferential statistics indicating that the reasoning performance of private high school students was higher than that of public high school students.
The average proportion of male students who correctly answered the questions measuring reasoning ability is 47%, and that of female students is 48%. The difference between female and male students with mathematical reasoning ability is very small, only 1%, which is consistent with the inferential statistics for the reasoning ability of male and female high school students from the natural science department. On average, fewer than 50% of male and female students demonstrated mathematical reasoning ability on the national exam.
The results for the mathematical abilities of natural science high school students are shown for algebra, geometry, and calculus material. The average proportions of female and male students who correctly answered the questions measuring mathematical connection ability did not differ; the largest difference, 5.5%, was in trigonometric material. Descriptively, the difference between female and male students with mathematical connection ability is at most 5.5%. Both female and male students showed the most mathematical connection ability in algebraic material and the least in geometrical material. Overall, fewer than 50% of female or male natural science high school students demonstrated mathematical connection ability.
In statistical material, more male students than female students correctly answered the questions measuring this ability; the difference is 44%. In calculus material, 21% of female students correctly answered the questions measuring it. Male students were better at representing mathematical problems using pictures, tables, and graphs, while female students were better at communicating mathematical thinking in writing. Overall, fewer than 50% of female or male students mastered mathematical communication ability.
C. Mathematical Abilities of High School Students from Social Science Department
The comparison of the average mathematical connection ability of social science department students from public and private high schools yielded a significance value of 0.00000, so it is concluded that the mathematical connection ability in private high schools is higher than in public high schools. The comparison of the average mathematical connection ability of male and female students from the social science department yielded a significance value of 0.00000, so it is concluded that the mathematical connection ability of female students is higher than that of male students.
The comparison of the average mathematical communication ability of social science department students from public and private high schools yielded a significance value of 0.86800, so it is concluded that the mathematical communication ability in private and public high schools does not differ. The comparison of the average mathematical communication ability of male and female students from the social science department yielded a significance value of 0.78300, so it is concluded that the mathematical communication ability of male and female students does not differ.
The comparison of the average mathematical reasoning ability of social science department students from public and private high schools yielded a significance value of 0.00000, so it is concluded that the mathematical reasoning ability of private high school students is higher than that of public high school students.
The comparison of the average mathematical reasoning ability of male and female students from the social science department yielded a significance value of 0.00700, so it is concluded that the mathematical reasoning ability of female students is higher than that of male students.
D. Mathematical Abilities of High School Students from Social Science Department in 2016 National Examination
The results showed that the mathematical connection ability and mathematical reasoning ability of private high school students in the social science department are higher than those of public high school students.
The algebraic material that measures mathematical connection ability includes sequences, functions, and matrices. The probability material includes probability and the multiplication rule. The calculus material includes derivatives of algebraic functions, limits of algebraic functions, and integration of algebraic functions. In algebraic material, the difference in the proportion of private and public high school students with mathematical connection ability is 22%; in geometry material the difference is 16%, in calculus material 9% and in statistical material 10%. Across algebra, geometry, calculus and statistics, 12% more private high school students than public high school students demonstrated mathematical connection ability.
Overall, only a small percentage (below 50%) of social science students from private or public high schools demonstrated mathematical connection ability.
In statistical material, 11% of private high school students in the social science department correctly answered the statistics questions measuring communication ability, which is more than the proportion of public high school students.
Overall, only a small percentage (below 50%) of social science high school students demonstrated mathematical communication ability.
The 2016 National Examination items for the social science department that measured mathematical reasoning ability in this research covered only algebraic and calculus material. In algebraic material, 11% of private high school students correctly answered the questions measuring mathematical reasoning ability, which is more than the proportion of public high school students. Overall, fewer than 55% of social science high school students demonstrated mathematical reasoning ability.
The comparison of mathematical abilities by gender showed that the mathematical connection ability and mathematical reasoning ability of female students are greater than those of male students, while the mathematical communication ability of male and female students does not differ. In algebraic material, the average proportion of male students with mathematical connection ability is 38%; in calculus material it is 41%, in probability material 3% and in statistical material 4%. For female students, the proportion with mathematical connection ability is 44% in algebraic material, 44% in calculus, 44% in probability and 36% in statistics. Across algebra, calculus, probability and statistics, 12% more female students than male students demonstrated mathematical connection ability.
The difference in the average proportion of male and female students who master mathematical communication ability is very small, only 2%, and the difference for mathematical reasoning ability is 5%. In algebraic material, the average proportion of female students who correctly answered the questions measuring mathematical reasoning ability is 6% higher than that of male students; in calculus material it is 2.5% higher. Overall, fewer than 50% of female or male social science students master mathematical communication ability and mathematical reasoning ability.
V. CONCLUSION
The conclusions of this research are: (1) the mathematical connection ability and mathematical reasoning ability of natural science department students from private high schools are higher than those from public high schools, while the mathematical communication ability of natural science students from private and public high schools does not differ; and (2) the mathematical connection ability and mathematical reasoning ability of social science department students from private high schools are higher than those from public high schools, while the mathematical communication ability of social science students from private and public high schools does not differ.
Complex Biomineralization Pathways of the Belemnite Rostrum Cause Biased Paleotemperature Estimates
Paleotemperatures based on δ18O values derived from belemnites are usually "too cold" compared to other archives and paleoclimate models. This temperature bias represents a significant obstacle in paleoceanographic research. Here we show geochemical evidence that belemnite calcite fibers are composed of two distinct low-Mg calcite phases (CP1, CP2). Phase-specific in situ measurement of δ18O values revealed a systematic offset of up to 2‰ (~8 °C), showing a lead-lag signal between both phases in analyses spaced less than 25 µm apart and a total fluctuation of 3.9‰ (~16 °C) within a 2 cm × 2 cm portion of a Megateuthis (Middle Jurassic) rostrum. We explain this geochemical offset and the lead-lag signal for both phases by the complex biomineralization of the belemnite rostrum. The biologically controlled formation of CP1 approximates isotope fractionation conditions with ambient seawater and can be used for temperature calculation. In contrast, CP2 shows characteristic non-isotope equilibrium with ambient seawater due to its formation via an amorphous Ca-Mg carbonate precursor at high solid-to-liquid ratio, i.e., limited amounts of water were available during its transformation to calcite, thus suggesting lower apparent formation temperatures. CP2 occludes syn vivo the primary pore space left after formation of CP1. Our findings support paleobiological interpretations of belemnites as shelf-dwelling, pelagic predators and call for a reassessment of paleoceanographic reconstructions based on belemnite stable isotope data.
Introduction
Biogenic carbonates have been frequently used as paleoceanographic archives since the development of oxygen isotope analysis [1,2]. The most commonly used biogenic carbonate archives for Mesozoic sea surface temperatures (SST) are planktic foraminifera and mollusks [3].
Belemnite rostra (Figure 1B-E) were initially used in the standardization of carbonate δ18O analyses [1] and are considered to be resistant to diagenetic alteration due to their dense low-magnesium calcite composition [14]. However, isotopic variability between rostra from the same stratigraphic horizon and within a single rostrum [10] and the observed difference in temperatures calculated between the aragonitic phragmocone and calcitic rostrum of a single specimen [15] question the uncritical applicability of these proxy data for paleoceanographic reconstructions. Large, comparative datasets show conclusively that belemnite rostra deviate towards heavier δ18O values compared to other contemporaneous calcitic fauna [16,17]. Recent high-resolution petrographic investigation revealed two distinct calcite phases (CP1 and CP2) forming belemnite rostra [18][19][20][21]. The timing and mechanism of formation of both phases have important implications for paleoceanographic reconstructions based on bulk analyses. Here we present phase-specific, in situ SIMS δ18O values, as well as high-resolution elemental and Raman mappings. Based on these data, we propose a new biomineralization model for belemnite rostra and outline paleoceanographic and paleobiological implications.
(Figure caption fragment: counts of all four elements along the profile I-II; bluish colors correspond to lower X-ray intensity, warmer yellow-red colors to higher X-ray intensity; numbers in (B) refer to Figure 1; confidence bands are given as 1σ.)
Materials and Methods
We used a polished 2 cm × 2 cm section of a Megateuthis rostrum that shows the biphasic microstructure [19]. The specimen comes from the Middle Jurassic of southern Germany (Latitude: 49.4520/Longitude: 11.0767, Figure 1A). During deposition, the area was located at about 34° N in a warm, fully marine epicontinental sea, about 60 to 80 km from the coastline, with water depths of up to 70 m [22].
Electron probe microanalysis (EPMA), confocal Raman microscopy (CRM), backscattered electron microscopy (BSE), and secondary ion mass spectrometry (SIMS) were performed on the polished section to investigate differences between CP1 and CP2. EPMA was done on a 10-15-nm carbon-coated sample surface using a Cameca SXFiveFE (central microanalytical laboratories, Ruhr-Universität Bochum). For quantitative analysis, we used a defocused beam (5 µm, acceleration voltage 15 kV, probe current 10 nA), and qualitative X-ray mapping was done on a 10-nm-thick gold-coated surface using a focused beam (acceleration voltage 8 kV, probe current 400 nA). CRM was carried out using a WITec alpha 300 R confocal Raman microscope (Alfred Wegener Institut Helmholtz-Zentrum für Polar- und Meeresforschung, Section Marine BioGeoSciences) with an excitation wavelength of 488 nm, a spectrometer grating of 600/mm, and a 500-nm blaze. The Cameca IMS-1280 (WiscSIMS, Department of Geoscience, University of Wisconsin) produces high-precision (0.3‰, 2SD) analyses of δ18O in carbonates with a 10-µm pit diameter [23,24]. Analysis was done using a caesium ion beam with a standard-sample-standard bracketing technique, using two brackets of four analyses of UWC-3 (δ18O = −17.87‰, Vienna Pee Dee Belemnite, VPDB) [25]. Bulk isotope and thermogravimetric analysis (TGA) were performed on sample powders averaging areas adjacent to the microanalyses, drilled using an alumina carbide bit. TGA was done using a Mettler Toledo TGA/DSC 1 (Alfred Wegener Institut Helmholtz-Zentrum für Polar- und Meeresforschung, Section Marine BioGeoSciences) with a heating rate of 10 K/min. Bulk δ18O values were measured by gas source mass spectrometry (GSMS) using a Thermo Scientific Delta V Plus isotope ratio mass spectrometer (WiscSIMS, Department of Geoscience, University of Wisconsin) attached to a Thermo Gasbench II/CTC GC-PAL autosampler, with a precision of ±0.22‰ for δ18O.
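To make the bracketing scheme concrete, the following is a deliberately simplified sketch of a standard-sample-standard correction expressed in δ notation: the instrumental bias is estimated from the bracketing UWC-3 analyses and subtracted from the raw sample values. The function name, the additive treatment of the bias, and all raw numbers are illustrative assumptions; actual SIMS data reduction works on measured isotope ratios and involves additional corrections.

```python
import numpy as np

UWC3_TRUE = -17.87  # accepted d18O of the UWC-3 calcite standard (permil, VPDB), from the text

def bracket_correct(raw_samples, std_before, std_after, std_true=UWC3_TRUE):
    """Subtract the instrumental bias estimated from bracketing standard analyses."""
    bias = np.mean(np.concatenate([std_before, std_after])) - std_true
    return np.asarray(raw_samples) - bias

# Hypothetical raw values (permil) for illustration only
std_before = np.array([-16.90, -17.00, -16.80, -16.95])
std_after = np.array([-16.85, -17.05, -16.90, -16.90])
samples = np.array([0.6, 0.4, 0.9])
print(bracket_correct(samples, std_before, std_after))
```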
Results
Thermogravimetric analysis indicated a weight loss of 1%, similar to the water/organic content of speleothem calcite and less than that of modern Porites coral aragonite [26]. The two calcite phases, CP1 and CP2, had distinct isotope and trace element compositions. CRM identified both as calcite, and EPMA showed stoichiometric calcite end members (XCal = Ca/(Ca + Mg + Fe + Mn + Sr) = 0.99). CP1 comprised stacked and interconnected trigonal pyramids with the brightest fluorescence in CRM images (Figure 1E-1), and submicron-size, organic-rich grains. The highest concentrations of Mg, P, and S in the EPMA map were within CP1, supporting the interpretation of organic matter from CRM fluorescence [27,28]. SIMS δ18O values of CP1 averaged −0.5‰ (VPDB; N = 49), with a 2SD of 1.8‰ and a range from −2.1 to 0.8‰ (Figures 3 and 4).
On average, SIMS analyses of CP2 were 0.8‰ higher than CP1. Directly paired analyses measured within a distance of 25 µm showed CP2 with an average value about +1.0‰ higher than CP1 (number of pairs = 21, standard error of mean = 0.2, t-test for no difference p ≤ 0.001). When compared across rostral growth, variation in CP2 lagged that of CP1 by ~100 µm (Figure 3). Bulk analyses showed an average δ18O of +0.7 ± 0.2‰ (VPDB). The bulk δ18O value, within analytical uncertainty, intersected CP2 (+0.5‰).
(Figure 3 caption: At several points, δ18O of CP2 is lower than that of nearby CP1, most notably near 3250 µm; this crossover highlights the lag between the sinusoids in CP1 and CP2. Growth banding is more closely spaced toward the right of the sampling transect because of the geometry of growth rings relative to the cutting plane (Figure 1C and inset in top left corner). A single bulk value obtained by conventional acid-digestion gas-source mass spectrometry (GSMS) was sampled in a trench parallel to the SIMS transect for method comparison.)
Discussion
The analyzed rostrum of Megateuthis showed no signs of significant diagenetic alteration [19] based on established screening methods (e.g., [11]). It showed an intrinsic bluish luminescence and an intact fibrous microfabric, all indicative of well-preserved calcite ([19], Figure 5C,F). A few fractures and microstylolites were avoided during sampling.
SIMS Accuracy
Small differences in minor element chemistry did not significantly influence the accuracy of SIMS-measured δ18O values in carbonates [24]. Thermogravimetric analyses revealed a weight loss of 0.4% between 0 and 200 °C, attributable to water loss, while the weight loss between 200 and 600 °C was attributed to the combustion of organic matter [29].
There was a small difference in the amount of organic matter between the two calcite phases visualized by CRM (Figure 1E). Small differences in organic matter content within a speleothem [30], a Nautilus shell [23], fish otoliths [31], and the rostrum analyzed herein did not bias δ18O values measured in organic-rich or organic-lean domains.
Oxygen Isotope Fractionation between Calcite and Water: Definitions and Range
The oxygen isotope fractionation factor between calcite and the precipitating water is defined by the ratio of the moles of 18O to 16O in the calcite (cc) divided by the corresponding ratio in the precipitating water:

α_cc-H2O = (n18O/n16O)_cc / (n18O/n16O)_H2O

The isotope fractionation factor α_cc-H2O that is closest to isotope equilibrium conditions has been estimated from speleothem calcite grown at extremely low precipitation rates (see [32,33]). Kinetically driven oxygen isotope fractionation effects can be related to non-isotopically equilibrated dissolved inorganic carbonate (DIC) species and/or high precipitation rates (e.g., [5,[34][35][36]]). Both effects are known to result in lower apparent α_cc-H2O values compared to equilibrium conditions. In brief, these kinetic effects reflect isotopic disequilibrium conditions between the aqueous DIC species, e.g., induced by pH change or CO2 degassing, and/or during the uptake of the aqueous carbonate at the growing calcite surface (e.g., [5,35,37]). In this context, the temperature-dependent α_cc-H2O relationship of Kim and O'Neil [38] is most frequently used as the calibration equation; it in fact represents out-of-equilibrium conditions (apparent α_cc-H2O about 1.5‰ to 2‰ lower than equilibrium), but it fits well with the oxygen isotope fractionation behavior during most common calcite formation conditions at the surface of the Earth, approximating isotope fractionation conditions at low precipitation rates (see [5,32,39]). In contrast, higher apparent α_cc-H2O values compared to equilibrium conditions have been documented for calcite formed via amorphous Ca-Mg carbonate precursors, in particular at high solid-to-liquid ratio, i.e., when limited water is available during the transformation [7].
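For orientation, the sketch below shows how a formation temperature is commonly obtained from a calcite δ18O value with the Kim and O'Neil [38] calibration, 1000 ln α = 18.03·(10³/T) − 32.42 with T in kelvin, and an assumed seawater composition. The VPDB-to-VSMOW conversion factors and the example inputs are standard/illustrative assumptions rather than the paper's own workflow, but the CP1 average of −0.5‰ (VPDB) combined with Mesozoic seawater at −1.0‰ (SMOW) reproduces a temperature close to the ~12 °C reported in the following section.

```python
import numpy as np

def vpdb_to_vsmow(d18o_vpdb):
    """Convert a calcite d18O value from the VPDB to the VSMOW scale (Coplen factors)."""
    return 1.03091 * d18o_vpdb + 30.91

def temperature_kim_oneil(d18o_calcite_vpdb, d18o_water_vsmow=-1.0):
    """
    Formation temperature (deg C) from calcite d18O using the Kim & O'Neil calibration:
    1000 ln(alpha) = 18.03 * (1000 / T) - 32.42, with T in kelvin.
    The default water value is the ice-free Mesozoic seawater estimate (~ -1.0 permil SMOW).
    """
    d18o_cc_vsmow = vpdb_to_vsmow(d18o_calcite_vpdb)
    alpha = (1000.0 + d18o_cc_vsmow) / (1000.0 + d18o_water_vsmow)
    t_kelvin = 18.03 * 1000.0 / (1000.0 * np.log(alpha) + 32.42)
    return t_kelvin - 273.15

# Illustrative inputs: CP1 average (-0.5 permil VPDB) and a CP2-like value 1 permil heavier
print(round(temperature_kim_oneil(-0.5), 1))  # roughly 11-12 deg C
print(round(temperature_kim_oneil(+0.5), 1))  # apparently "colder" by roughly 4-5 deg C
```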
Oxygen Isotope Differences in the Two Calcite Phases
CP1 represents the phase formed under strict biological control [19] and was less affected by common kinetically driven oxygen isotope effects due to the presence of the carbonic anhydrase (CA) enzyme [40], whose presence is inferred from the occurrence of CA in modern mollusks, including cephalopods [41][42][43]. The formation temperature of CP1 calcite estimated from δ18O_cc_CP1 values according to Kim and O'Neil [38] is 12 ± 4 °C, which indicates a high scatter of temperatures derived from a single rostrum. The oxygen isotope distribution of secondary CP2 calcite, only about 25 µm away from CP1 calcite, exhibits significantly lower δ18O_cc_CP2 values (maximum delta: −2.5‰; Figure 4A), with deviations between CP1 and CP2 temperatures of up to ~8 °C, which cannot reasonably be explained by varying environmental formation temperatures. Accordingly, the local change in δ18O values is instead suggested to reflect the calcite formation pathway. The potential impact on the 10³ ln(α_cc-H2O) value is displayed by setting the calculated T_CP1 as reference (T_CP1 = T_CP2; Figure 4B). In the present case of biogenically induced formation, the higher apparent 10³ ln(α_cc-H2O) values of CP2 versus CP1 in Figure 4B cannot be explained by reaching isotope equilibrium conditions such as those documented in speleothems (see [33]). Rather, the apparent 10³ ln(α_cc-H2O) of CP2 hints towards the formation of calcite via an amorphous carbonate precursor and its transformation within a water-limited environment, with the final calcite still reflecting the preferential entrapment of 18O (versus 16O) during amorphous carbonate formation. Thus, CP2 is suggested to have precipitated out of isotope equilibrium with seawater, reflecting an isotope fractionation behavior typical of an amorphous pathway of calcite formation. In analogy to CP2, the oxygen isotope fractionation between carbonated hydroxyapatite (CHAP, synthesized between 6 and 80 °C [44]) and water indicates a temperature dependence of δ18O_carbonate close to the relationship of Daëron et al. [33]. This further supports the concept of CP2 formation developed above, as CHAP precipitates are known to form through an amorphous precursor stage.
Alternatively, a reduction of CA activity due to less exposed enzyme-hosting organic membranes or differing secretion rates could limit CO2-H2O system equilibration [43]. Vital effects in mollusks generally cause lower δ18O values and therefore cannot explain the observed offset ([45,46], but see [47]). Disequilibrium formation of CP2 and equilibrium secretion of CP1 provide a plausible explanation for the observed δ18O offset and represent a biomineralization chronology.
(Figure 4 caption, partial: temperatures (Table S1) are estimated by considering Mesozoic seawater to be δ18O_H2O (SMOW) ≈ −1.0‰ (equal to −30.95‰, VPDB) under ice-free conditions [48]. The black solid line is obtained from inorganic calcite precipitation experiments at low precipitation rates [38] and is used as a T_CP1 calibration line for CP1 calcite, representing a baseline for biogenically induced calcite. CP2 calcites are plotted vs. T_CP1 values, assuming identical formation temperatures for CP1 and CP2, to follow the shift of oxygen isotope fractionation from CP1 to CP2 while suppressing temperature effects (exemplarily indicated by the dashed line at about 6 °C). Dotted line: oxygen isotope equilibrium conditions, as proposed by Daëron et al. [33], from calcitic speleothems. Blue arrow: trend of increasing impact of vital effects on calcite precipitation (including precipitation rate and DIC disequilibrium effects, e.g., [5]). Red arrow: trend of increasing impact of calcite formed via amorphous Ca-Mg carbonate precursors at high solid-to-liquid ratio, i.e., water is limited during the transformation [7].)
Rostrum Biomineralization Model
Secretion of organic scaffolds from mantle cells controls the shape and growth rate of the rostrum (Figure 1B,C; [49]). In the rostrum, membranes with equidistant spacing lie parallel to the growth surface confining an extrapallial fluid reservoir (domain a, Figure 5E) that is compositionally derived from ambient seawater. Organic scaffolds between these membranes serve as sites for secretion and control the shape and crystallographic orientation of CP1 trigonal pyramids [19].
At first, CP1 started to form a filigree framework of organic-rich calcite trigonal pyramids. A second membrane-scaffold-membrane layer was constructed as the first layer of CP1 was secreted. Residual water in the cavity where the first layer of CP1 was secreted was likely altered from its original composition by removal of CO3²⁻ and Ca²⁺. With a lag of about two growth layers (~100 µm), CP1 was secreted in the second layer, contemporaneously with CP2 growing syn vivo in the remaining pore space within the first layer.
This model explains both the similarity in the magnitude of δ18O variation within CP1 and CP2 and their spatial lag (Figure 3). Oxygen isotope values of CP2 had higher δ18O (on average ~0.8‰) compared to those of CP1 but displayed a similar amount of variability and a sinusoidal pattern. A potential argument for an amorphous precursor forming CP2 is that less water is assumed to be present during transformation to calcite, which results in a larger fractionation (big delta) value. Given the dependence on the organic scaffold, CP1 formed under biological control in the presence of CA (sensu [49]), thus reflecting the isotope fractionation relationship of Kim and O'Neil [38]. Isopachous CP2 calcite crystals nucleated on the surface of the trigonal pyramids without organic scaffolds, but most likely through an amorphous precursor (Figures 1E and 5; [50]). Additional research at the nanometer scale could provide proof for the presence or absence of an alternating layered structure resulting from the decomposition of the amorphous precursor phase, as was recently demonstrated in [51,52]. Extra-crystalline organic matrix or residual organic matter in the calcifying fluid was trapped along the surfaces of the CP2 crystals (Figure 1E). Other potentially biologically induced biominerals are known from sepiid cuttlebones [53] and from intercameral deposits of some fossil cephalopods [54,55].
Similar findings of microstructural complexity were described for five Middle Jurassic to Late Cretaceous belemnite genera [18,31], which suggests that this specific type of rostrum microstructure and biomineralization pathway is universally present in belemnites. Interestingly, layers of organic membranes running parallel to the mineralized septa of the Sepia cuttlebone have a similar spatial arrangement and may help to better visualize the formation process of the belemnite rostrum microstructure ( Figure 5).
Figure 5 (caption excerpt): ... [19] of trigonal pyramids (CP1), as depicted in the model (E), with slightly variable distances between each layer. (E) Scheme of calcite phases in cross-section view showing the growth model: 1, two organic membranes form a delineated space; carbonic anhydrase is associated with these membranes; 2, CP1 in domain a precipitates with its trigonal pyramid morphology determined by the organic framework; 3, formation of an organic framework and isolating membrane for CP1 in domain b; 4, CP1 in domain b precipitates simultaneously with CP2 of domain a. The process repeats with continued organic matter scaffolding growth (see Supplementary Materials for additional information).
Paleoceanographic Implications
The complex intergrowth of CP1 and CP2 means bulk δ18O values likely bias reconstructions toward colder temperatures. The lowest δ18O values in CP1 (−2.7‰) might be the best representation of maximum SST because they produced the highest temperature estimates [31]. The homogenized bulk δ18O value for this sample was +1.2‰ higher than the average values of CP1 and biased temperature estimates toward colder temperatures by ~5 °C [38]. We suggest correction for this bias should be done with additional petrographic and in situ geochemical sampling and not by extrapolation from a single rostrum. We hypothesized that species-specific and intraspecific differences in the proportion of CP1 to CP2, as well as differences in the magnitude of the δ18O offset between CP1 and CP2, exist. There is limited indication of close-to-equilibrium formation of belemnite calcite based on the dual clumped isotope (∆47, ∆48) analysis of a single rostrum [57]. However, non-equilibrium δ18O values would preclude a straightforward reconstruction of past δ18O seawater composition based on clumped isotope temperatures [58].
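To make the size of this bias concrete, the Kim and O'Neil [38] calcite-water fractionation can be inverted for temperature. The sketch below (Python) is only an illustration: the fractionation equation and the VPDB-to-VSMOW conversion are standard published relations, the seawater value of −1.0‰ follows the assumption stated above, but the CP1 δ18O value of −0.7‰ is a hypothetical input chosen so that the example reproduces a ~12 °C CP1 estimate; it is not a measured value from this study.

```python
import math

def kim_oneil_temperature(d18O_calcite_vpdb, d18O_water_vsmow):
    """Formation temperature (deg C) from calcite d18O using the
    Kim & O'Neil (1997) calcite-water fractionation:
        1000 ln(alpha) = 18.03 * (10^3 / T) - 32.42   (T in kelvin).
    d18O of calcite is given on the VPDB scale, d18O of water on VSMOW."""
    # Convert calcite d18O from VPDB to the VSMOW scale (Coplen conversion).
    d18O_calcite_vsmow = 1.03091 * d18O_calcite_vpdb + 30.91
    # Fractionation factor alpha between calcite and water.
    alpha = (1000.0 + d18O_calcite_vsmow) / (1000.0 + d18O_water_vsmow)
    t_kelvin = 18.03e3 / (1000.0 * math.log(alpha) + 32.42)
    return t_kelvin - 273.15

d18O_sw = -1.0          # permil VSMOW, ice-free Mesozoic seawater as assumed in the text

# Illustrative values only (not measured data from this study):
d18O_cp1_avg = -0.7             # hypothetical mean CP1 value giving ~12 degC
d18O_bulk = d18O_cp1_avg + 1.2  # bulk value 1.2 permil heavier, as reported above

t_cp1 = kim_oneil_temperature(d18O_cp1_avg, d18O_sw)
t_bulk = kim_oneil_temperature(d18O_bulk, d18O_sw)
print(f"CP1-only estimate: {t_cp1:.1f} degC")
print(f"Bulk (CP1+CP2):    {t_bulk:.1f} degC  (~{t_cp1 - t_bulk:.1f} degC colder)")
```

With these inputs the bulk value, 1.2‰ heavier, comes out roughly 5 °C colder, matching the magnitude of the bias discussed above.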
Belemnite Paleobiology
Paleobiological evidence of belemnite lifestyle (e.g., comparative morphology, taphonomy) suggests they were mobile, active swimmers, and our new data fit into this framework [59]. Sinusoidal δ18O variation can be explained by vertical migration and/or seasonal temperature variability [13]. An interpretation of δ18O as solely seasonal, however, is unlikely because it would suggest a long (>5 years) lifespan at fixed depth for Megateuthis, while modern, shallow-water coleoids, which are ecologically comparable to belemnites, have lifespans of 1-2 years [60]. Vertical migration through a stratified water column multiple times could produce the measured δ18O pattern (Figure 3; [23]), but such gradients in temperature or δ18O of the seawater would have had to be present in the top 200 m of the water column, owing to mechanical limitations of belemnite shells [59].
Conclusions
We demonstrated that well-preserved belemnite rostra have heterogeneous δ18O values on a scale of 25 µm. Belemnite rostra record a complex, phase-specific oxygen isotope pattern, suggesting that the first calcite phase (CP1) reflects formation temperature while a second calcite phase (CP2) formed via an amorphous precursor, which induces a typical enrichment of 18O vs. 16O in the precipitating carbonate. Bulk data, comprising both calcite phases, likely bias paleotemperature estimates by ~2-4 °C towards colder temperatures. Therefore, reliable paleotemperature estimates can only be reconstructed from in situ sampled CP1. The observed offset in δ18O values of CP1 and CP2 can best be explained by biomineralization of CP1 from an extrapallial fluid derived from seawater, most likely in the presence of carbonic anhydrase, and by formation of CP2 through the transformation of an amorphous calcium carbonate precursor under a high solid/liquid ratio. The lead-lag in the formation of both phases implies that the rostra are initially less dense than pure calcite but increase in density during growth of the belemnite rostrum. Future petrographic and in situ geochemical sampling will provide constraints on the apparent bias in bulk records and can therefore help to refine our understanding of paleoceanography throughout the Jurassic and Cretaceous, as well as of belemnite paleobiology.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/ 10.3390/min11121406/s1: Figure S1: Thermogravimetric analyses of different carbonate materials. Figure S2: Classification of SIMS pits and associated δ18Obelemnite. Figure S3: Detailed precipitation model for calcitic belemnite rostra. Table S1: Summary statistics for the pit category data and related oxygen isotope ratios.
Malaria patterns across altitudinal zones of Mount Elgon following intensified control and prevention programs in Uganda
Background: Malaria remains a major tropical vector-borne disease of immense public health concern owing to its debilitating effects in sub-Saharan Africa. Over the past 30 years, the high altitude areas in Eastern Africa have been reported to experience increased cases of malaria. Governments, including that of the Republic of Uganda, have responded by intensifying programs that can potentially minimize malaria transmission while reducing associated fatalities. However, malaria patterns following these intensified control and prevention interventions in the changing climate remain widely unexplored in East African highland regions. This study thus analyzed malaria patterns across altitudinal zones of Mount Elgon, Uganda. Methods: Time-series data on malaria cases (2011–2017) from five level III local health centers across three altitudinal zones (low, mid and high altitude) were utilized. Inverse Distance Weighted (IDW) interpolation regression and the Mann-Kendall trend test were used to analyze malaria patterns. Vegetation attributes from the three altitudinal zones were analyzed using the Normalized Difference Vegetation Index (NDVI), and the Autoregressive Integrated Moving Average (ARIMA) model was used to project malaria patterns for a 7-year period. Results: Malaria across the three zones declined over the study period. The hotspots for malaria were highly variable over time in all three zones. Rainfall played a significant role in influencing malaria burdens across the three zones. Vegetation had a significant influence on malaria in the higher altitudes. Meanwhile, in the lower altitude, human population had a significant positive correlation with malaria cases. Conclusions: Despite the observed decline in malaria cases across the three altitudinal zones, the high altitude zone became a malaria hotspot as cases variably occurred in the zone. Rainfall played the biggest role in malaria trends. Human population appeared to influence malaria incidences in the low altitude areas, partly due to population concentration in this zone. Malaria control interventions ought to be strengthened and strategically designed to achieve no malaria cases across all the altitudinal zones. Integration of climate information within malaria interventions can also strengthen malaria eradication strategies in such differentiated altitudinal zones.
Background
According to the World Health Organization (WHO), there were an estimated 228 million cases of malaria worldwide in 2018, with 405,000 deaths [1]. Children under 5 years accounted for the largest share (67%) of deaths [1]. Of the total number of cases globally, Africa was home to 93% of malaria cases and 94% of malaria deaths [1]. In 2013, it was estimated that a total of 437,000 African children died before their fifth birthday due to malaria, and the disease caused an estimated 453,000 under-five deaths globally in the same year [2]. Through bites of infected mosquitoes, disease-causing parasites are transmitted to humans [1,3]. Transmission dynamics are shaped by the environmental conditions, the lifespan of the vector and the host's immunity [1]. Climatic conditions influence the lifespan of the vector, while the host's immunity reduces the risk that malaria infection progresses to malaria disease in the human body [1]. Several interventions have been implemented over the last decade and have led to an observed decline in the malaria burden in sub-Saharan Africa. These interventions have aimed at avoiding mosquito bites through the use of repellents or insecticide-treated bed nets, and at preventing malaria with specific medicines. However, malaria still remains a major public health threat in tropical and subtropical regions [4,5].
Malaria occurrence has traditionally been observed in lowland areas, bogs and, generally, the plains of tropical regions [6]. Comparative analyses have shown such patterns in Africa, Latin America and the Caribbean, as well as in South East Asia [7][8][9][10]. Meanwhile, afromontane areas characterized by unique biota [11], which had hitherto been regarded as malaria-free zones owing to the altitudinal effect, have seen changing malaria incidence, with some areas experiencing a rise while others decline [12,13]. Malaria cases have lately been observed to be on the rise in afromontane ecotones within sub-Saharan Africa, such as the Rwenzori highlands of south-western Uganda [14,15]. Similar patterns have been experienced in the neighboring highlands of Butare (Rwanda) as well as in the Mount Kilimanjaro area (Tanzania) [16,17]. These patterns have increased the cost of malaria interventions [16,18]. Such trends have been attributed to climate change, which is creating ambient conditions for transmission within the highland altitudinal belts [18].
Malaria in Uganda has been endemic in the savannah areas of northern and eastern Uganda especially in Apac district, followed by Tororo district [19]. All these areas are within 1100 m altitude. However, highland areas especially Elgon region had earlier been reported to experience a surge in malaria cases despite continued intensified control and prevention interventions by both government, private sector and development partners [14,16,20]. These interventions have aimed at reducing malaria infections, reduce morbidity and prevent mortality attributable to malaria [21]. Control programs like the Uganda National Malaria Control Program (UNMCP) were developed based on the global Roll Back Malaria partnership, United Nations Millennium Development Goal and the 2000 Abuja Declaration [17,21]. Implementation of the UNMCP plan is aimed at controlling malaria to reduce its burden on the human population in Uganda, ensure universal access to malaria prevention and treatment, and minimize mortality rate for children under 5 years of age. These strategies have involved integrated vector management, effective diagnosis and treatment, prevention of malaria in pregnancy, and attention to malaria epidemics [22]. Despite all these interventions, Uganda still ranks among the six countries that contribute more than half of the global malaria cases [1]. This is partly because of the climate which allows stable, year round malaria transmission with relatively little seasonal variability in most areas [17]. Within the country, malaria is highly endemic in up to 95% of the country's area, where 90% of the population of 40 million live [22]. Despite inadequate information on the type and distribution of malaria parasites, the malaria species that are mainly reported in Uganda include P. falciparum, P. vivax, P. malariae, and P. ovale [23,24]. P. falciparum is responsible for more than three quarters of the cases in Uganda [23]. It is estimated that other species account for < 5% of cases, with a few percent of infections due to mixed species [24].
Climate has been pointed out as a key risk factor for spatial-temporal patterns of malaria, especially in the highland areas [17]. Studies [19,25] on malaria patterns in different mountainous areas have been undertaken but only a few [26,27] have focused on the patterns of malaria within different altitudinal zones (ecotones). Yet ecotones are characterized with varying environmental conditions that can influence mosquito biology and malaria patterns [28,29]. These studies have not documented patterns of malaria following intensified control and prevention interventions in mountainous areas such as Elgon region. This study thus analysed malaria patterns across altitudinal zones of Mount Elgon, one of the areas in Uganda where intensive malaria control and prevention programs have been implemented.
Study area
The study was undertaken in the Mount Elgon highland region within Kween District, located between 01°25′ N and 34°31′ E (Fig. 1). Kween District borders the districts of Nakapiripirit to the north, Amudat to the northeast, Bukwo to the east, Kapchorwa to the west and Bulambuli to the northwest [28]. In the south, it borders the Republic of Kenya, and it is located on the northern slopes of Mount Elgon at an average altitude of about 1900 m (6200 feet) above sea level [28]. Its administrative units range from sub-county to parish and village [29]. The area is characterized by high and well-distributed rainfall (averaging 1200 mm/year) and has two seasons, a rainy season (March-September) and a dry season (October-April) [30]. It has cool temperatures, averaging 17 °C [31]. The human population of the district rose over the last three censuses (1991, 2002 and 2012) from 37,300 and 67,200 to 103,300, respectively [32,33]. The population consists mainly of subsistence farmers who cultivate a range of crops, including maize, beans, bananas, wheat, barley and cowpeas, and also rear some livestock [28]. The district has health centers of levels IV, III and II, numbering 1, 9 and 13, respectively [28]. These health centres are supported by village health teams, also known as health service providers, constituting Health Center I, which are mainly responsible for mobilizing communities to access health services.
Study design
This study employed a cross-sectional study design utilizing past malaria records from health center IIIs across the three altitude zones; higher (above 7150 ft), middle (between 4317 and 7150 ft) and lower altitude (below 4317 ft) of Mount Elgon [34]. Data on climate variables was obtained for the last 7 years (2011 to 2017). Data on confirmed malaria cases (using both microscopic and rapid diagnostic kits) from 2011 to 2017 was considered for this study and were computed to average number of true malaria cases per 1000 for each of the altitudinal zones. The rates of malaria cases were computed per month for each year. Climate data was obtained in retrospect for the 7 year period (2011 to 2017). Rainfall and temperature parameters (maximum and minimum) were the key climate parameters considered in this study as they play key roles in influencing breeding and survival of mosquitoes [35]. Analysis for the spatial temporal patterns was computed at parish level across the three altitudinal zones in the study area. Confounding factors like human population and vegetation were checked for their effect on the patterns of malaria incidences. Data on human population was obtained from the 2014 Uganda Bureau of Statistics records and computations were made for the values in different altitude zones using the human population growth rate. It was assumed that these human population values were a proxy to the actual population trends. Normalized Difference Vegetation Index (NDVI) was computed from high-resolution satellite images. Forecasts for malaria were made using ARIMA models for a period of 7 years (84 months) from the year 2017 [36]. Rates of malaria and time in terms of months were included in the model to understand the trends.
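For readers who want to reproduce this kind of projection, the sketch below shows how an 84-month ARIMA forecast of monthly malaria rates could be set up in Python with statsmodels. The series, the (1, 1, 1) model order and all variable names are illustrative assumptions; the paper does not report the fitted order, and its analyses were done in R rather than Python.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical monthly malaria rates per 1000 for one altitudinal zone (2011-2017):
# a gently declining trend with noise, standing in for the health-centre records.
index = pd.date_range("2011-01-01", "2017-12-01", freq="MS")
rng = np.random.default_rng(0)
rates = pd.Series(100 - 0.5 * np.arange(len(index)) + rng.normal(0, 5, len(index)),
                  index=index)

model = ARIMA(rates, order=(1, 1, 1))     # (p, d, q) chosen only for illustration
fitted = model.fit()
forecast = fitted.get_forecast(steps=84)  # 7 years = 84 months beyond 2017
print(forecast.predicted_mean.head())
print(forecast.conf_int().head())         # uncertainty band around the projection
```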
Data collection
In this study, the health centers from which data were collected were purposively selected based on their capacity to confirm and report malaria cases, as well as the volume of their malaria records. Accordingly, the most suitable health centres for data collection were the health center IIIs, owing to their capacity to conduct malaria tests (both microscopy and Rapid Diagnostic Test kits). The cases selected for this study underwent at least one of these tests but not both. These health centres were also fairly well distributed across the different altitude zones, divided into higher (above 7150 ft), middle (between 4317 and 7150 ft) and lower altitudes (below 4317 ft) in the district. Data were then collected from four out of nine Health Center IIIs in the four sub-counties of Benet, Binyiny, Kwanyiy and Ngenge. Data on the number of malaria cases for the past 7 years were obtained from the Health Center III records. The data collected included malaria occurrence, parish of residence, tests performed, and a range of sociodemographic characteristics (gender, age and location) of each patient over the 7-year period. Data for climate variables (temperature and rainfall) were obtained from the Uganda National Meteorological Authority.
Data analysis
Malaria patterns were determined using descriptive statistics of means and standard deviations (SD). These were compared across the different altitudinal zones: low, mid and high altitude. Mean malaria cases per month per 1000 were computed over the years (2011 to 2017) for each of the three altitude zones (higher, middle and lower). Secondly, in order to depict the spatial-temporal variation of malaria cases, an Inverse Distance Weighted (IDW) interpolation regression [37] at a distance of 15 km was undertaken. IDW is a deterministic regression procedure that estimates values at prediction points (V) using the following equation [38]:

V = Σ_i (V_i / d_i^p) / Σ_i (1 / d_i^p)

where d is the distance between prediction and measurement points, V_i is the measured parameter value, and p is a power parameter. The advantage of IDW is that it uses non-Euclidean "path distances" for d. These path distances are calculated using an algorithm that accounts for the malaria cases from one cell to the next [39]. Trend analysis was performed to determine the variation of the patterns of malaria [40]. The average monthly numbers of malaria cases per 1000 were calculated for the full time-series (January 2009-December 2015). These were plotted to show temporal patterns in malaria and climate variables. The time series of malaria incidence was decomposed using seasonal-trend decomposition based on locally weighted regression (STL) to show the seasonal pattern, the temporal trend and the residual variability. The time series data, the seasonal component, the trend component and the remainder component are denoted by Y_t, S_t, T_t and R_t, respectively, for month t = 1 to N, and:

Y_t = S_t + T_t + R_t

The parameter setting "periodic" was used for the seasonal extraction, and all other parameters were left at their defaults. In the study, logarithmic transformations were applied to the time series data [40].
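A minimal sketch of the IDW estimator, in Python, is given below. It uses plain Euclidean distances between points rather than the non-Euclidean path distances described above, and the station coordinates and case rates are invented for illustration.

```python
import numpy as np

def idw_estimate(xy_known, values, xy_pred, power=2.0):
    """Inverse Distance Weighted estimate at prediction points.
    V(x0) = sum(V_i / d_i^p) / sum(1 / d_i^p), where d_i is the distance from
    the prediction point to measurement point i and p is the power parameter."""
    xy_known = np.asarray(xy_known, dtype=float)
    values = np.asarray(values, dtype=float)
    estimates = []
    for x0 in np.atleast_2d(xy_pred):
        d = np.linalg.norm(xy_known - x0, axis=1)
        if np.any(d == 0):                 # prediction point coincides with a station
            estimates.append(values[d == 0][0])
            continue
        w = 1.0 / d ** power
        estimates.append(np.sum(w * values) / np.sum(w))
    return np.array(estimates)

# Hypothetical health-centre coordinates (km) and mean malaria cases per 1000.
stations = [(0, 0), (10, 2), (4, 9), (12, 12)]
cases = [84, 67, 49, 55]
grid_points = [(5, 5), (11, 8)]
print(idw_estimate(stations, cases, grid_points, power=2.0))
```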
The Mann-Kendall trend test [41] was used to detect the actual trends of the climate parameters and malaria. Relational analysis for malaria, temperature (maximum and minimum), human population, NDVI and rainfall was done using Kendall correlation. Model fitting was then performed to detect actual trends and relationships among the variables. All analyses were done in R Studio version 3.6.3 [42].
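A bare-bones version of these two tests is sketched below in Python; it omits the tie and autocorrelation corrections that full implementations apply, and the example series are invented.

```python
import numpy as np
from scipy.stats import norm, kendalltau

def mann_kendall(series):
    """Basic Mann-Kendall trend test (no tie or autocorrelation correction).
    Returns the S statistic, the normal-approximation Z score and a two-sided p-value."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    return s, z, p

# Hypothetical monthly malaria rates and rainfall for one zone.
malaria = [84, 80, 76, 70, 66, 58, 52, 49]
rainfall = [60, 75, 90, 120, 130, 110, 95, 70]
print("Mann-Kendall (malaria):", mann_kendall(malaria))
print("Kendall correlation (malaria vs rainfall):", kendalltau(malaria, rainfall))
```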
The Normalized Difference Vegetation Index was derived from the collected images by considering the red and near-infrared wavelength bands of the respective images [43]. Prior to analysis of the Landsat 7 ETM+ images, a Landsat toolkit was used to remove scan-line errors in all the images. NDVI was then computed by the following formula, according to [44]:

NDVI = (NIR − Red) / (NIR + Red)

where NIR and Red are the near-infrared and red band values, respectively.
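The per-pixel computation is straightforward; the sketch below (Python/NumPy) applies it to toy arrays standing in for the red and near-infrared bands (bands 3 and 4 of Landsat 7 ETM+). The band values are invented.

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel from the
    near-infrared and red band arrays (reflectance or DN values)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    denom = nir + red
    # Guard against division by zero on empty/masked pixels.
    return np.where(denom == 0, 0.0, (nir - red) / np.where(denom == 0, 1.0, denom))

# Tiny illustrative arrays; real inputs would be full Landsat 7 ETM+ scenes.
nir_band = [[0.45, 0.50], [0.30, 0.60]]
red_band = [[0.10, 0.12], [0.20, 0.08]]
print(ndvi(nir_band, red_band))  # values approaching +1 indicate dense green vegetation
```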
Trends in malaria across altitudinal zones
Time series decomposition of malaria patterns revealed the existence of seasonality of malaria across the years (2011-2017) in all the altitude zones (Fig. 3). The number of malaria cases declined from 2011 to the lowest numbers towards 2017 (Fig. 3). There was a statistically significant difference (p < 0.05) in the number of malaria cases per 1000 individuals across the three altitude zones (lower, mid and higher altitude) in each of the years (2011-2017) except 2013 and 2017 (Fig. 2). The cases of malaria per 1000 in the high, mid and lower altitude areas were 49 (SD = 40), 67 (SD = 55) and 84 (SD = 96), respectively. Malaria cases showed a normal curve-shaped trend within each year in the three areas (lower, middle and higher altitude areas) (Fig. 3).
Spatial patterns of malaria cases across altitudinal zones
Spatial variation of malaria (Fig. 4) revealed higher number of cases of malaria in the lower altitude areas of Kween district. Higher and mid-altitude areas of the district had relatively lower number of malaria cases (49 ± 40 and 67 ± 55 respectively), while lower altitude areas had the highest (84 ± 96) number of malaria cases. The trends however declined from 2011 to 2017 in all the altitudinal zones (Fig. 4).
Regarding the spatial variation of malaria cases with NDVI, malaria cases increased as NDVI increased (Fig. 4).
Biophysical and demographic factors interaction effect on malaria cases across altitudinal zones
We examined the relationship between the biophysical factors (rainfall and vegetation), the demographic factor (population) and malaria cases across the altitudinal zones. Throughout the district, there was a good fit (R² > 50%) of the model for the relationship between malaria and the variables (human population, NDVI, rainfall, maximum and minimum temperature) (Fig. 5). Malaria trends revealed a significantly positive correlation with human population (p = 0.011) (Fig. 5a) and NDVI (p = 0.00069). Further, an increase in vegetation cover in all the altitudinal zones caused a positive increase in malaria cases in each of these zones. Meanwhile, an increase in human population caused an increase in malaria cases (Fig. 5a).
Altitudinally, the higher altitude areas had a positive correlation between human population and malaria. However, this correlation was not significant (Fig. 6a). The correlation between malaria cases and NDVI was significantly negative (at p < 0.05). In the mid altitude areas, malaria had a negative correlation with NDVI and human population (Fig. 6b). This negative correlation was however not significant. Lastly, in the low altitude areas, there was a negative correlation between malaria, human population and NDVI (Fig. 6b). The correlation between malaria and human population was significant. The R-squared values were higher (over 50%) reflecting a good fit of the model for this data (Fig. 7).
Similarly, we analyzed the effect of climate factors (rainfall, maximum and minimum temperature) on malaria patterns across the altitudinal zones. In the higher altitude areas, malaria had a significant negative correlation with maximum and minimum temperature. Maximum temperature had a stronger negative value than minimum temperature (Table 1). Meanwhile, malaria cases recorded within these high altitude areas were significantly positively related with rainfall (Fig. 8a).
In the mid altitude areas, malaria had a significantly positive correlation with rainfall ( Fig. 8b; Table 1). However, there was very low and insignificant correlation between malaria, maximum and minimum temperature (Fig. 8b). Meanwhile, in the lower altitude areas, there was a significantly higher positive correlation between malaria and rainfall ( Fig. 8c; Table 1). Meanwhile, malaria had a significantly negative correlation with minimum and maximum temperature (Fig. 8c). The correlation between malaria and maximum temperature was strongly negative compared to that of malaria and minimum temperature (Fig. 8c). There was a good fit of the model reflecting effects of climate variables (rainfall, maximum and minimum temperature) (Fig. 9). Malaria trends in relation to the climate variables reflected similar trends as the correlation analysis except that of malaria and maximum temperature.
Forecasting of malaria patterns across altitudinal zones
Forecasts of malaria for all three altitudinal zones indicated that malaria cases would continue to decrease over the following 7 (seven) years if conditions remained constant and/or intervention efforts were strengthened (Fig. 10). However, relaxation of the malaria control interventions would allow a surge in malaria cases (Fig. 10). Also, across the three zones, malaria appears likely to be sustained in the high and mid altitude zones, while the lower altitude zones would experience a decline in malaria cases.
Discussions
There was a declining number of malaria cases across all the altitudinal zones (high, mid and low altitudes) during the study period. This can be attributed to the intensified malaria control and prevention interventions within the study area, and also throughout the whole of Uganda. Intervention efforts by the Ministry of Health in malaria prevention and control, through increasing access to health services including basic diagnostics and the provision of insecticide-treated mosquito nets, could have reduced malaria transmission within the study area. Similar declining trends had been pointed out in other studies conducted throughout the country between 2009 and 2014 [45]. Conversely, this pattern contrasts with the results of other studies undertaken earlier in highland areas of Kenya that showed malaria incidence to increase over time [46]. This could be because of differences in the intensity of the control interventions and other environmental factors that influenced transmission dynamics of malaria. Although a surge in malaria cases was expected in the strongest El Niño years of 2015 and 2016, it was not detected in this study. One of the reasons could be the continuing efforts to prevent malaria transmission in Uganda. Recent studies have highlighted that the distribution of insecticide-treated mosquito nets significantly reduces malaria cases in Uganda [47]. However, this result could also have been masked by under-reporting of malaria cases. Malaria patterns revealed a normal curve trend of malaria with the highest peak in the middle (June-August) of each of the 7 years (Fig. 3). This corresponded to the trends in temperature and precipitation. However, the months of January and December had the least number of malaria cases. This can be linked to the low precipitation amounts during this period, limiting the availability of water for the breeding of mosquitoes. This trend is similar to the results of studies undertaken in highland areas like Mount Kenya where malaria was prevalent during dry seasons [46,48]. This trend can be linked to the availability of conditions favorable for the growth and development of the mosquitoes that transmit malaria parasites. Increases in temperature and the availability of water sources favor mosquito breeding and transmission of malaria parasites [49]. Spatially, the hotspot of malaria varied over the 7-year period, dominating the lowland areas of the district (Fig. 4). The highland areas had a lower number of malaria cases compared to the lowland areas. There was a significant negative correlation between malaria patterns in the lower belt and temperature. Also, there was a significantly positive correlation between malaria and rainfall within the lower belt. In the mid altitude areas, malaria had a significant positive correlation with rainfall. Meanwhile, in the high altitude areas, malaria had a significantly negative correlation with maximum and minimum temperature. Also, malaria had a significant positive correlation with rainfall.
Although previous studies noted the critical role of increasing temperature in causing a surge in malaria within sub-Saharan Africa, recent studies have shown that temperature can at times significantly reduce the vectorial capacity of the mosquitoes [50,51]. Ambient temperature conditions enhance transmission by influencing vector and parasite life cycles [27]. However, an increase or decrease in temperature beyond optimal ranges can undermine the life cycle of mosquitoes, limiting their transmission of malaria parasites [52]. Studies have highlighted the biologically amplifying nature of temperature effects on mosquitoes [53][54][55]. This study showed that the mean temperatures within the three altitudes varied. The difference in the contribution of maximum temperature to malaria cases between the different altitudes can be attributed to the differences in prevailing temperatures in the three zones. Since the lower and mid altitude areas are relatively warm and the district (Kween) has only one rainfall season, rainfall was probably the main limiting factor for malaria vector development in the high, mid and low altitude zones. Hence the onset of rainfall increased the media for vector growth and development. While rainfall creates the media, ambient temperatures favor the development and survival rates of both vectors and parasites. The trends of malaria in the three zones (high, mid and low) of Kween District can be attributed to these conditions. The highly seasonal rainfall within the study area could have limited the growth and development of mosquitoes. The pronounced malaria cases in the lower altitude zones compared to the higher altitude zones can be linked to the environmental conditions favorable for mosquito growth and development. The alternating trends can be attributed to temperature and rainfall, as these can either favor or discourage optimal growth and development of mosquitoes [56]. In the low altitude areas, where temperature had a significant relationship with malaria, it has been noted that temperature can determine the length of time mosquitoes explore food resources while transmitting malaria [57]; this could be the case in the low altitude areas here as well. This study was limited by the lack of data on the actual malaria parasites and mosquito vectors, which would have complemented the understanding of the life cycle of the parasites. Future studies ought to incorporate these aspects.
Regarding the effects of vegetation cover and human population on malaria, in the high altitude areas malaria had a significant negative correlation with NDVI. Similarly, in the lower altitude areas, malaria had a significant negative correlation with human population and NDVI. This implies that vegetation increase significantly influenced malaria cases in the high altitude areas. An increase in vegetation enhances the habitat range for mosquitoes. This has been highlighted in studies that note vegetation cover to influence the dynamics of growth and development of mosquito vectors [58]. Also, an increase in human population over time in these areas could have caused a decline in the vegetation cover that would facilitate transmission of malaria by mosquitoes. Over time, the extension of health services alongside the growing human population could also have contributed to the decline in malaria with increasing human population. Although the results regarding these two aspects are interesting, the study was limited by the inability to disaggregate the data into shorter time ranges, owing to the unreliability of the data. Future studies ought to further explore vegetation and population dynamics at a monthly level and their effects on malaria incidence. This will generate information on how human activities influence the transmission and incidence of such infectious diseases.
Forecasts of malaria patterns revealed a continued decline of malaria cases given conditions remain constant. However, the number of malaria cases may significantly explode if temperature and rainfall increase. This implies that interventions at this point ought to be intensified. There is also a window of opportunity for eradication of malaria in the event that the existing control and prevention interventions are intensified. This thus calls for more studies to inform modification of the interventions.
One of the limitations of this study was the use of data from ministry departments in Uganda. There is therefore no proof of validity of this data as some of it was not complete. However, it gives a general picture of what can be done so as to curtail malaria infections within high altitude areas.
Conclusions
Malaria patterns decreased over the study period in all the zones. Also, the malaria belt was highly variable across the altitudinal zones, with the higher altitude areas becoming hotspots in some periods. Rainfall played a significant role in the distribution of malaria across the three zones of high, mid and lower altitude. This calls for the strengthening of malaria control interventions irrespective of altitudinal ranges. The government of Uganda ought to design strategic malaria interventions to cater for the different altitude zones. Stakeholders involved in malaria control and eradication efforts ought to design location-specific interventions for malaria, factoring in other factors like rainfall that had earlier received less attention in influencing malaria transmission. More large-scale studies should be undertaken in an attempt to understand how climate and other environmental factors influence similar variations of malaria (including different species of malaria) in different altitudinal zones. These studies should ensure validity of data by undertaking prospective studies within the population.
Genesis of Precious Metal Mineralization in Intrusions of Ultramafic, Alkaline Rocks and Carbonatites in the North of the Siberian Platform
The gold and platinum-group element (PGE) mineralization of the Guli and Kresty intrusions was formed in the process of polyphase magmatism of the central type during Permian and Triassic time. It is suggested that native osmium and iridium crystal nuclei were formed in the mantle during earlier high-temperature events of magma generation in the mantle substratum, in the interval of 765–545 Ma, and were brought by meimechite melts to the area of development of the magmatic bodies. The pulsating magmatism of the later phases assisted in particle enlargement. Native gold crystallized at temperatures of 415–200 °C at the hydrothermal-metasomatic stages of the meimechite, melilite, foidolite and carbonatite magmatism. The association of precious metal minerals with oily, resinous and asphaltene bitumen testifies to the genetic relation of the mineralization to carbonaceous metasomatism. It is suggested that a carbonaceous gold-platinoid ore formation, genetically associated with the parental formation of ultramafic, alkaline rocks and carbonatites, be distinguished.
Introduction
Phlogopite, magnetite, chromite, fluorite, apatite, nepheline, diamond, titanium, uranium, rare and rare earth element deposits are known in the Maimecha-Kotuy Province, associated with intrusions of ultramafic, alkaline rocks and carbonatites. The wide range of minerals associated with this unique magmatism was supplemented in the 1980s with alluvial gold and platinum-group metal (PGM) deposits within the contour of the Guli Intrusion outcrops. To date, no significant hardrock deposits have been discovered in the region. The hardrock gold and platinum mineralization in the Kresty intrusion (a satellite of the Guli volcanic-plutonic complex) remains a subject of discussion due to the poor reproducibility of the analytical results [1]. The presence of platinum-group elements (PGE) and gold in the rocks of the Kresty massif has been established by finds of Au and PGM, confirmed by wavelength-dispersive spectroscopy (WDS) analyses. No commercial prospecting works for precious metals accompanied by standardized analytics have been carried out within the intrusions, although scientific publications already contain material for geological justification of such works [1][2][3][4][5][6][7][8][9][10][11][12]. Classifications of noble metal deposits do not contain any data on the formation systematics of gold-platinum deposits in the intrusions of ultramafic, alkaline rocks and carbonatites [2,13]. In this study, we present evidence concerning the hardrock Au-PGE mineralization in this type of intrusion.

L.S. Yegorov [14] distinguishes seven phases in the history of formation of the intrusion: olivinite-dunites and pyroxenites (1st), melilitolites (2nd), alkaline ultramafites and gabbroids (3rd), foidolites (4th), nepheline and alkaline syenites (5th), phoscorites (6th) and carbonatites (7th). Photographs of the main rock varieties are provided (Figure 2). The rock composition of the mass formation phases and the mineral composition of the rocks are given in Table 1. Table 1. Mineral composition of the rocks of the Maimecha-Kotuy alkaline-ultramafic magmatic complex [14].
Rocks — Main Minerals — Secondary and Accessory Minerals
Ultramafic rocks (Phase 1) — dunites, olivinites and ore olivinites, pegmatoid olivinites, ore pyroxenites (kosvites), porphyroid trachitoid olivine pyroxenites: olivine (Fa 6.2-8.8%, in pegmatoids up to 9-11%), monoclinic pyroxene (augite-diopside), chromite (titanium-ferrichromite), titanium magnetite.

The age of the main magmatism phase is estimated at 225 Ma, with the K-Ar dates fluctuating within 375-75 Ma [14]. Table 2 shows data on the age of the rocks of the Guli intrusion obtained by researchers in different years. We performed analytical studies to determine the distribution of precious metals in the rocks of the Guli intrusion (Table 3; Figure 3). Concentrations in excess of the average crustal values were revealed for: Pt in meimechites, dunites, chromitites, magnetitites, melilitolites and nepheline pegmatites; Pd in all analyzed rock varieties of the intrusion; Rh in the rocks of Phases 1, 2 and 3; Ir in the rocks of Phases 1, 2, 3 and 5; Ru in mylonites of Phase 1, rocks of Phase 3 and skarned melilitolites (Phase 2); Os in chromitites (Phase 1); Au in magmatites and metasomatites of all phases except for serpentinite; and Ag in the rocks of all phases except for the rocks of Phase 3, magnetitites and serpentinites. The concentrations of the elements are commonly lower than in chondrite (C-1), except for silver in almost all rocks and palladium in phlogopite porphyrite (Phase 3), agpaitic nepheline syenite (Phase 5) and mylonite of dunite (Table 3).
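The comparison against C-1 chondrite described above amounts to dividing each measured concentration by the corresponding reference value. The short Python sketch below shows this normalization; the C-1 reference abundances listed are commonly cited approximate values inserted only for illustration and should be replaced by the compilation actually used in the study, and the sample analysis is invented.

```python
# Hedged sketch: chondrite (C-1) normalization of noble-metal concentrations.
# The reference values below (ppb) are approximate, commonly cited C-1 abundances
# and are assumptions for illustration only - substitute the compilation used.
C1_PPB = {"Os": 490, "Ir": 455, "Ru": 710, "Rh": 130,
          "Pt": 1010, "Pd": 550, "Au": 140, "Ag": 200}

def chondrite_normalize(sample_ppb):
    """Divide each measured concentration (ppb) by the C-1 reference value."""
    return {el: sample_ppb[el] / C1_PPB[el] for el in sample_ppb if el in C1_PPB}

# Invented example analysis (ppb) of a hypothetical dunite sample.
dunite = {"Os": 12, "Ir": 9, "Ru": 20, "Rh": 4, "Pt": 150, "Pd": 60, "Au": 25, "Ag": 300}
for element, ratio in chondrite_normalize(dunite).items():
    flag = "above" if ratio > 1 else "below"
    print(f"{element}: {ratio:.3f} x C-1 ({flag} chondrite)")
```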
The maximum content of PGE (mainly due to palladium) in the rocks of the differentiated complex was identified in phlogopite porphyrites (1.9 ppm), agpaitic nepheline syenites (1.76 ppm) and dynamometamorphites of dunites and peridotites (3.1 ppm). The highest gold grades were recorded in melilitolites and their skarned varieties, magnetitolites and dolomite carbonatites.
Thus, igneous rocks of Phases 1, 2, 3, 5 and 7, and the associated alkaline and mafic metasomatites, take part in the formation of geochemical anomalies of precious metals. Dunite, among the chromite aggregates, also contains particles of Ru-Os-Ir composition in an aggregate with chalcopyrite and Ru-Os-Ir sulfide (Figure 4a). Dunite olivine contains aggregates of djerfisherite (K6(Fe,Cu,Ni)25S26Cl) with talnakhite and of chalcopyrite with bornite (Figure 4b).
Alluvial Mineral Associations of the Guli Intrusion
Minerals of the precious-metal complex, discovered in placers within the Guli intrusion, are represented by native osmium and gold (Figures 5 and 6). Isoferroplatinum (Pt3Fe) and palladic ferroplatinum (Pt,Fe) are subordinate and form pockets in osmium aggregates. Refractory PGM sulfoarsenides (erlichmanite OsS2, laurite RuS2, tolovkite IrSbS and irarsite (Ir,Ru,Rh,Pt)AsS) are found in the form of fine accretions in native osmium aggregates.
The compositions of the platinum metal minerals are given in Figure 7. According to the mining data, in the placer of the Gule River, about 10% of the platinum pan sample consisted of euhedral native osmium monocrystals, the size of the latter being over 2.0 mm. Findings of precious metal minerals in bedrock occurrences are extremely rare. Isoferroplatinum and palladic ferroplatinum form microscopic pockets in native osmium aggregates. Refractory PGM sulfoarsenides (erlichmanite, laurite, tolovkite and irarsite) are found in the form of fine accretions in native osmium crystals. Iron, nickel and copper sulfides are found with them. The paragenetic interrelations between the noble metal minerals are illustrated in Figure 8. Native gold was studied in gold-bearing bedrock crushed samples and prospecting pan samples along the Selingde, Vetvistaya, Ingaringda, Vostochnaya, Poiskovaya and Gule Rivers. Native gold particle sizes predominate in the less than 2 mm fraction. About 5% of the metal was recorded in some cases in the 1-2 cm fraction. The roundness of the particles increases with their size. Variability in the color of the particles from white to various intensities of yellow and red is typical. The majority of particles have a gold-silver composition corresponding to three chemical types of the mineral (Au wt. %: 61.1-68.6, 76.1-83.6 and 91.0-98.5). Particles of the last two fineness classes are the most widespread. Some compositions of the mineral are characterized by a notable Cu content. Composition points within the diagram are grouped along a line parallel to the solvus isotherm of the Au-Ag-Cu system for a temperature below 400 °C (Figure 9). Several determinations of the mineral composition correspond to cuproaurite. Emulsive, parallel-plate and lattice-like cuproaurite aggregates were observed in gold from samples 833, 750 and in electrum (650‰, 636‰ and 575‰). Sometimes cuproaurite forms a continuous margin around electrum. In this case, worm-like ingrowths of electrum are observed in cuproaurite, and lattice-like aggregates of cuproaurite are observed in the inner zone of the electrum (Figure 8d). Iron, zinc, lead, arsenic, antimony, tellurium and mercury were measured in gold minerals, with concentrations typically ranging from 0.05 wt. % to tenths of a percent. Figure 9. Composition of placer gold from the placer of the Gule River (green triangles) and bedrock gold particles from olivine rocks (red squares) [11].
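Fineness and the compositional grouping quoted above can be computed directly from the EPMA data. The short Python sketch below shows the standard fineness formula (1000 × Au/(Au + Ag)) and an illustrative grouping whose boundaries are arbitrary midpoints between the quoted ranges, not thresholds taken from this study; the example analyses are invented.

```python
def gold_fineness(au_wt, ag_wt):
    """Standard fineness of native gold in permil: 1000 * Au / (Au + Ag),
    using EPMA weight-percent values of gold and silver."""
    return 1000.0 * au_wt / (au_wt + ag_wt)

def composition_group(au_wt):
    """Rough grouping by Au wt.% following the three ranges quoted in the text
    (61.1-68.6, 76.1-83.6 and 91.0-98.5 wt.% Au); the boundaries used here are
    arbitrary midpoints chosen only for illustration."""
    if au_wt < 72:
        return "low-Au (electrum-like)"
    if au_wt < 87:
        return "intermediate"
    return "high-Au"

# Hypothetical EPMA analyses (wt.%): (Au, Ag)
for au, ag in [(63.0, 36.5), (80.2, 19.1), (95.4, 4.2)]:
    print(f"Au={au:5.1f}  fineness={gold_fineness(au, ag):6.0f} permil"
          f"  group={composition_group(au)}")
```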
These minerals are the main rock-forming minerals of the corresponding rocks of the polyphase intrusion. Paragenetic associations of the host minerals of the platinum metals and gold with magmatic and entrapped hydrothermal rock-forming minerals allow inferring their formation and repeated recrystallization due to the polyphase magmatichydrothermal processes of the massif formation. The most reliably established parageneses of the host minerals and entrapped minerals allow making the following conclusions about the time of crystallization and transformation of mineral aggregates of platinum metals (Tables 1 and 4).
1. The association: native osmium + forsterite + chromite, judging by the presence of forsterite and chromite in it, was formed during the first phase of magmatism in the course of crystallization of olivine rocks and pyroxenites.
2. The association: native ir-osmium ± hyalosiderite ± aegirine-diopside ± aegirine-augite ± phlogopite corresponds to the fourth phase, the formation of foidolites, in terms of the set of barren minerals.
3. The association: native osmium ± isoferroplatinum ± ir-osmium ± irarsite + aegirine ± ilmenite ± alkali feldspars ± erlichmanite ± chalcopyrite corresponds to the fifth phase of magmatism, the formation of nepheline and alkaline syenite bodies.

Thus, the mineral assemblage found in the stream sediments corresponds to three phases of magmatism (Table 1). Forsterite and chromite are typical minerals of ultramafic rocks. Diopside, Fe-rich olivine, micas, ilmenite, magnetite and titanite are typomorphic for mafic and alkaline rocks. We think that aegirine, aegirine-augite and arfvedsonite were formed as a result of alkaline metasomatism, through the impact on ultramafite minerals of emanations of the alkaline melts that are parental to the ijolite-carbonatite rock association. Fluids assisted in the mobilization of the platinum-group elements and their accumulation in the apical parts of the ultramafic complexes.
Intergrowths of rock-forming minerals with native gold found in the concentrate are represented by the following groups:
4. Native Au + aegirine + nepheline (a typical association in the rocks of the fifth intrusive phase).
5. Native Au ± melanite ± rutile ± chlorite ± kaolinite (mineral associations in the rocks of the seventh phase of the intrusion and low-temperature metasomatites).
Therefore, entrapped minerals of all magmatism phases, except for the rock-forming minerals of the second phase, are present in native gold. Dark-colored minerals strongly predominate as inclusions in native gold. Mineral associations of nepheline rocks and carbonatites, which play a predominant role in the formation of coarse gold, are most widely developed. Typical metasomatic minerals (chlorite, kaolinite, garnets, chemically pure magnetite and feldspars) are present in the form of inclusions in all observed mineral associations. Formation of coarse native gold evidently occurred at the stage of metasomatic transformation of the magmatic rocks.
Noble Metal Mineralization of the Kresty Intrusion
The Kresty massif is located 54 km southwest from the Guli Pluton. The intrusion has an ellipsoidal shape. Its northern part is overlapped by Quaternary sediments of the Yenisei-Khatanga piedmont depression (Figure 1b).
The central part is represented by dunites, wehrlites, clinopyroxenites and their ore varieties, containing perovskite and Ti-magnetite about 30-40% in volume. In the western and eastern peripheral parts of the massif, melilite rocks have developed ( Figure 10). Rock photographs under a microscope are provided ( Figure 11). Dike bodies of alkaline picrites, trachydolerites, alkaline microsyenites, microgranosyenites, calcite carbonatites and nepheline-melilite lamprophyres are distributed within the intrusion and the hosting effusive stratum. The hosting stratum is represented by predominantly melanephelinites (augitites), as well as subordinated lava flows of meliliteand leucite-containing varieties of melanephelinites and their clastic lavas.
Injection melilitolites-ultramafites, recrystallized olivinites and, frequently, melilite-bearing and monticellite skarns and skarned peridotites are formed due to the formation of melilite bodies in the hosting ultramafites. Anenburg and Mavrogenes [23] and Giebel et al. [24] recommend using the term antiskarns for the skarn-like assemblages formed by fluids from alkaline rocks and carbonatites. There is essentially always a fragmentary autoreaction skarn mineralization in melilitolites. Aggregates of micro-granular inclusions of monticellite, andradite, diopside, wollastonite, vesuvianite, phlogopite, pectolite, rankinite and larnite are confined to fractures and melilitolite grain boundaries. Inclusions of arsenopyrite, syngenetic iron, nickel, copper and lead sulfides, native gold and platinum-group minerals are found in lower quantities together with the skarn-associated minerals.
Secondary gold, platinum and palladium halos were identified on the basis of the results of prospecting geochemical work in the fields of development of fenites, fenitized rocks and skarned ultramafites. Gold halos were identified in the field of development of monticellites. The correlation exists only between the platinum and palladium concentrations.
Platinum minerals, represented by native platinum (Pt > 80%), ferriferous platinum and isoferroplatinum of cubic symmetry and anisotropic tetraferroplatinum, have been identified in crushed samples of the rocks of the Kresty intrusion. The shape of the grains is xenomorphic, with no straight boundaries. They form aggregates with perovskite, Ti-magnetite and barren rock-forming minerals. Inclusions of graphite, chalcopyrite and pyrrhotite are observed in some large grains of platinum minerals. The composition of the ferroplatinum is as follows (wt. %): Fe 11.56-11.66; Ni 0.35. During the EPMA study of the composition of ore minerals, the presence of platinum-group elements was noted in titanomagnetite (Pt), picroilmenite (Pt), magnetite (Pt), pyrrhotite (Ir), galena (Ir, Pt, Rh, Os) and djerfisherite (Ir, Pt). PGE in oxides and sulfides are probably associated with submicroscopic inclusions of platinum-group minerals, because their distribution in these minerals is extremely irregular.
In the rocks of the Kresty intrusion, native gold occurs more frequently than other minerals of noble metals. Gold particles are commonly located in aggregates of postmagmatic silicates and in perovskite fractures, not associated with sulfides. Aggregates with sulfides are not typical for native gold, but single inclusions of gold particles in pyrite and pyrrhotite are noted. In areas where native gold particles occur, brown carbonaceous matter and graphite are noted. During EPMA analysis of Os-Ir-Rt minerals and of pyrite and chalcocite grains, the presence of gold, probably associated with submicroscopic inclusions of native gold in them, was detected. According to the chemical composition, the following mineral varieties are identified: medium-fineness (81.9-83.6%) with copper impurity (2.96-4.64 wt. %); medium-fineness (83.6-86.2%) with insignificant mercury impurity (0.2 wt. %) and without copper impurity; extremely high-fineness (96.3-99.6%) with mercury impurity (0.13-0.88 wt. %); extremely high-fineness (97.3-98.4%) without mercury but with copper impurity (0.1-0.7 wt. %); low-fineness (65.6%) without copper and mercury impurity; and low-fineness (76.4%) with a mercury content of up to 0.4 wt. %. It is worth noting that the precious-metal mineralization is confined to spot-fracture segregations of bitumen (light, oily, resinous and asphaltene), in which there are micron-sized flakes of graphite.
Ore-Generating Magmas
It is assumed that magmas born in the mantle served as the supplier of ore substance. Komatiite-meimechite and high-calcium melilite magmas and their differentiates took part in the formation of intrusions in the Maimecha-Kotuy region. Generation of such magmas occurs at different depths, at varying degrees of melting of the mantle substratum and under different fluid flow conditions. According to Sobolev et al. [25][26][27], the melt from which meimechites were formed represents a primitive magma of alkaline-komatiite composition, similar in its thermodynamic parameters to its Archaean analog but with a higher content of titanium and alkalis. Its formation is possible by partial melting of garnet peridotite. The melt separated from the restite at a depth of 230-300 km and a temperature of 1640 °C as a result of mantle diapirism. Its primitive composition is assumed on the basis of the composition of melt inclusions in olivine from meimechite of the Guli volcanopluton (in wt. %): SiO2 40.81; TiO2 2.96; Al2O3 3.92; FeO 12.98; MgO 28.29; CaO 6.98; Na2O 1.23; K2O 1.26; and P2O5 0.20. The primary meimechite melt was probably rich in CO2 (5.8 wt. %) and H2O (1.8 wt. %) under depth conditions. The high content of alkalis in the melts entrapped by olivine testifies to their systematic removal from the rocks at later magmatic and postmagmatic stages. The evolution of such a magma leads to the formation of ultramafic rocks, mafic rocks of increased alkalinity and feldspathoid syenites.
The high-calcium melilite magma cannot be a differentiate of the meimechite melt. Its generation is possible at deeper levels of the mantle than that of the meimechite melt. The melt microinclusions in minerals of the melilite rocks correspond in composition to the approximate composition of a similar alkaline magma (in wt. %): SiO2 36.5; TiO2 12.6; Al2O3 11.1; FeO 6.7; MgO 3.8; CaO 15.0; Na2O + K2O 9.2; P2O5 1.5; and CO2 3.6 [10,[28][29][30][31]. It was found that during perovskite, melilite and monticellite crystallization (1280-1160 °C) the melt underwent repeated separation into silicate and carbonate fluids in hypabyssal magmatic chambers. The latter liquated repeatedly in the temperature range of 1200-800-600 °C with the formation of alkaline-sulfate, alkaline-phosphate, alkaline-fluoride and alkaline-chloride salt-solution melts [1,29]. They are commonly mixed, and their original compositions are preserved only in the case of quick eruption and hardening.
Conditions of Formation and Age of the Precious Metal Mineralization
The mineralogical studies show that the Os-Ir minerals crystallized in the form of minor particles and were entrapped by olivine and chromian spinel of the meimechite melt. It is assumed that the residual refractory PGM in the depleted mantle substratum were present as very fine particles of metallic alloys. Formation of a magma chamber by the more primitive magma was accompanied by the growth of refractory PGM particles up to the formation of nuggets. Meanwhile, interstitial solutions circulating in the intergranular medium assisted in the growth of the osmium and iridium particles. The lower temperature limit for generation of the metallic alloy saturated with PGEs from the meimechite magma is at least 1070 °C [32].
The studies of the 187Os-188Os isotope system [7][8][9] showed that the reference age of the ruthenium-iridium-osmium mineralization in the Central block of the Guli Massif is in the interval of 545-615 Ma, and a more ancient reference dating (745-760 Ma) is typical for its southwestern fragment. Our data, based on the Sm-Nd reconstruction of the reference age for the Kresty Massif and taking into account the probable variation in such characteristics for a series of alkaline intrusions of the province, support these data (Figure 12).
The melilite and monticellite rocks of this unit, which are most similar to the primary mantle source, are characterized by TDM values in the range of 580-700 Ma [22]. This time interval corresponds quite well to the age of formation of the lithospheric mantle in the Paleo-Asian Ocean and allows for the interaction of such a substratum with the material of the Siberian superplume [10,[33][34][35]. The significant variety of rock compositions in the intrusions under study (Table 1, Figure 2) testifies to heterogeneous and high degrees of oxidation of mantle fluids in the zone of magma generation, as indicated in [32,33,35,36].
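For reference, a depleted-mantle Sm-Nd model age of the kind quoted above is conventionally computed from the measured 143Nd/144Nd and 147Sm/144Nd ratios of the rock. The expression below is the standard form of this calculation; the depleted-mantle parameters shown are illustrative DePaolo-type assumptions and not necessarily the values used in [22].

```latex
% Standard depleted-mantle (DM) Sm-Nd model age.
% DM parameters below are illustrative assumptions, not necessarily those of [22].
T_{\mathrm{DM}} = \frac{1}{\lambda_{147}}
\ln\!\left[ 1 +
\frac{\bigl(^{143}\mathrm{Nd}/^{144}\mathrm{Nd}\bigr)_{\mathrm{sample}} - \bigl(^{143}\mathrm{Nd}/^{144}\mathrm{Nd}\bigr)_{\mathrm{DM}}}
     {\bigl(^{147}\mathrm{Sm}/^{144}\mathrm{Nd}\bigr)_{\mathrm{sample}} - \bigl(^{147}\mathrm{Sm}/^{144}\mathrm{Nd}\bigr)_{\mathrm{DM}}} \right],
\qquad
\lambda_{147} \approx 6.54\times10^{-12}\,\mathrm{yr}^{-1},\;
\bigl(^{143}\mathrm{Nd}/^{144}\mathrm{Nd}\bigr)_{\mathrm{DM}} \approx 0.513151,\;
\bigl(^{147}\mathrm{Sm}/^{144}\mathrm{Nd}\bigr)_{\mathrm{DM}} \approx 0.2137
```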
The observed variations in the model ages based on the Os isotope system testify to the heterogeneity of the intrusion blocks and of the centers of magma generation of the Guli pluton. Meanwhile, significant heterogeneity in chemical composition is noted within the grains of the Os-Ir-Ru minerals. It is probably associated with the duration of their formation, i.e., additional growth and recrystallization of minor grains into larger ones at alternating values of fO2. The age of formation of the Guli intrusion, estimated at 251 ± 2 Ma [1,17,37], probably means that very fine individuals of the Ru-Ir-Os mineralization, formed during significantly earlier episodes of magma generation in the mantle substratum in the interval of 765-545 Ma, were removed by the intruding melts. Studies of silicate inclusions in PGM are of great importance in interpreting the conditions of their formation [38][39][40]. Nixon et al. [41] concluded that the PGE mineralization in lode and placer deposits associated with the Tulameen complex (British Columbia) was formed from a silicate magmatic melt during chromite deposition. They believe that the silicate inclusions in PGM nuggets (clinopyroxene, magnesian phlogopite, biotite, hornblende, plagioclase, sericite, chlorite and epidote) were formed during greenschist-facies metamorphism of the ore-bearing rocks. Johan [39], based on the associations of minerals in inclusions in PGM from the placers of Nizhny Tagil (Middle Urals), came to the conclusion that the ore mineralization was formed in two stages, at high and low pressures and at temperatures from 1100 to 700 °C. Peck et al. [40], based on the study of mineral inclusions in PGM, concluded that the Os-Ir-Ru alloys from the placers of western Tasmania are spatially related to mafic-ultramafic complexes. The PGM are confined to the dunites. Moreover, the PGM crystallized before the magmatic melts appeared in crustal magma chambers.
Experimental studies [42] on the behavior of noble metal nanoparticles and Fe-Ti and PGE oxides in silicate melts showed that during slow melt cooling dispersed PGM particles are enlarged. Crystallization of Fe and Cr oxides causes formation of a redox gradient in the silicate melt in oxide-rich zones and the PGM crystallization in these areas [43]. The presence of entrapped minerals, such as augite, aegirine, magnetite and ilmenite in osmium nuggets indicates a locally non-uniform redox environment for their growth. Markl et al. [43] showed that oxygen fugacity is controlled by the potassium/sodium content in the fluid. In our case, heterogeneity of the redox environment of osmium crystallization is confirmed by the presence of the potassium mineral phlogopite-biotite and sodium mineral aegirine in it. The presence of entrapped minerals (augite-diopside, phlogopite, aegirine, magnetite and ilmenite) in nugget Os-Ir-Ru mineral formations testifies to the long-term process of their growth. The presence of inclusions of rock-forming minerals from the 4th and 5th magmatic phases in the overwhelming majority of osmium grains, in addition to the 1st phase minerals, indicates the long-term re-crystallization of Os-Ir-Ru minerals in the process of both differentiation of the basic ultramafic magma and at the metasomatism stage. This is confirmed by the fact that the areas of concentration of the ruthenium-iridium-osmium mineralization are located in contact zones of magmatites of phases heterogeneous in time.
The role of other platinoids in the ore mineral formation of the 1st phase is sharply subordinate. The thermal properties of the Pt + Pd + Rh and Ag + Au associations lead to their segregation in the lower-temperature field of magmatogenic-hydrothermal melts. The degree of affinity for iron, as well as for hydrogen, oxygen and sulfur, determines the accumulation of the Pt + Pd + Rh triad in later differentiates of the magmatic melts. Increased platinum concentrations were noted in magnetite-bearing pyroxenites, peridotites, magnetitolites and magnetite-melilite antiskarns. Palladium is concentrated in anomalous amounts in nepheline rocks and carbonatites. He et al. [44] concluded that sulfides enriched in PGE are converted to sulfate in carbonate melts of the mantle, with the release of PGE into the carbonate melt. In our case, the carbonatites contain elevated concentrations of palladium, gold and silver (Table 2). Platinum, palladium and rhodium partially enter the osmium alloys and are concentrated in sulfides, and isoferroplatinum forms under favorable conditions. Palladium and platinum minerals are found in insignificant quantities in aggregates with, and as inclusions in, osmium. The minor amounts of platinum and palladium sulfides, arsenides, antimonides and tellurides are explained by the sharply subordinate quantities of sulfur, arsenic, antimony and tellurium in the ore-forming system. Anenburg et al. [37] suggest that PGM nanoparticles can be transported by silicate melts from their places of origin without concentration by sulfide fluids.
Gold and silver in anomalous concentrations are found in meimechites, dunites, titanium-magnetite peridotites and pyroxenites, magnetitolites, melilite rocks, ijolites, urtites and carbonatites. It is worth noting that, in this case, the rocks contain sulfides only in very minor quantities. The inclusions of rock-forming minerals of all magmatism phases in native gold indicate the participation of the meimechite, melilite, foidite and carbonatite melts in the gold ore process over a wide range of mineral deposition temperatures. Enlargement of native gold particles and continued deposition of gold and silver minerals occurred at temperatures of 415-200 °C at the metasomatic stage [1,11,12]. This is confirmed by the presence of inclusions of minerals of metasomatic parageneses (rutile, magnetite, diopside, garnets, chlorite, etc.) in the gold-bearing minerals. The upper temperature limit of the postmagmatic ore formation was defined on the basis of the temperature of formation of the stable cuproaurite phase. The lower boundary of gold-bearing mineral deposition is less definite and corresponds to the temperature of hydrothermal metasomatism with the formation of gold parageneses with kaolinite. The variety of native gold in terms of fineness classes and the presence of electrum, küstelite and cuproaurite are associated with the heterogeneity of crystallization conditions at shallow depths at the front of mixing of reduced and oxidized fluids. The high proportion (more than 60%) of large native gold particles in the placers testifies to the predominance of coarse mineral particles in the primary ores, formed from solutions highly saturated with gold that persisted for a long time in the ore-forming system.
Conclusions
We presented new data on the relationship between PGM formation and the multiphase alkaline-ultrabasic and melilitolite-carbonatite magmatism of the Maimecha-Kotuy province. Silicate inclusions were found in native gold and Os-Ir-Ru minerals from placers within the Guli intrusion. The mineral associations of these inclusions correspond to parageneses of ultramafites, foidolites, alkaline gabbroids, syenites, melilitolites and foskorite-carbonatite derivatives. This can testify to the direct participation of differentiates of the komatiite-meimechite, melilitite and carbonatite magmas in the formation of the Au-PGM mineralization. The precious metal mineralization in the magmatic rocks is accumulated in oily-resinous-asphaltene bitumen with a spot-fracture distribution.
Therefore, the massifs of alkaline-ultramafic rocks and carbonatites of the Maimecha-Kotuy Province are promising targets for the discovery of localized hard-rock precious-metal mineralization genetically associated with the primary meimechite and high-calcium alkaline (melilitite) magmas and their differentiates.
The close association of native gold and platinum-group minerals with carbonaceous segregations in the rocks testifies to a wide participation of hydrocarbons in the transport, deposition and accumulation of gold and PGE. This allowed us to identify a carbonaceous-gold-platinum ore formation related to the ultramafic, alkaline and carbonatite magmatism of the central type, broadening current ideas on the genesis of precious metals and the prospecting area [2,13,45].
Construct validity of a dietary protein assessment questionnaire to explore college students’ knowledge and attitudes towards dietary protein
Introduction Misconceptions about dietary protein may exist due to unscientific information from commonly used sources such as social media. Understanding knowledge and attitudes towards protein is important for developing effective interventions to improve the dietary behaviors of U.S. college students. The objective of this study was to develop a questionnaire to evaluate college students' knowledge and attitudes towards dietary protein. Methods The questionnaire had 64 questions, including 8 demographic, 24 knowledge, 14 attitude, and 18 behavior questions. Construct validity of the knowledge questions was assessed by performing known-group comparisons using an independent t-test. Exploratory factor analysis (EFA) with principal axis factoring and a promax rotation was used to evaluate the factor structure of the attitude questions. Results Four hundred seventy participants (87.3% female) provided responses for the attitude questions. Fifty-five nutrition and fifty-one non-nutrition students provided responses for the knowledge questions. Three factors were retained: animal protein sources' relationship with human and environmental health (Factor 1); organic protein sources (Factor 2); and adequacy of the protein recommended dietary allowance (RDA) for weight loss and vegetarian diets (Factor 3). Mean knowledge responses were 66.4 ± 11.5% and 47.6 ± 16.4% for nutrition and non-nutrition students, respectively (t-test p-value for difference <0.001). Conclusion Protein attitudes appear multidimensional and correlated. Further testing is needed to confirm the three-factor model and to assess temporal reliability.
Introduction
Protein is a major structural and functional component of the human body, accounting for approximately 14%-16% of the total mass of a lean adult (1,2). Dietary protein intake recommendations vary by life cycle phase, disease state, and physical activity (2)(3)(4)(5)(6). The Recommended Dietary Allowance (RDA) for protein for healthy adult men and women is 0.8 grams per kilogram of body weight per day (g/kg/day), which is based on careful analyses of available nitrogen balance studies (2). Dietary protein recommendations vary across professional organizations, such as the International Society of Sports Nutrition, Academy of Nutrition and Dietetics, and Institute of Medicine, based on physical activity and age (2,3,5). The International Society of Sports Nutrition recommends 1.4-2.0 g/kg/day for an athlete who lifts weights or is training for an endurance event (3). The Academy of Nutrition and Dietetics recommends that a protein intake of 1.0-1.6 g/kg/day for older adults >60 years is safe and adequate to meet their needs, while the Institute of Medicine suggests older adults do not have elevated protein needs above 0.8 g/kg/day (2,5). The Acceptable Macronutrient Distribution Range (AMDR) for protein varies by age and is 10%-35% of total calories for adults >18 years (2). These discrepancies are of consequence for health professionals who provide dietary recommendations for patients and for young adults and athletes who seek recommendations from reputable sources.
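To make these guideline figures concrete, the short sketch below converts them into daily gram targets for a given body weight and energy intake. The helper function and the example inputs are hypothetical illustrations written for this discussion, not part of any cited guideline document.

```python
def protein_targets(weight_kg: float, calories_per_day: float) -> dict:
    """Illustrative daily protein targets (g/day) from the guideline figures cited above."""
    return {
        "RDA (0.8 g/kg)": 0.8 * weight_kg,
        "ISSN athletes (1.4-2.0 g/kg)": (1.4 * weight_kg, 2.0 * weight_kg),
        "AND older adults (1.0-1.6 g/kg)": (1.0 * weight_kg, 1.6 * weight_kg),
        # AMDR: 10-35% of total calories; protein supplies roughly 4 kcal per gram
        "AMDR (10-35% of kcal)": (0.10 * calories_per_day / 4, 0.35 * calories_per_day / 4),
    }

if __name__ == "__main__":
    # hypothetical example: 70 kg adult consuming 2200 kcal/day
    for label, grams in protein_targets(weight_kg=70, calories_per_day=2200).items():
        print(label, grams)
```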
The average protein intake in the United States (U.S.) is close to the Dietary Guidelines for Americans' recommendations for all age-sex groups; however, the average intake of different protein sources varies in comparison to the recommendations, especially for seafood (7). These dietary behaviors may be due to poor nutrition knowledge, poor attitudes towards food and nutrition, use of unreliable sources, lack of availability and/or accessibility to food sources, and/or unawareness of evidence-based recommendations (8,9). Dietary behaviors are influenced by many factors including nutrition knowledge and attitudes towards food and nutrition (8)(9)(10). Attitudes towards food and nutrition are formed in part by nutrition knowledge. Greater nutrition knowledge has been associated with positive attitudes towards food and nutrition, as well as increased adherence to dietary recommendations (8)(9)(10). Unhealthy dietary behaviors among college students, such as high intakes of fast food and low intakes of fruits and vegetables, have been observed (11,12). Additionally, lack of knowledge about protein has been found among college students, despite common use of protein supplements (13). Low levels of nutrition knowledge, as well as poor attitudes towards protein, may be due to unsubstantiated nutrition information (14).
There is limited research available on protein knowledge and attitudes among U.S. college students; and no validated instrument exists to accurately assess these constructs (9). It is crucial to understand protein knowledge and attitudes to design and implement appropriate education tools, increase awareness, and address misconceptions. Considering these limitations, the Dietary Protein Assessment Questionnaire (DPAQ) is under development to quantify dietary protein knowledge, attitudes, and sources of nutrition information so that researchers can explore the relationships between these constructs and outcomes. The DPAQ will ultimately help professionals create and provide appropriate educational interventions and resources to help improve the health of U.S. college students. This study provides valuable preliminary data on construct validity of the knowledge and attitude questions, which will guide future development of the DPAQ to become the first validated instrument for dietary protein.
2 Materials and methods
Item generation
The items for the DPAQ were generated using principles from Don Dillman's book on survey development (15). The DPAQ consisted of 64 questions on the knowledge, attitudes, and behaviors towards protein, including 8 demographic questions. The knowledge questions consisted of three answer choices (true, false, unsure) and were created to assess respondents' knowledge about dietary protein sources and requirements for various populations, such as physically active individuals and individuals adhering to a vegetarian diet. The attitude questions included a 5-point Likert scale ranging from "strongly disagree = 1" to "strongly agree = 5" with a neutral midpoint to assess respondents' attitudes towards plant and animal protein sources. The behavior questions consisted of multiple-choice answer options to assess respondents' dietary patterns regarding protein.
Nutritional science researchers reviewed the questionnaire for applicability, structure, reading level, and comprehension. The questionnaire was then updated according to feedback. Cognitive interviews were conducted using individuals with no nutrition background to assess information-processing needs of the questionnaire items (16). Researchers and statisticians reviewed the questionnaire to identify appropriate scaling of answer choices and the questionnaire was updated to create the final version prior to distribution. The DPAQ was then administered using PsychData (PsychData.com, LLC, State College, PA). See Supplementary material for the version of the DPAQ that was administered.
Sample and recruitment
In the fall of 2018, participants were recruited through an open call email sent to students attending Texas Woman's University. The email informed potential participants of the study's purpose, eligibility requirements, and included a link to the DPAQ. Participants were recruited with the help of professors and researchers to voluntarily complete the questionnaire. The online questionnaire link was also posted on social media sites and spread by word of mouth. Eligibility requirements included individuals ≥18 years of age with a reliable Internet source.
Data were collected from nutrition undergraduate students enrolled in a junior-level nutrition class and from non-nutrition undergraduate students enrolled in a junior-level education class as a comparison group for the knowledge section. Students were offered extra credit in their respective classes for successful completion of the questionnaire.
Approval of the study was obtained from Texas Woman's University Institutional Review Board. Informed consent was collected from each participant before participation in the questionnaire. Data were de-identified except for the nutrition and education students used for the knowledge section.
Validity measures and data analyses
For the attitude questions, participants were randomly partitioned into two analytic samples. One sample was used to identify possible factor structures, while the other was used to re-evaluate the factor structure. The correlation matrix and factor loading scores for both analytic samples were examined, and items were eliminated according to criteria.
Exploratory factor analysis (EFA) with principal axis factoring and a promax rotation was performed on the 14 attitude questions to identify the dimensionality of the attitude constructs for the subjects. The correlation matrix was examined for items exhibiting multicollinearity (r ≥ 0.9). Factor retention criteria included factor loadings ≥|0.4| and factors comprised of two or more items. Composite scores for the factors were calculated according to their factor loadings. Internal reliability was examined using Cronbach's α. The questionnaire responses were then compared across gender, education, and race/ethnicity using an ANOVA and adjusted for multiple comparisons using the Tukey-Kramer adjustment where necessary.
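The analyses in this study were run in SAS (noted below). Purely as an illustrative sketch of the same workflow (sampling adequacy, Bartlett's test, principal axis factoring with a promax rotation, a |0.4| loading cut-off, and Cronbach's α), an open-source equivalent might look roughly like the following; the file name, column handling, and item grouping are hypothetical.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert items (rows = respondents, columns = items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# hypothetical data: complete 5-point Likert responses to the attitude items
attitudes = pd.read_csv("dpaq_attitudes.csv").dropna()

chi2, bartlett_p = calculate_bartlett_sphericity(attitudes)  # suitability of correlation matrix
_, kmo_total = calculate_kmo(attitudes)                      # sampling adequacy (KMO)

efa = FactorAnalyzer(n_factors=3, method="principal", rotation="promax")
efa.fit(attitudes)
loadings = pd.DataFrame(efa.loadings_, index=attitudes.columns)
retained = loadings[(loadings.abs() >= 0.4).any(axis=1)]     # keep items loading >= |0.4|

print(f"KMO = {kmo_total:.2f}, Bartlett p = {bartlett_p:.3g}")
print(retained.round(2))
# internal reliability of (for example) the first five retained items
print("alpha:", round(cronbach_alpha(attitudes[retained.index[:5]]), 2))
```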
The knowledge questions were evaluated for construct validity by comparing mean scores between nutrition and non-nutrition majors using an independent samples t-test. The correct answers were totaled for each student to determine the mean scores. Answers marked "unsure" were given a value of zero and did not contribute to overall mean scores. A p < 0.05 was considered statistically significant for all analyses unless otherwise indicated. All data analysis was performed with SAS® software, Version 9.4 of the Statistical Analysis System (RRID:SCR_008567). Copyright © 2013 SAS Institute Inc. SAS and all other SAS Institute Inc. product or service names are registered trademarks or trademarks of SAS Institute Inc., Cary, NC, United States.
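For completeness, a minimal sketch of the known-groups comparison described above is shown below: per-student percent scores, an independent samples t-test, and Cohen's d with a pooled standard deviation. The answer keys and score arrays are hypothetical placeholders, not study data.

```python
import numpy as np
from scipy import stats

def percent_correct(responses, answer_key):
    """Percent correct for one student; 'unsure' (or any non-matching choice) scores zero."""
    return 100.0 * np.mean([r == k for r, k in zip(responses, answer_key)])

# toy example of scoring a single student against a 5-item key
print(percent_correct(["true", "false", "unsure", "true", "false"],
                      ["true", "false", "true", "true", "true"]))  # 60.0

# hypothetical per-group percent scores (placeholders, not study data)
nutrition = np.array([66.0, 72.0, 58.0, 81.0, 63.0])
education = np.array([48.0, 41.0, 55.0, 37.0, 52.0])

t, p = stats.ttest_ind(nutrition, education)

# Cohen's d using the pooled standard deviation of the two groups
n1, n2 = len(nutrition), len(education)
pooled_sd = np.sqrt(((n1 - 1) * nutrition.var(ddof=1) +
                     (n2 - 1) * education.var(ddof=1)) / (n1 + n2 - 2))
d = (nutrition.mean() - education.mean()) / pooled_sd
print(f"t = {t:.2f}, p = {p:.4f}, Cohen's d = {d:.2f}")
```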
Questionnaire participants
Four hundred seventy responses were received and 450 provided complete demographic data. Most participants were female (87.3%), and the mean age was 28.2 ± 11.4 y. See Table 1 for complete demographic information.
Exploratory factor analysis #1
Two hundred twenty-five participants were randomized to the first EFA; 74.2% provided complete responses for the attitude questions. The data exhibited good sampling adequacy (Kaiser-Meyer-Olkin test = 0.8) and the correlation matrix was suitable for structure detection (Bartlett's test <0.001). There was some evidence of multicollinearity observed within the correlation matrix (determinant = 0.006). A total of five items did not meet inclusion criteria (primary factor loading ≥|0.4|) and were removed for the subsequent EFA. The first EFA retained four factors and explained 62.3% of the total variance.
Exploratory factor analysis #2
Two hundred twenty-five respondents were randomized to the second EFA; 59.1% provided complete responses for the attitude questions. The data demonstrated good sampling adequacy (Kaiser-Meyer-Olkin test = 0.76) and the correlation matrix was suitable for structure detection (Bartlett's test <0.001). The correlation matrix was examined for items exhibiting extreme multicollinearity (determinant = 0.007). There was some evidence of multicollinearity observed among the statements "meat consumption is unhealthy" and "meat should not be consumed" (r = −0.90). Three factors were retained, which were comprised of the nine items remaining from EFA #1 and explained 73.9% of the total variance. See Table 2 for the variance explained by each factor. All items displayed a factor loading ≥|0.4|.
CKD, chronic kidney disease; *p < 0.05; group difference for the continuous variable was assessed using the independent t-test; group differences for categorical variables were assessed using chi-square test for independence.
Factor 1 included five items related to animal protein sources and their relationship with human and environmental health. Factor 2 included two items pertaining to the healthfulness of organic protein sources. Factor 3 included two items describing the adequacy of the RDA for protein with respect to weight loss and adherence to a vegetarian diet. Factor 1 shared a moderate, inverse relationship with Factor 2 (r = −0.47), and a weak, positive relationship with Factor 3 (r = 0.29). Factor 2 shared a weak, inverse relationship with Factor 3 (r = −0.19). The Cronbach's α coefficients for Factor 1 (α = 0.87) and Factor 2 (α = 0.83) displayed evidence of good internal reliability. Satisfactory internal reliability was observed for Factor 3 (α = 0.65).
Knowledge towards protein
Fifty-five nutrition undergraduate students' and 51 education undergraduate students' responses were analyzed. The majority of participants were female (95.3%) and the mean age was 27.9 ± 11.2 years. The nutrition students' mean test score was 66.4 ± 11.5%, with scores ranging from 42% to 92%. The education students' mean test score was 47.6 ± 16.4%, with scores ranging from 17% to 79%. A significant difference in mean test score values was observed between nutrition and education students (18.8 ± 14.1; p < 0.001). Cohen's d indicated a large standardized difference between nutrition and education mean scores (d = 1.33).
Discussion
Currently, no validated questionnaires exist that attempt to measure the knowledge and attitude constructs of protein among the college student population (17,18). As a result, studies that evaluated knowledge and attitudes towards specific macronutrients lacked validated instruments (17)(18)(19)(20)(21)(22)(23). This study provided evidence of construct validity for the DPAQ's protein knowledge and attitudes.
The EFA identified a multidimensional structure, and the original 14 attitude items could be shortened by five items without decreasing internal reliability. Five items loaded strongly with human/environmental health (Factor 1). Items contributing positively to the Factor 1 score included "meat production is harmful to the environment," "meat should not be consumed," and "the impact of climate change can be reduced by consuming less meat, dairy products, and eggs." Items contributing negatively included "meat consumption is unhealthy" and "egg consumption is harmful to human health." The inverse contributions of the items "meat consumption is unhealthy" and "meat should not be consumed" to the overall Factor 1 score suggest that college students consider environmental health more than human health when determining food items that should and should not be consumed. This could be a misconception among college students due to social media platforms being one of their main sources of nutrition and health information.
Two items loaded strongly with organic sources (Factor 2), which suggests that college students believe that organic protein sources are healthier to consume and better for the environment compared to conventional (non-organic) protein sources. Although the exact extent is unknown, this shows that college students place some value on organic protein sources. Two items also loaded strongly with the protein RDA (Factor 3), which shows that college students believe the RDA for protein is adequate in terms of healthy weight loss and for people adhering to a vegetarian diet. The factor structure provides evidence that attitude constructs towards protein are multidimensional. Future development of the DPAQ should further develop the attitude constructs. An analysis of the relationship between nutrition information sources and the attitude constructs would be beneficial to identify strategies to educate college students. Adding more items related to Factors 2 and 3 may help define the factors and may strengthen the correlations observed among the protein attitudes measured.
The significant difference in mean test scores between the undergraduate nutrition and non-nutrition (education) students indicated that the DPAQ instrument had adequate construct validity. The nutrition students' mean test score was greater than the non-nutrition students', which has been observed in previous studies (23)(24)(25)(26)(27). The mean test score of nutrition students in the current study was lower than those in previous studies, which may be due to many factors, such as administering the questionnaire without prior notice or the wording of knowledge statements (19)(20)(21)(22). It is important to note that the instruments used in previous studies had content not exclusively on protein, but included content related to general nutrition and salt knowledge among adult and student populations (18)(19)(20)(21)(22).
While studies have shown dietary patterns can be influenced by eating motives and the perceived impacts on human health and the environment, more research is needed (18)(19)(20)(21)(22)(28). With further development, the DPAQ may be used to identify knowledge and attitudes towards protein on the topics of human/environmental health, organic sources, and adequacy of the RDA, as well as other topics needed to capture the full nature of protein attitudes.
Due to increased popularity of social media platforms, there has been a commensurate rise in the amount of false nutrition information presented to the public (29)(30)(31). The lack of "media literacy" may contribute to this wide range of false information. Therefore, it is necessary to create validated instruments to assess protein attitudes and knowledge among the public. Identifying protein knowledge and attitudes will facilitate the design and development of education tools to increase awareness and decrease misconceptions currently associated with protein. Interventions targeting various factors, such as eating motives and reliable nutrition sources, may also lead to improved understanding of evidence-based protein intake.
The strengths of this study include the sample sizes, internal consistency of items, and utilizing an evidence-based approach for questionnaire development; however, several limitations exist. Although participants were homogenous in gender, age, and race, results may not be generalizable to other populations. It is important to examine validity in a more diverse population before conducting broader population studies. Like any self-reported instrument, this study is also limited by the truthfulness of participants. Satisfactory internal reliability (α ≤ 0.70) was identified for Factor 3, which may provide evidence of inconsistent answers to attitude questions regarding the protein RDA (32). Future studies should focus on increasing internal reliability of the DPAQ by adjusting the number of items, rewording questions, and reformatting the instrument. The instrument's validity should be examined in a more diverse population with a more equal gender distribution to increase generalizability to the college student population, as well as provide more complex measurements to explore the attitude constructs' multidimensionality.
Conclusion
The results of this study provide preliminary evidence for the validity of the knowledge and attitude constructs within the DPAQ to be used among the college student population. The instrument, and, in particular, the topic of the "protein RDA," requires further development. Attitudes towards protein seem multidimensional and correlated. Additional testing is needed on the DPAQ to confirm the three-factor model and to estimate test-retest reliability. A multidimensional approach seems crucial for future development of the DPAQ, as well as for effective interventions. Future development should focus on increasing internal reliability by adjusting the number of items, rewording questions, and reformatting the instrument. This will allow the DPAQ to be administered to more diverse populations, which will enable researchers to accurately measure protein knowledge and attitudes to create effective nutrition interventions for college students.
TABLE 1
Demographic data for questionnaire participants randomized to exploratory factor analysis.
TABLE 2
Exploratory factor analysis pattern and structure matrices with communalities and explained variance by factor (n = 225).
Physical and Chemical Processes and the Morphofunctional Characteristics of Human Erythrocytes in Hyperglycaemia
Background: This study examines the effect of graduated hyperglycaemia on the state and oxygen-binding ability of hemoglobin, the ratio of phospholipid fractions and their metabolites in the membrane, the activity of proteolytic enzymes and the morphofunctional state of erythrocytes. Methods: Conformational changes in the molecule of hemoglobin were determined by Raman spectroscopy. The structure of the erythrocytes was analyzed using laser interference microscopy (LIM). To determine the activity of NADH-methemoglobin reductase, we used the P.G. Board method. The degree of glycosylation of the erythrocyte membranes was determined using a method previously described by Felkoren et al. Lipid extraction was performed using the Bligh and Dyer method. Detection of the phospholipids was performed using the V. E. Vaskovsky method. Results: Conditions of hyperglycaemia are characterized by a low affinity of hemoglobin to oxygen, which is manifested as a parallel decrease in the content of the hemoglobin oxyform and growth of the deoxyform, methemoglobin and membrane-bound hemoglobin. The degree of glycosylation of membrane proteins and hemoglobin is high. In hyperglycaemia, erythrocytic membranes show a reduced content of all phospholipid fractions with a simultaneous increase in lysoforms, free fatty acids and diacylglycerol (DAG). Stepwise hyperglycaemia in the incubation medium of human erythrocytes results in an increased content of peptide components and general trypsin-like activity in the cytosol, with a simultaneously decreased activity of μ-calpain and caspase 3. Conclusions: Metabolic disorders and damage to cell membranes during hyperglycaemia cause an increase in the population of echinocytes and spherocytes. The resulting disorders are accompanied by a high probability of intravascular haemolysis.
INTRODUCTION
Various physiological, pathological and nutritional conditions, such as physical activity, consumption of large amounts of sweet food, emotional stress, metabolic syndrome and diabetes, are accompanied by a high level of glucose in blood plasma. The high content of glucose in plasma increases the probability of non-enzymatic glycosylation of proteins, which induces damage to the cell membrane due to nonspecific aggregation of protein molecules and changes in protein-protein and protein-lipid interactions (Vasilyeva, 2005).
Taken together, these changes initiate the rapid aging of cells and the human organism. Metabolic syndrome significantly accelerates the development of atherosclerotic vascular damage and provokes earlier disability and death. During metabolic syndrome, which is currently the most common pathology of metabolic disorders, glycosylation of erythrocytic membrane proteins induces the impairment of rheological parameters of blood, low deformability and mobility of erythrocytes, high aggregation of erythrocytes and thrombocytes, high blood viscosity, and arterial hypertension (Shilov et al., 2008).
In addition, glycosylation of erythrocytic membrane proteins and hemoglobin during hyperglycaemia increases adhesion to endothelial cells, resulting in membrane destabilization (a change in the asymmetry of membrane phospholipids), changes in the viscoelastic properties of cells and changes in their morphology (Riquelme et al., 2005). Taken together, these changes can impair the oxygen-transport function of erythrocytes and reduce erythrocyte lifespan. Furthermore, the number of damaged and aging erythrocytes in the circulating population will increase (Lang et al., 2006; Mindukshev et al., 2010).
The biochemical mechanisms of impaired growth in human erythrocytes during the development of hyperglycaemia have not been sufficiently investigated. In particular, there are scarce data on the composition and status of the lipid phase of the membranes, the relationship of these processes with the activity of methemoglobin formation and the activity of apoptotic enzymes. Moreover, there is a lack of data in the literature on the effect of these processes on the morphofunctional state of erythrocytes and their oxygen-transport properties.
Therefore, we aimed to perform a comprehensive study of the effects of graduated hyperglycaemia on the composition of phospholipids, the activity of proteolytic enzymes and the consequent effect of these ongoing processes on the morphofunctional state of erythrocytes and the oxygen-transport properties of hemoglobin.
Blood Samples
This study was performed using pure erythrocyte fractions isolated from freshly obtained, anticoagulated donor blood from the regional blood transfusion station. The average age of the donors was 28.4 ± 1.7 years. The research was approved by the Local Ethics Board at Mordovia State University in accordance with the principles of Good Clinical Practice (protocol number 12 of 17 September 2014). Informed consent statements were signed by all donors participating in the experiment. Erythrocytes were washed 4 times with PBS until pure fractions were formed and centrifuged at 600 g for 10 min at +4 °C. Incubation of the erythrocytes was performed in a Ringer's solution at a ratio of 2:1 at 37 °C for 30 min with gentle mixing every 5 min (to avoid haemolysis). For incubation, a Ringer's solution of the following composition was used (in mM/l): NaCl 136.75; KCl 2.68; NaHCO3 11.9; Na2HPO4 0.342; MgCl2 0.105; CaCl2 1.8; glucose 5 (normoglycaemia).
Hyperglycemia Model
Hyperglycaemic conditions were characterized by the glucose content of the medium, in which each graduated increase in concentration was accompanied by an equimolar decrease in the content of sodium chloride: 10 mM/l glucose with 131.75 mM/l NaCl; 15 mM/l glucose with 126.75 mM/l NaCl; 20 mM/l glucose with 121.75 mM/l NaCl. The incubation medium was separated by centrifugation in the same manner, and erythrocytes were analyzed using spectrophotometry (Zavodnik and Lapshina, 1996) to determine the ratio of the hemoglobin forms.
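As a quick arithmetic check of the equimolar substitution described above (baseline Ringer's solution: 5 mM/l glucose with 136.75 mM/l NaCl), a minimal sketch follows; the function name is a hypothetical helper written for this text, not part of the study's protocol.

```python
def nacl_for_glucose(glucose_mM: float,
                     baseline_glucose_mM: float = 5.0,
                     baseline_nacl_mM: float = 136.75) -> float:
    """NaCl concentration (mM/l) after equimolar replacement of NaCl by added glucose."""
    return baseline_nacl_mM - (glucose_mM - baseline_glucose_mM)

for g in (5, 10, 15, 20):
    print(f"{g} mM/l glucose -> {nacl_for_glucose(g):.2f} mM/l NaCl")
# expected: 5 -> 136.75, 10 -> 131.75, 15 -> 126.75, 20 -> 121.75
```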
Conformation and Properties of Hemoglobin
Conformational changes in the hemoglobin molecule were determined by Raman scattering on an inVia Renishaw (UK) microscope with a short-focus, high-luminosity monochromator (focal distance 250 mm). For excitation of the Raman spectra we used a laser with a wavelength of 532 nm, a maximum radiation power of 100 mW and a 100× objective. The data recorder was a CCD detector (1,024 × 256 pixels with Peltier cooling to −70 °C) with a grating of 1800 lines/mm. The digitized spectra were processed in WIRE 3.3 (part of the unit's software). To analyse the conformation of globin haematoporphyrin, we used specific bands of the Raman spectrum, which enabled an estimate of the relative amount of oxyhemoglobin and the ability of hemoglobin to bind and release ligands, including oxygen (Brazhe et al., 2009).
Laser Interference Microscopy
The structure of the erythrocytes was analyzed using laser interference microscopy (LIM) in vitro (Byazhe et al., 2006; Yusipovich et al., 2011; Revin et al., 2016) with an MII-4 system (Russia). The measurements were performed at room temperature; a suspension of erythrocytes in the incubation medium (1:2) was placed on mirror glass, and the smear was covered with a cover glass. Images of 10 sites with a monolayer arrangement of cells in the interference channel were obtained in reflected light for each sample. The images were processed using FIJI (Schindelin et al., 2012). The structure of the erythrocytes was assessed by registering the average value of the optical path difference (OPD) and the phase image area using at least 100 cells from each sample. The phase volume of the erythrocyte was calculated using the following formula: V = (F_mean × S)/(n_cell − n_m), where F_mean is the mean value of the optical path difference, proportional to the thickness of the erythrocyte; S is the phase image area of the cell; n_cell is the refractive index of the erythrocyte, equal to 1.405; and n_m is the refractive index of the surrounding solution (1.333).
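A minimal sketch of how a phase volume of this kind can be computed from a segmented OPD map is given below; it assumes the OPD image of a single cell is already available in nanometres, and the function name, pixel size and example values are hypothetical.

```python
import numpy as np

def phase_volume(opd_map_nm: np.ndarray, pixel_area_um2: float,
                 n_cell: float = 1.405, n_medium: float = 1.333) -> float:
    """Phase volume of one erythrocyte from its optical path difference (OPD) map.

    Each pixel's OPD divided by the refractive-index contrast gives the local cell
    thickness; summing thickness * pixel area over the cell gives the volume.
    OPD in nm and pixel area in um^2 yield a volume in um^3 (i.e. femtolitres).
    """
    thickness_um = (opd_map_nm / 1000.0) / (n_cell - n_medium)  # nm -> um
    return float(thickness_um.sum() * pixel_area_um2)

# example: a synthetic 20 x 20 pixel OPD map with a uniform OPD of 100 nm
opd = np.full((20, 20), 100.0)
print(f"phase volume: {phase_volume(opd, pixel_area_um2=0.1):.1f} fL")
```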
Determination of the Activity of NADH-Methemoglobin Reductase
To determine the activity of NADH-methemoglobin reductase, we used the P.G. Board method (Board, 1981). The degree of glycosylation of the erythrocyte membranes was determined using a method previously described by Felkoren et al. (1991). Before the proteolytic enzymes were assayed, the suspension of erythrocytes was haemolysed by the addition of a 20 mM Tris-HCl buffer containing 2 mM EDTA, pH 7.5, in a ratio of 1:9. The haemolysate was kept at 2-4 °C for 15 min and centrifuged at 16,000 × g for 40 min.
The supernatant was used to determine the activity of µ-calpain; the enzyme was isolated by ion exchange chromatography (3 × 15 column; DEAE-cellulose) and eluted with a 0.1-0.4 M NaCl gradient. Next, the mean calpain activity of the fractions was determined using the incubation medium previously described by Sorimachi et al. (1997) (imidazole buffer, 4% casein, 50 mM CaCl2, 50 mM cysteine) (Stroev et al., 1991; Elce John, 2000; Sorimachi et al., 2000). Then, the general proteolytic activity was measured from the increase in absorbance at a wavelength of 280 nm following incubation of the haemolysate and protein precipitation with 5% trichloroacetic acid (TCA) (Bazarnova et al., 2008). The fractions with calpain activity were released during movement along a reverse gradient from 0.2 to 0.1 M sodium chloride, immediately after the release of hemoglobin. The activity of µ-calpain was calculated as the difference between the activity with and without the inhibitor in the incubation medium (phenylmethylsulfonyl fluoride, 2 mM PMSF, Sigma, USA). The content of peptides in the incubation medium and erythrocytes was determined using the Lowry method with the Bio-Rad DC Protein Assay reagent kit (Bio-Rad). The content of peptides in the erythrocytes was determined after deproteinisation with 5% TCA. The active concentration of caspase-3 in erythrocytes was recorded using an enzyme immunoassay (BD Biosciences, USA) with a Stat Fax 3200 microplate reader (USA).
Analysis of Membrane Phospholipids
To analyse the state of the membrane phospholipids, we isolated membranes from the haemolysate using a 5 mM NaH2PO4 + 0.5 mM PMSF (phenylmethylsulfonyl fluoride) solution, cooled to 0 °C, pH 8.0, in a ratio of 1:20. The mixture was incubated for 10 min at 4 °C and then centrifuged at 20,000 × g for 40 min (0 °C). The supernatant was removed and the residue was resuspended in the lysis solution and centrifuged in the same manner. The sample was washed three times. Lipid extraction was performed using the Bligh and Dyer method (Bligh and Dyer, 1959).
To separate the phospholipid fractions we used one-dimensional chromatography with a chloroform/methanol/glacial acetic acid/water mobile phase in a ratio of 60/50/1/4 (Evans et al., 1990). Chromatographic separation was performed in a thin layer of silica gel deposited on a glass plate. Standard HPTLC Silica gel 60 F254 plates (Merck, Germany) were used. To separate DAG and FFA we used a heptane/diethyl ether/glacial acetic acid mixture (60/40/2 by volume). Detection of the phospholipids was performed using the V. E. Vaskovsky method (Vaskovsky et al., 1975). The amount of phospholipid fractions and free fatty acids in the erythrocytic membranes was determined using a TLC Scanner 3 densitometer (Camag, Switzerland) at an absorption wavelength of 360 nm with a deuterium lamp and winCATS software.
Statistical Analysis
The statistical processing was performed using the Statistica software package. To determine the significance levels, Student's t-test was used. The number of repeats in the variation series for the different indicators ranged from 8 to 20.
RESULTS
Hyperglycaemic conditions are characterized by a low affinity of hemoglobin to oxygen, which is manifested as a parallel decrease in the content of the hemoglobin oxyform and growth of the deoxyform (Table 1). Upon a 30-min incubation of the cells, an increase in the glycosylation of hemoglobin occurred only when the glucose concentration was 20 mM/l, whereas a small but reliable growth in methemoglobin formation, in the membrane-bound form of hemoglobin and in the degree of glycosylation of proteins in the erythrocytic membranes was already observed when the glucose level was doubled (10 mM/l) (Table 1).
The data obtained using the spectrometric measurements were confirmed by Raman scattering ( Table 2).
We found a reliable decrease in the relative content of oxygenated hemoglobin, by 2% on average. Moreover, we obtained similar spectrophotometric percentages of the hemoglobin forms (Tables 1, 2). The conformational changes observed indicate an increased ability of hemoglobin to bind ligands and the reliable occurrence of low hemoglobin affinity to oxygen in the case of 20-mM/l hyperglycaemia, at approximately 11%. Furthermore, there is a low intensity of the symmetric and asymmetric vibrations of the pyrrole rings, which indicates lower conformational mobility of the haeme structures and impairment of the effective binding of ligands (Table 2).
At 15- and 20-mM/l glucose hyperglycaemia, we detected a high level of lysophosphatidylcholine (LPC) in the erythrocyte membrane, and the content of phosphatidylcholine (PC) was reliably lower under conditions of increasing amounts of glucose, i.e., up to 10 mM/l. In addition, we recorded a low level of sphingomyelin (SM) and of the phosphatidylinositol + phosphatidylserine (PI + PS) fraction. Strong hyperglycaemia (20 mM/l) is characterized by a low content of all fractions with a simultaneous increase in LPC (Table 3). The increase in lysophosphatides under strong 20-mM/l hyperglycaemia was 160%. The SM level was 2 to 3 times lower, PC 3 times, PI + PS 2.35 times and PEA 4 times lower. The low content of phospholipids and high content of lysophosphatides indicate the activation of phospholipases A and C, which is accompanied by loosening of the membrane and higher ion permeability (Mills and Needham, 2005).
Analysis of the content of free fatty acids (FFA) and the product of the reaction catalyzed by phospholipase C and diacylglycerol (DAG) showed growth that was proportional to the level of hyperglycaemia, and the percentage of growth with 20 mM/l hyperglycaemia is 133 and 182%, respectively (Table 4).
These results clearly demonstrate the activity of phospholipase.
The normal ratio of phospholipid fractions predetermines the effective regulation of active and passive transport of substances, the sensitivity of cells to the action of ligands and the activity of membrane-bound enzymatic systems. The optimal operation of the ion-transporting systems (the Ca2+- and Na+,K+-ATPases) is facilitated by a stable PEA content. During severe hyperglycaemia (20 mM/l), a sharp decrease in PEA content will be accompanied by a defect in ion-transporting processes (Table 3). Moreover, it will trigger a general change in the structure of the membrane phospholipid bilayer, as PEA is a structural phospholipid (Delaunay, 2002). Importantly, the low content of PEA causes a disorder in endoglobular homeostasis and inhibition of the antioxidant activity of cells (Afanasiev et al., 2007). The low content of phospholipids with polyunsaturated fatty acids (PC, PI, PEA) triggers defatting of the membranes and a high cholesterol/PL ratio, which is accompanied by changes in physicochemical properties, namely high microviscosity. These findings indicate that hyperglycaemia is followed by a disruption of membrane permeability.
In hyperglycaemia, human erythrocytes demonstrate a high content of peptide components, which was especially high when the content of glucose in the incubation medium was 10 mM/l (Table 5). A further increase in the degree of hyperglycaemia was accompanied by an increased release of protein compounds into the incubation medium due to higher cell membrane permeability (Table 5).
Measurement of the proteolytic activity revealed interesting patterns. Isolation of the µ-calpain fractions and determination of the mean activity of the enzyme showed a decrease with increasing degree of hyperglycaemia, which was statistically significant when the concentration of glucose reached 20 mM/l. A low activity of caspase 3 was also detected, which was most pronounced under conditions of severe hyperglycaemia (Table 6). Increased proteolytic activity could be detected only by measuring the overall trypsin-like activity of the cytosol against the high content of peptides in the haemolysate, without isolation of the enzymes and addition of exogenous proteolytic substrates (Table 6).
Laser interference microscopy showed distinct structural changes of erythrocytes with increased concentrations of glucose in the incubation medium ( Table 7, Figures 1-4).
During normoglycaemia, erythrocytes appeared similar to discocytes, with a normal distribution of hemoglobin inside the cells. During hyperglycaemia, the estimated parameters were high overall. In addition to the relative areas of the erythrocytes, all indicators were high under conditions of 20-mM/l hyperglycaemia (Table 7).
With a high content of glucose in the incubation medium, echinocytes accumulated in the cell population, and the cell profile demonstrated visible outgrowths on the erythrocyte surface. At up to 15-mM/l glucose, spherocytes appeared, and the cells appeared more swollen (Figure 3). At 20-mM/l glucose, the cells were slightly compressed compared with 15-mM/l glucose; however, the other changes became more pronounced (Table 7, Figure 4).
DISCUSSION
Disorder in the metabolic and functional state of human erythrocyte membranes at a high concentration of glucose in the incubation medium occurs primarily due to the general activation of biochemical and physicochemical processes in the cells. The shift in the dissociation curve in the direction of deoxygenation promotes increased methemoglobin formation, as deoxyHb is less resistant to auto-oxidation compared to the oxyform (Zavodnik and Lapshina, 1996; Ivanov, 2001). The gradual reduction of the conformational mobility of the haeme structures (Table 2) is most likely caused by the high degree of glycosylation of the protein component of hemoglobin and its penetration into the membrane, followed by a gradual increase in membrane-bound hemoglobin (Table 1).
The observed increase in the degree of glycosylation of membrane proteins induces changes in the state of the membrane phospholipids. The change in protein-lipid interactions results in the activation of membrane phospholipases, causing an increase in LPC and DAG, and results in a high content of FFA in the erythrocytic membranes. Membrane permeability under these conditions increases, and peptide and protein compounds are released into the incubation medium. Some reduction in their release at 20-mM/l hyperglycaemia may be accounted for by the low activity of µ-calpain and caspase 3 due to their lower affinity for the glycosylated form of hemoglobin. However, their content in the erythrocytes and incubation medium was higher compared to normoglycaemia, most likely due to the continued high trypsin-like activity of the cell cytosol.
Metabolic disorders and injury to the cell membrane trigger morphological changes in erythrocytes. Previous research findings have shown a gradual accumulation of structurally damaged cells in the population. First, in 10-mM/l hyperglycaemia, the number of echinocytes increases (reversible morphological changes) and the phase volume and thickness of the cells increase. Next, stomatocytes and spherocytes appear in the population. The LIM microscopy results indicated strong cell swelling. In 20-mM/l hyperglycaemia, the number of spherocytes increased. Furthermore, the cells showed some shrinkage, but the surface/volume ratio reached a critical level. Thus, haemolytic destruction is very likely.
The destruction of erythrocytes in the bloodstream in hyperglycaemia will result in a chronic inflammation response, which can damage the body's blood vessels.
Changes in the erythrocytic membrane not only result in morphological disorders of the whole erythrocyte, but they can also consequently result in conformational changes in hemoglobin as well as in its oxygen-binding and oxygen-transport function. A low content of oxyhemoglobin, weak symmetric and asymmetric oscillations of the pyrrole rings and an increase in hemoglobin affinity to ligands, including oxygen, have been demonstrated.
CONCLUSION
Taken together, we demonstrate the significant risk posed by prolonged high glucose concentrations in plasma. Nonspecific glycosylation of membrane proteins and of hemoglobin in erythrocytes results in a weak affinity of hemoglobin to oxygen and its loss by cells en route to the tissues of the human body. Moreover, the resulting damage to membranes and cell metabolism increases the probability of an accumulation of functionally defective, aging erythrocytes in the circulating population. More rapid spherization of erythrocytes, in the absence of the physiological mechanisms of apoptosis, introduces the possibility of necrotic cell death in the bloodstream, resulting in the gradual development of chronic states of hypoxia and inflammation.
AUTHOR CONTRIBUTIONS
VR: designed experiments, interpreted data, and wrote the manuscript. NK, NG, IG, IS, AT, AP, AS, and ER: designed and performed experiments, and wrote the manuscript. KP: performed experiments and analyzed results. NZ and JCB: made substantial contributions to the analysis and interpretation of the data, and revised and critically reviewed the manuscript.
Neoadjuvant Immunotherapy in Resectable Non-Small Cell Lung Cancer. A Narrative Review
Lung cancer is one of the most common malignant tumors and it is the leading cause of cancer-related mortality worldwide. For early-stage Non-Small Cell Lung Cancer (NSCLC), surgical resection is the treatment of choice, but the 5-year survival is still unsatisfying, ranging from 60% to 36% depending on the disease stage. Multimodality treatment with adjuvant chemotherapy did not lead to clinically relevant results, improving survival rates by only 5%. Recently, immune checkpoint inhibitors (ICIs) are being studied as neoadjuvant treatment for resectable NSCLC too, after the satisfactory results obtained in stage IV disease. Several clinical trials are evaluating the safety and feasibility of neoadjuvant immunotherapy and their early findings suggest that ICIs could be better tolerated than standard neoadjuvant chemotherapy and more effective in reducing cancer local recurrence and metastasis. The aim of this review is to retrace the most relevant results of the completed and the ongoing clinical trials, in terms of efficacy and safety, but also to face the open challenges regarding ICIs in neoadjuvant setting for resectable NSCLC.
Background
Lung cancer is one of the most common malignant tumors and it is the leading cause of cancer-related mortality worldwide, with 1.76 million victims annually [1]. Non-Small Cell Lung Cancer (NSCLC) accounts for about 80-85% of them [2]. Thanks to the spread of computed tomography (CT) and international screening programs, an increasing number of patients receive their diagnosis in the early stage, when surgical resection with curative intent is still possible and represents the best chance of cure [3].
However, the 5-year survival rates of these patients are still unsatisfying, ranging from 60% in stage IIA to 36% in stage IIIA [4]. Moreover, between 30% and 60% of them will develop a metastatic disease after radical resection [5].
In addition, it is now well established that resectable NSCLC benefits from multimodality treatment rather than surgery alone. Adjuvant chemotherapy, for example, has been shown to improve 5-year survival rates by around 5%, though this can hardly be regarded as a satisfactory result [6].
At the same time, we have limited evidence regarding the efficacy of induction chemoradiotherapy (CRT) followed by surgery. The INT0139 study, a phase 3 trial, compared standard CRT vs. induction CRT (45 Gy) followed by surgery for pathologically diagnosed cN2 resectable NSCLC [7]. In an exploratory subset analysis, pneumonectomy after CRT induction was associated with a treatment-related mortality rate of 26% and a worse OS than radical CRT. However, lobectomy after CRT induction was associated with a treatment-related mortality rate of 1% and significantly improved OS compared to radical CRT (median OS, 33.6 vs. 21.7 months, p = 0.002).
ICIs improve the prognosis of patients with stage IV NSCLC [8][9][10][11][12][13]. These results have encouraged the earlier use of immunotherapy in the adjuvant and neoadjuvant settings. Several large-scale phase 3 studies are in progress in an adjuvant setting, investigating the efficacy of ICIs after complete resection in patients with pathological stage IB to IIIA NSCLC. There are currently studies evaluating ICI monotherapies as well as combinations of ICIs and conventional chemotherapy.
Neoadjuvant therapy may control micrometastases in the early phases and may offer an opportunity to evaluate drug sensitivity. Adjuvant therapy, by contrast, may not be deliverable if patients are not fit for chemotherapy after surgery. Neoadjuvant therapy can usually be completed with good compliance, but may increase postoperative complications and treatment-related adverse events. This is why there is an urgent and unmet need to seek novel and more effective treatments for NSCLC, such as neoadjuvant therapy. In particular, neoadjuvant immunotherapy with immune checkpoint inhibitors (ICIs) is being explored in an increasing number of studies and clinical trials that are moving from metastatic disease to early-stage NSCLC, looking for efficacy in resectable patients as well.
The Rationale of Neoadjuvant Immunotherapy in NSCLC
ICIs have been successfully used against many solid tumors such as triple-negative breast cancer, melanoma, and urothelial carcinoma [15][16][17]. The PACIFIC trial then demonstrated the efficacy and the great potential of immunotherapy in lung cancer [18]. In this prospective randomized trial, patients with unresectable stage III lung cancer were randomized after chemo-radiation therapy to durvalumab or placebo. After 24 months of follow-up, the overall survival rate was significantly higher in patients who received durvalumab than in those assigned to placebo (66.3% vs. 55.6%). Moreover, the trial highlighted that 85% of durvalumab-treated patients who experienced disease progression presented with local recurrence (lung or regional lymph nodes).
The clinical data from this trial stated not only the efficacy of durvalumab in prolonging the overall survival of non-resectable NSCLC, but also suggested that immunotherapy could be applied in a multimodality treatment alongside surgery too, thanks to its control over distant metastases.
The rationale of using immunotherapy against NSCLC (and any other type of malignancies) lies in the clinical need to inhibit the tumoral pathways that downregulate the patients' immune T-cell response. The majority of the drugs used (durvalumab, pembrolizumab, nivolumab, and atezolizumab) modulate the interaction of programmed cell death protein 1 (PD-1)/programmed cell death protein 1 ligand (PD-L1). The antibodies lead to the blockade of this immunosuppressive interaction and allow the patient's T cells to recognize the antigen-presenting tumor cells [19].
Ipilimumab, on the other hand, targets cytotoxic T-lymphocyte-associated protein 4 (CTLA-4) to block its pathway and restore the immune T-cell response [20]. This is why the use of immunotherapy in the neoadjuvant setting seems a reasonable and effective application: the entire tumor exhibits a high number of antigens that can be presented by antigen-presenting cells to induce a stronger and more prolonged immune T-cell response, preventing tumor recurrence.
The majority of the advantages in administering ICIs as neoadjuvant therapy have been highlighted by Liu and colleagues in their preclinical study [21]. The authors used two immunocompetent murine models of triple-negative breast cancer and treated them with neoadjuvant or adjuvant immunotherapy (anti-PD-1 alone or in combination with anti-CD137). The study showed that the group treated with neoadjuvant immunotherapy had a long-term survival of 40% versus 0% of the adjuvant group. Moreover, the peripheral blood examination of the neoadjuvant group displayed a significantly higher number of tumor-specific CD8+ T cells compared to the adjuvant group.
There are several more benefits that make neoadjuvant immunotherapy a suitable treatment option.
First of all, one of the goals of administering neoadjuvant ICIs is to reduce the size of the primary tumor and, as a result, to increase the chance of radical surgical resection, but also to control and ideally eradicate circulating micrometastases. In fact, activated T cells travel through the lymphatic system and the bloodstream both to the primary tumor and to micrometastatic sites to exert their tumor-killing effect [22]. The rationale for administering ICIs before surgery also lies in the integrity of the blood and lymphatic flow, which can lead the activated cells to the tumor site unhindered.
Moreover, the interaction of these drugs with the microenvironment of the tumor induces, among other things, a devascularization of the tumor itself, which may result in adhesions and fibrotic retraction. This effect may represent the downside of neoadjuvant immunotherapy, leading to a more challenging surgical field, much as happens after neoadjuvant chemotherapy.
In addition, administering drugs to a patient with an intact immune system gives earlier endpoints to assess the patient's response to therapy in terms of sensitivity or resistance to it. The data collected during this phase of the therapeutic pathway enable a more accurate choice of the most appropriate treatment regimen after surgery and ineffective agents can be stopped and substituted with alternatives.
Moreover, even if immunotherapy showed some adverse effects such as pneumonia, myocarditis, and neuromuscular toxicity [23,24] their impact seems significantly lower and better tolerated compared to the toxicity related to traditional chemo-radiation therapy.
Neoadjuvant immunotherapy may also shorten clinical trials and lead to a more diffuse and widely accepted use of surrogate predictors of overall survival, such as major pathological response (MPR) and pathological complete response (pCR).
These short-term efficacy indicators are increasingly used as primary endpoints in the majority of completed and ongoing clinical trials.
MPR, for example, has been found to correlate with long-term survival in patients with NSCLC treated with neoadjuvant chemotherapy, and its use is accepted as a reliable endpoint in this setting [25,26].
The use of these surrogates may also accelerate the approval of immunotherapy agents. They are defined as follows: MPR, the resected specimen contains ≤ 10% viable tumor cells (the endpoint most commonly used in the ongoing trials); pCR, the evaluation of the specimen and the regional lymph nodes does not detect any residual invasive cancer [25]. Unfortunately, pCR is rarely reached in NSCLC; thus it is a less feasible endpoint for assessing the efficacy of neoadjuvant immunotherapy.
The aim of this review is to examine the clinical results of the most relevant studies on neoadjuvant immunotherapy in resectable NSCLC and to discuss the emerging data from the most innovative ongoing clinical trials. We will also focus on the potential problems related to this new approach.
Materials and Methods
Clinical data and results were found by searching PubMed for articles only in the English language from 2018 to 2021. The keywords searched were neoadjuvant immunotherapy, early-stage NSCLC, resectable lung cancer. Moreover, clinicaltrials.gov was searched by inserting the words neoadjuvant immunotherapy in NSCLC. In this review, we decided to cite the more relevant studies after an accurate screening of the ones found.
The Results of Clinical Trials
The results of the most relevant clinical trials using neoadjuvant immunotherapy alone are depicted in Table 1. The CheckMate 159 by Forde and colleagues [27] explored for the first time in a prospective trial the efficacy of neoadjuvant immunotherapy for NSCLC. The study enrolled 22 patients with resectable (stage IB-IIIA) NSCLC to receive two cycles of nivolumab before surgery. The results were promising: 9 patients out of the 20 (45%) who received the planned therapy achieved MPR and none of them presented delay in surgery.
The Lung Cancer Mutation Consortium conducted the LCMC3 trial [28], where two cycles of atezolizumab were administered to 181 patients with stage IB-IIIA and selected IIIB disease. The primary endpoint was MPR, which was achieved by 30 patients (21%). Grade 3 or greater treatment-related adverse events occurred in 6% of patients and 22 (12%) did not undergo surgery. It is interesting to underline that the MPR rate of patients with PD-L1 expression above 50% was 33%, versus 11% in patients with expression below 50%.
The IONESCO trial [29] set as the primary endpoint the complete surgical resection (R0 according to RECIST 1.1 criteria) in patients with stage IB-IIIA NSCLC after neoadjuvant durvalumab. The study was stopped before the scheduled date because of a high 90-day postoperative mortality (4 patients, 9%). Notably, none of these deaths was related to the immunotherapy treatment.
Neoadjuvant immunotherapy has been and is being tested synergically with chemotherapy. Clinical trials evaluating neoadjuvant immunotherapy in combination with chemotherapy and with or without adjuvant therapy are reported in Table 2. The NADIM trial by Provencio and colleagues [30] assessed the safety and feasibility of neoadjuvant nivolumab plus carboplatin and paclitaxel in 46 patients with stage IIIA NSCLC. After surgery, one year of adjuvant nivolumab was administered too. A total of 89% of the study population underwent surgery and MPR was achieved in 83% of patients. After 18 months of follow-up, the progression-free survival rate was 87%. A total of 93% of patients presented treatment-related adverse events during neoadjuvant therapy, such as nausea, alopecia, fatigue, and neurotoxicity, but these did not lead to treatment interruption or delay in surgery.
Neoadjuvant immunotherapy is being tested not only in monotherapy or in combination with standard chemotherapy, but dual immunotherapy is being investigated too. The phase 2 randomized trial NEOSTAR [31], for example, used neoadjuvant nivolumab or a combination of nivolumab and ipilimumab in 44 patients with resectable NSCLC, stage I-IIIA. The primary endpoint was MPR and it was evaluated individually in the two arms of the study. Out of the 37 patients who underwent surgery, 8 of the 16 (50%) treated with nivolumab and ipilimumab achieved MPR, while the nivolumab arm presented a 24% (5/21) MPR rate. Compared with nivolumab alone, the combination of nivolumab and ipilimumab was also superior in terms of pCR (38% versus 10%) and it seems to enhance immunologic response and memory. These data suggest that combination immunotherapy could be more effective than single-drug immunotherapy, but may come with higher toxicity.
The Ongoing Clinical Trials
The majority of clinical trials investigating neoadjuvant immunotherapy are still ongoing and only partial results are being released. The current studies are focusing not only on single drug neoadjuvant immunotherapy, but also on neoadjuvant chemoimmunotherapy and on multimodality treatment with neoadjuvant and following surgery adjuvant immunotherapy. The endpoints are still heterogeneous and evaluate the efficacy of neoadjuvant immunotherapy through survival surrogates (MPR, pCR) and the safety and feasibility of ICIs.
The PRINCEPS trial enrolled 30 patients with clinical stage IA-IIIA resectable NSCLC [32]. The primary endpoint was the rate of toxicities and morbidities within one month of surgery in patients who received one cycle of neoadjuvant atezolizumab. None of the patients experienced a delay in surgery and 29 received complete resection (R0). In contrast with other trials, no MPR was observed, but this could be explained by the short interval between the infusion of atezolizumab and surgery, which took place 21 to 28 days later.
Moreover, the trial proved again the safety and feasibility of neoadjuvant immunotherapy, in fact, there was only one treatment-related adverse effect, a grade 1 parietal pain.
The NEOMUN trial is a single-arm monocentric study that aims to assess the safety and feasibility of neoadjuvant pembrolizumab in patients with resectable stage II-IIIA NSCLC [33]. The first clinical results concern 15 enrolled patients who, after completion of immunotherapy, underwent surgery with curative intent. In this phase, 13 patients received the scheduled immunotherapy and 4 (27%) of them reached MPR. Moreover, the study also assessed the clinical response through the decrease in PET activity of the tumor, which was detected in the same four patients.
Grade 2-3 treatment-related adverse events happened in five patients (33%), the overall postoperative morbidity was 7% and 30-day mortality was 0%. In conclusion, neoadjuvant pembrolizumab resulted as a feasible and safe treatment.
Shu and colleagues assessed four cycles of atezolizumab plus carboplatin and nab-paclitaxel in a neoadjuvant setting in 30 patients with stage IB-IIIA NSCLC [34]. The primary endpoint was MPR and, at the data cutoff, it was achieved by 17 patients (57%). In this phase II trial, the most common treatment-related adverse events (grades 3-4) were neutropenia and thrombocytopenia.
CheckMate 816 (NCT02998528) is a phase 3 randomized multicenter trial whose results have been recently announced. It randomized 358 patients with resectable NSCLC to receive three cycles of neoadjuvant nivolumab and histology-based platinum doublet chemotherapy or neoadjuvant chemotherapy alone. The primary endpoints are pCR and event-free survival (EFS). The group treated with neoadjuvant nivolumab reached 24% of pCR versus only 2.2% of the group matched to platinum doublet alone. The significant improvement in pCR is the first to be demonstrated in patients with NSCLC treated in a multimodality treatment with neoadjuvant immunotherapy and it is not related to PD-L1 expression. One of the secondary endpoints is MPR, which is reached in 36.9% of patients treated with nivolumab, while patients undergoing chemotherapy alone reached an 8.9% MPR rate.
Moreover, 83% of patients who received nivolumab underwent surgery and achieved a complete surgical resection (R0). Surgery-related and treatment-related adverse events were similar in both arms of the study. In conclusion, CheckMate 816 demonstrated that neoadjuvant chemo-immunotherapy does not affect the feasibility of surgery and meanwhile increases pCR.
Another phase 3 trial is the Aegean trial (NCT03800134), which is a double-blind multicenter study. It is still recruiting patients and it aims to randomize approximately 800 patients with stage II and III NSCLC to receive neoadjuvant durvalumab or placebo and concurrent platinum-based chemotherapy and adjuvant chemotherapy with durvalumab or placebo.
The primary endpoint is pCR, while secondary endpoints include safety assessments, MPR, and overall survival. The estimated study completion date is April 2024.
It is also worth citing the NeoCOAST trial (NCT03794544), which evaluates, in patients with stage I-IIIA NSCLC, the safety of neoadjuvant durvalumab alone or in combination with the novel agents oleclumab (MEDI9447), monalizumab (IPH2201), and danvatirsen (AZD9150). The primary endpoint is MPR and the secondary endpoints include feasibility of tumor resection and pCR. The study ended in January 2021 and its results will likely be published soon.
Discussion
An increasing number of trials are demonstrating the feasibility and safety of immunotherapy in neoadjuvant settings against NSCLC. Furthermore, many trials are still recruiting and their first results are encouraging.
On the other hand, the majority of the collected data use surrogate predictors of survival, while the ultimate goal is to assess the impact of neoadjuvant immunotherapy on the overall survival of patients. This lack of uniformity in clinical trial endpoints could lead to heterogeneous results that are difficult to compare. At the same time, there is an unmet need to discover new and more effective treatments for NSCLC, which is the leading cause of cancer-related mortality worldwide, and it could be counterproductive to wait for long-term endpoints of five years or more. This is why pCR and, more frequently, MPR are currently used to predict patients' response to neoadjuvant therapy, but they cannot provide the long-term disease monitoring that will be necessary to definitively validate the efficacy of neoadjuvant ICIs in NSCLC.
Despite the encouraging results from the clinical trials, there are several challenges still to solve.
For example, PD-L1 expression level, which is the main biomarker of ICIs, has to be further investigated, since studies have reported conflicting results on its role in predicting the curative effect of the drugs mentioned above. In fact, while in 2014 Velcheti and colleagues [35] reported that higher PD-L1 expression led to higher effectiveness of ICIs, LCMC3 and NEOSTAR showed treatment responses in both PD-L1-positive and PD-L1-negative NSCLC. Nevertheless, MPR in LCMC3 was significantly higher in patients with PD-L1 expression >50% than in patients with lower protein expression.
Timing and the model of neoadjuvant immunotherapy are still a matter of debate too. Clinical trials are simultaneously evaluating the efficacy of single drug neoadjuvant immunotherapy, a combination of different neoadjuvant ICIs and ICIs with standard chemotherapy, as well as adjuvant chemotherapy and maintenance immunotherapy.
In conclusion, the data collected show that ICIs induce promising degrees of MPR (up to 45% alone and up to 83% when combined with chemotherapy), indicating that induction immunotherapy is emerging as a new effective standard of care for patients with resectable NSCLC. The definition of the best schedule and of patient selection criteria remains fundamental to optimize therapeutic effectiveness and to avoid unnecessary therapy.
Design and evaluation of a biologically-inspired cloud elasticity framework
The elasticity in cloud is essential to the effective management of computational resources as it enables readjustment at runtime to meet application demands. Over the years, researchers and practitioners have proposed many auto-scaling solutions using versatile techniques ranging from simple if-then-else based rules to sophisticated optimisation, control theory and machine learning based methods. However, despite an extensive range of existing elasticity research, the aim of implementing an efficient scaling technique that satisfies the actual demands is still a challenge to achieve. The existing methods suffer from issues like: (1) the lack of adaptability and static scaling behaviour whilst considering completely fixed approaches; (2) the burden of additional computational overhead, the inability to cope with the sudden changes in the workload behaviour and the preference of adaptability over reliability at runtime whilst considering the fully dynamic approaches; and (3) the lack of considering uncertainty aspects while designing auto-scaling solutions. In this paper, we aim to address these issues using a holistic biologically-inspired feedback switch controller. This method utilises multiple controllers and a switching mechanism, implemented using fuzzy system, that realises the selection of suitable controller at runtime. The fuzzy system also facilitates the design of qualitative elasticity rules. Furthermore, to improve the possibility of avoiding the oscillatory behaviour (a problem commonly associated with switch methodologies), this paper integrates a biologically-inspired computational model of action selection. Lastly, we identify seven different kinds of real workload patterns and utilise them to evaluate the performance of the proposed method against the state-of-the-art approaches. The obtained computational results demonstrate that the proposed method results in achieving better performance without incurring any additional cost in comparison to the state-of-the-art approaches.
Introduction
The pool of virtually unlimited on-demand computational resources provided by cloud providers (CPs), and many attractive features of cloud computing, such as pay-as-you-go pricing and on-the-fly re-adjustment of hired computational resources (elasticity), are a perfect match for hosting web applications that are subject to fluctuating workload conditions [1,2]. The cloud's elasticity allows applications to dynamically adjust the underlying computational resources in response to the changes observed in the environment, thus enabling application service providers (SPs) to meet application demands and pay only for the resources that are necessary [3].
The existing research literature on cloud elasticity differs in various aspects, e.g. triggering behaviour (Reactive/ Predictive/Hybrid), scope (CPs/SPs perspective), dependency on metrics (CPU utilisation/Response time, etc.), and the implementation technique (Control Theory/ Machine learning/Rule-based, etc.). Despite such differences most of the existing methods can generally be grouped into Fixed or Adaptive categories based on their design and working mechanism to analyse their pros and cons as a whole [25].
The Fixed class refers to the family of all elastic methods that are designed off-line and remain fixed at runtime. On the other hand, the Adaptive class indicates methods that are equipped with an on-line learning capability that is responsible for adaptation at runtime in response to changes in the working environment. The Fixed approaches are simple, easy to design and better for systems with uniform workload behaviour, e.g. rule-based systems and fixed gain elastic feedback controllers. However, their performance degrades severely for systems with variable workloads due to the lack of adaptability at runtime. In contrast, the Adaptive approaches are more flexible due to on-line learning capabilities and they perform better in scenarios with slowly varying workload behaviour. However, they are also criticised for the additional computational cost caused by the on-line learning [26], long training delays, the associated risk of reducing the quality assurance of the resulting system and the impossibility of deriving a convergence or stability proof [25].
In contrast to the families mentioned above, this paper advocates a fixed-adaptive (also referred to as Hybrid by Gambi et al. [25]) approach, a method commonly associated with the biologically-inspired multi-model switching and tuning (MMST) methods. Using such an approach, an elastic method follows a Fixed design principle, but also achieves certain level of adaptive behaviour at runtime. The review of existing state-of-the-art elasticity research (Section 6) indicates that such an approach for implementing cloud elasticity has not received much attention.
Another important factor identified in the existing elasticity literature is the importance of addressing uncertainty-related issues, e.g. impreciseness in domain knowledge and noise in monitoring data. Jamshidi et al. [14,27] and Farokhi et al. [28] stressed the importance of considering uncertainty aspects while designing elastic controllers. However, despite this importance, the treatment of uncertainty in the context of cloud elasticity has not yet received much attention [28]. The methodology proposed in this paper is also a step forward in this direction.
This paper addresses the horizontal elasticity problem from a SP perspective and particularly focuses on contributing towards resolving the following issues in the existing elasticity literature: (1) The lack of adaptability and static scaling behaviour whilst considering completely fixed approaches; (2) The burden of additional computational overhead, the inability to cope with sudden changes in workload behaviour and preference of adaptability over reliability at runtime whilst considering the fully dynamic approaches; (3) The lack of considering uncertainty aspects while designing auto-scaling solutions; and (4) Lastly, the unavailability of solutions that facilitate qualitative elasticity rules to resolve the quantitative nature of the commonly used rule-based approaches. This paper investigates the synergy between the biologically-inspired multi-controller approach and fuzzy control system, to provide a holistic solution to address the aforementioned issues.
The rest of the paper is organised as follows. The next section provides the design of our proposed biologically-inspired cloud elasticity framework. The design, however, consists of two different modes, termed Hard switching and Soft switching; each mode is therefore explained in a separate section. Section 3 elaborates the customised settings used for experimentation and the results obtained with the Hard switching approach. Similarly, Sect. 5 presents the results obtained with the Soft switching approach. Section 6 comparatively summarizes the research undertaken in the field of cloud elasticity. Lastly, Sect. 7 concludes this paper.
2 Biologically-inspired elasticity framework: hard switching
The proposed multi-controller framework with fuzzy switching consists of an array of controllers, where each controller is designed to achieve better performance in a different situation, and the selection of a suitable controller is realised at runtime. The architectural diagram of the proposed control methodology can be seen in Fig. 1, which extends and builds on the classical feedback loop model.
The key idea behind the proposed framework is to divide the complexity of the overall system by constructing multiple fixed gain controllers, where each controller depicts a separate elastic policy that carries out scaling actions at a different intensity level. The design of the proposed methodology (or any switched method in general) involves the following two key challenges: (1) how to partition the system among multiple controllers, and (2) how to switch (or formulate) the final decision. Due to the lack of a standard approach for partitioning the system among sub-controllers [29], this research relies on an expert-oriented distribution of workload intensity into categories such as low, moderate and high. For each category, a system model is constructed, based on which a controller is designed. The final decision is carried out by the selection of a suitable controller at runtime using an intelligent switching mechanism, implemented as a fuzzy control system, formally called a Fuzzy Inference System (FIS), as represented in Fig. 1.
The proposed control method is responsible for the readjustment of the number of Virtual Machines (VMs) to maintain the average CPU utilisation of hired VMs running at that time. The proposed methodology incorporates three Fixed gain controllers termed Lazy, Moderate and Aggressive. In theory, the number of controllers depends on the adaptation and application scenario. Increasing the number of controllers facilitates more fine-grained control over cloud resources, however, it also increases the design complexity of the elastic method. Each controller depicts a different elasticity policy, and theoretically they can be implemented using any suitable technique.
The incorporation of fixed controllers with a switching ability enables the system to respond appropriately to changes in workload without the need for any on-line learning algorithm. Each of the controllers is designed to react differently in different situations. In this case, as their names specify, they represent three different scenarios, i.e. performing scaling actions at a slow, moderate or aggressive intensity level. The selection of one of these policies depends on the behaviour of the system at that point in time. The behaviour of the system can be identified using the latest status of the following three aspects: application performance, workload arrivals and resource utilisation. These aspects are represented as Response time, Arrival rate and Control error respectively in Fig. 1.
The System Monitor component of the proposed methodology is responsible for obtaining the latest status of the three parameters mentioned above. These measurements (as shown in Fig. 1) are provided to the FIS. The FIS then decides, using the collection of elastic fuzzy rules (Sect. 2.2), what level of intensity is needed for the readjustment of resources (VMs) to meet the desired performance objective (explained in Sect. 2.2.2). The output of the FIS is one of the employed controllers, which is then responsible for making the scaling decision.
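As a rough, language-agnostic illustration of this loop (the paper's actual implementation is Java-based on CloudSim and JFuzzyLogic), the Python sketch below shows how the monitored Response time, Arrival rate and Control error could drive the selection of one of the three fixed-gain controllers. The crisp thresholds inside select_policy are placeholders standing in for the fuzzy rules of Sect. 2.2; the gains are those derived later in Sect. 2.1.4.

```python
REFERENCE_UTIL = 55.0                                           # desired CPU utilisation (%)
GAINS = {"lazy": -0.06, "moderate": -0.2, "aggressive": -0.5}   # integral gains (Sect. 2.1.4)

def select_policy(arrival_rate, response_time, control_error):
    # Crude crisp stand-in for the fuzzy switching mechanism.
    if response_time > 0.6:                       # performance is poor -> react quickly
        return "aggressive"
    if abs(control_error) <= 0.1 * REFERENCE_UTIL:
        return "no-scaling"                       # utilisation close to the reference
    return "moderate" if arrival_rate > 20000 else "lazy"   # 20,000 rpm is an assumed cut-off

def control_step(cluster_size, cpu_util, arrival_rate, response_time):
    error = REFERENCE_UTIL - cpu_util
    policy = select_policy(arrival_rate, response_time, error)
    if policy == "no-scaling":
        return cluster_size
    # Integral control law: u(t) = u(t-1) + Ki * e(t), rounded to whole VMs.
    return max(1, round(cluster_size + GAINS[policy] * error))

print(control_step(cluster_size=10, cpu_util=80.0, arrival_rate=30000, response_time=0.9))
```

An over-utilised, slow cluster (80% CPU, 0.9 s mRT) is handled by the aggressive policy, which scales the cluster up sharply, while a cluster near the reference utilisation is left untouched.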
Feedback control
The design and development of the feedback control part of the methodology follows the process flow proposed by Antonio et al. [30]. This process flow consists of the following steps: defining the goal of the control methodology, identifying the control input, devising the system's model and, finally, developing, deploying and evaluating the control methodology. The details of the control system goal, control input, system model and control design in the context of our proposed methodology are provided in the following subsections, whereas the deployment and evaluation are discussed in Sect. 3.
Goal of control methodology
The goal of the control methodology is to adjust the number of VMs (also referred to as Cluster size in the rest of the paper) at runtime in response to changes in the incoming workload. In the context of a control system, based on the above goal description, CPU utilisation becomes the Measured output of the system and we have to identify the Reference input, i.e. the target CPU utilisation that results in achieving the desired performance level. For any given application scenario, the desired performance level is the acceptable level of performance, i.e. the mean response time (mRT), that the application owner desires to maintain for their application. In this paper, for the evaluation of the proposed method, we consider mRT ≤ 0.6 s as the desired performance measurement. Hence, the scaling mechanism will make changes to the system resources such that the application achieves mRT ≤ 0.6 s. However, response time is an application-level metric. Therefore, we need to identify the corresponding CPU utilisation level at which the system is able to maintain the application mRT ≤ 0.6 s.
The key reasons for using CPU utilisation as the system output are the following: (1) CPU utilisation is directly obtained from the monitoring Application Programming Interface (API) provided by the CPs; hence it does not require application-level monitoring efforts. (2) It is a system-specific metric and no runtime identification of its relation to an application metric, e.g. Response time, is required; hence it does not involve additional overhead at runtime.
(3) More importantly with respect to our methodology, we have already catered for an application-level metric (i.e. Response time) in the decision-making. Thus, using CPU utilisation as another metric strengthens the decision-making mechanism by taking into account the system's resource-utilisation perspective. Hence, the proposed methodology becomes hybrid, in contrast to most of the existing methods that rely on either application-level [14,31] or system-level metrics [11,32,33].
The measurement for Reference CPU utilisation can be obtained using system identification (SID) experiments by establishing a relationship between VM CPU utilisation versus performance. This experiment and all other such SID experiments are conducted using an extended version of a well-known cloud simulation tool named CloudSim [34].
The SID experiment records the measurements of CPU utilisation and mRT against several workloads that differ in the number of incoming requests, ranging from 50 requests per minute (rpm) to 950 rpm. Each measurement of CPU utilisation and mRT against the specified rpm is obtained from a sub-experiment, where the corresponding number of rpm is sent for 30 minutes to a system consisting of one VM. The arrival times of the job requests within a minute and the service time of each request are randomly assigned. The whole experiment is repeated 100 times and the average of each measurement is recorded. The obtained results are presented in Fig. 2.
It is evident from Fig. 2 that an increase in the number of rpm makes the mRT slower. The dashed line in Fig. 2 represents the desired performance measurement, and we are interested in the maximum rpm for which the obtained performance stays within the desired target, i.e. mRT ≤ 0.6 s. This criterion is satisfied at 850 rpm. However, in this case 13% Service Level Objective (SLO) violations were observed, which is not acceptable as per the employed performance objective (explained in Sect. 2.2.2). Therefore, we do not select 850 rpm and consider the next measurement, i.e. 800 rpm, that satisfies the criterion mentioned earlier. This means that, on average, one VM can serve a maximum of 800 rpm while maintaining the desired performance level. Analogously, the number of rpm has a similar effect on CPU utilisation, i.e. an increase in rpm results in an increase in CPU utilisation as well. For the Reference input, we record the corresponding CPU utilisation from Fig. 2 against 800 rpm, which is 55%. Thus, the control methodology is responsible for maintaining 55% as the Reference CPU utilisation.
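The selection just described can be expressed compactly. The following Python sketch uses made-up sample rows standing in for the SID measurements of Fig. 2 (only the 800 and 850 rpm rows reflect values quoted in the text), and assumes a 5% acceptable SLO-violation share, which is the threshold used later when reporting results:

```python
# (rpm, mean response time in s, SLO-violation share, CPU utilisation %)
sid_data = [
    (700, 0.42, 0.01, 48.0),   # placeholder row
    (750, 0.48, 0.02, 51.0),   # placeholder row
    (800, 0.55, 0.03, 55.0),   # mRT <= 0.6 s with acceptable violations
    (850, 0.58, 0.13, 58.0),   # mRT ok, but 13% SLO violations -> rejected
    (900, 0.75, 0.25, 63.0),   # placeholder row
]

TARGET_MRT = 0.6        # desired mean response time (s)
MAX_VIOLATIONS = 0.05   # assumed acceptable share of SLO violations

candidates = [(rpm, util) for rpm, mrt, viol, util in sid_data
              if mrt <= TARGET_MRT and viol <= MAX_VIOLATIONS]
max_rpm, reference_util = max(candidates)
print(max_rpm, reference_util)   # -> 800 55.0, i.e. the 55% Reference CPU utilisation
```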
Control input
The number of VMs is used as the Control input. This choice is obvious considering horizontal elasticity. Furthermore, we also perform an experiment to demonstrate the impact on mRT with a change in Cluster size. Figure 3 demonstrates the obtained results that indicate that increasing the number of VMs reduced the response time.
System modelling
This section identifies the system model that describes the relationship between the input (number of VMs) and the output (CPU utilisation) of the system. We follow the black-box modelling approach, which mainly consists of SID experiments to obtain training data, and of building and evaluating the model. The following subsections explain the process.
SID experiments design The SID experiments record the training data, consisting of input-output pairs of the system, by changing the control input in a systematic way during the experiment. During this experiment, we assume that historical information related to the system workload is available and, on that basis, we use an expert-based distribution of the workload into three categories, namely Low, Moderate and High. Using these categories, and following the principles of the gain-scheduling technique where workload-specific models are developed [35], we conduct three workload-category-specific experiments. During each experiment, the value of the control input is varied as a discrete sine wave (Eq. 1), where m represents the mean, A the amplitude and t the time step. The time period for each experiment is 540 min long. The difference between the experiments is the use of a different pair of (mean, amplitude) values and of a different workload. The coverage of the input values generated using Eq. 1 during the experiments can be seen in Fig. 4a-c, and the corresponding system output recorded in response can be seen in Fig. 4d-f respectively. In the case of the system output, the vibrations in the measurement occur because the majority of requests were cancelled when they were unable to complete their execution within a predefined maximum time (2 s).
System model and evaluation The Autoregressive Exogenous (ARX) model approach is employed to describe the relationship between the number of VMs and CPU utilisation. The following equation represents the general form of an ARX model:
y(k+1) = a_1 y(k) + ··· + a_n y(k−n+1) + b_1 u(k) + ··· + b_m u(k−m+1)  (2)
The above equation represents a single-input, single-output system. The u and y represent the input and the output of the system respectively. According to this equation, the output at the next time unit (k+1) depends on the n previous output values and the m previous input values. The a_k and b_k are the constant coefficients for each output and input value, whereas m and n represent the order of the model. We use a 1st-order ARX model of the following form, which can be derived from Eq. 2 by setting m = n = 1:
y(k+1) = a y(k) + b u(k)  (3)
The 1st-order model, in contrast to a higher-order model, relies only on the input and output from the previous time unit. The key reasons for selecting the 1st-order model are its simplicity and its ability to avoid over-fitting [35]. We have to find the values of parameters a and b of the above equation from the training data obtained from the SID experimentation. For this purpose, we employ the commonly used least-squares regression method to estimate the model parameters for all three experiments mentioned in the previous section, and the outcome is given in the following equations:
y(k+1) = 0.89 y(k) − 0.18 u(k)  (4a)
y(k+1) = 0.93 y(k) − 0.07 u(k)  (4b)
y(k+1) = 0.95 y(k) − 0.03 u(k)  (4c)
These models, after validation, can be used to design controllers, and the following two approaches are normally followed. Firstly, each model could be used to design a different controller, as it is obtained based on the average rate of each workload category and can thus be treated as a workload-specific model. Secondly, one model could be used to design different controllers, where each differs from the others based on the controller properties. We follow the second approach and use the model of Eq. 4a for controller design (explained in Sect. 2.1.4).
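For readers who want to reproduce the identification step, a minimal NumPy sketch of fitting the 1st-order ARX parameters a and b by least squares could look as follows; the y and u arrays are placeholder traces standing in for the recorded CPU-utilisation and VM-count signals of one SID experiment:

```python
import numpy as np

# Placeholder traces: y = measured CPU utilisation (%), u = number of VMs per time step.
y = np.array([50.0, 53.1, 57.4, 54.9, 49.2, 46.8, 51.5, 56.0])
u = np.array([4.0,  4.0,  3.0,  4.0,  5.0,  5.0,  4.0,  3.0])

# Build the regression y(k+1) = a*y(k) + b*u(k) from consecutive samples.
X = np.column_stack([y[:-1], u[:-1]])   # regressors at time k
t = y[1:]                               # targets at time k+1
(a, b), *_ = np.linalg.lstsq(X, t, rcond=None)
print(f"y(k+1) = {a:.2f} y(k) + {b:.2f} u(k)")
```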
The next step is to evaluate the model to quantify its accuracy. For this purpose, we employ a widely used measure known as the coefficient of determination (denoted by R^2). The value of R^2 can be calculated using the following equation:
R^2 = 1 − Σ_k (y(k) − ŷ(k))^2 / Σ_k (y(k) − ȳ)^2  (5)
The y in the above equation represents the actual system output value, ŷ indicates the predicted value computed by the model, and ȳ is the mean of the measured output. The R^2 value indicates the quality of the model, where a value ≥ 0.8 is considered to be in an acceptable range [35]. In our case, the value of R^2 is 0.96, which indicates a good fit. However, according to Hellerstein et al. [35], a larger value of R^2 can also be misleading in cases where data points are grouped together around extreme values. Therefore, to confirm the accuracy of the model, residual analysis plots are often recommended. Such a plot, in the context of our model, can be seen in Fig. 5, where the actual values of the output signal are plotted against the predicted values. It is evident from this plot that, apart from a few points, all points are grouped around the diagonal line, which indicates good accuracy of the model.
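A hedged sketch of this evaluation step is shown below: it computes R^2 for the model of Eq. 4a against a short placeholder validation trace (the data values are illustrative only, so the printed score will not match the 0.96 reported above):

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SSE / SST."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    sse = np.sum((y_true - y_pred) ** 2)
    sst = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - sse / sst

a, b = 0.89, -0.18                         # parameters of Eq. 4a
y = [50.0, 52.0, 55.5, 53.0, 48.5, 47.0]   # placeholder measured output
u = [4, 4, 3, 4, 5, 5]                     # placeholder input (number of VMs)
y_pred = [a * y[k] + b * u[k] for k in range(len(y) - 1)]
print(round(r_squared(y[1:], y_pred), 3))
```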
Controller design
The goal of the controller design step is to select the control law and any required parameters for the Controller component of the feedback control methodology. The control law determines the structure of the Controller component and describes how it operates [36]. In this paper, we adopt the integral control law for each of the three employed controllers, i.e. Lazy, Moderate and Aggressive. The key reasons behind this selection are its simplicity and its extensive use for similar problems, e.g. [9,[37][38][39][40].
The integral law can be defined using the following equation:
u(t) = u(t−1) + K_i e(t)  (6)
where e(t) is the control error that represents the difference between the desired and the measured output, i.e. e(t) = y_ref − y_t, and K_i is the integral gain parameter. In this paper, the number of VMs is the control input, whereas CPU utilisation is the measured output. The control error represents the difference between the desired CPU utilisation (i.e. 55%) and the measured CPU utilisation. The integral gain parameter indicates the aggressiveness of the controller and determines how fast the system will respond. The higher this value, the faster the system reacts. However, careful attention is required when deciding the gain of the controller, as a higher value of the gain parameter could cause oscillation and may lead the system to instability. All three employed controllers adopt the same integral law specified by Eq. 6; however, their integral gain parameters differ. The following equations represent the employed controllers:
u_L(t) = u_L(t−1) + K_i^L e(t)  (7)
u_M(t) = u_M(t−1) + K_i^M e(t)  (8)
u_A(t) = u_A(t−1) + K_i^A e(t)  (9)
The gains K_i^L, K_i^M and K_i^A are derived using the standard Root-locus procedure, which provides a systematic method to analyse and design feedback controllers. The Root-locus method requires the transfer function of the feedback control system. Such a transfer function can be obtained from the corresponding transfer functions of the different components of the feedback loop. In our case, the different components include the integral controller (represented by Eq. 6) and the target system (represented by one of the models described earlier in Sect. 2.1.3). The transfer function of the integral controller is given in Eq. 10, whereas the transfer function of the system model of Eq. 4a is provided in Eq. 11. Based on these equations, the transfer function of the entire feedback loop [35] is provided in Eq. 12.
Using the Root-locus method and taking into account the transfer function of the feedback loop (Eq. 12), we finalise the values −0.06, −0.2 and −0.5 for the K_i^L, K_i^M and K_i^A gains respectively. The analysis performed using Root-locus indicates that the system remains stable (it always reaches equilibrium) and accurate (the steady-state error reaches zero) with all the selected gains. The finalised values have a settling time of less than 10 time intervals, whereas the maximum overshoot recorded is less than 15%.
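To make the difference in intensity between the three policies concrete, the small sketch below applies the integral update of Eq. 6 with each of the finalised gains to the same hypothetical control error (the measured utilisation of 75% is an assumed example value, 20 points above the 55% reference):

```python
# u(t) = u(t-1) + Ki * e(t): the same error translated into scaling actions
# of different intensity by the three gains.
REFERENCE_UTIL = 55.0
GAINS = {"lazy": -0.06, "moderate": -0.2, "aggressive": -0.5}

measured_util = 75.0                       # hypothetical measurement
error = REFERENCE_UTIL - measured_util     # e(t) = -20

for name, ki in GAINS.items():
    delta_vms = ki * error                 # change in the number of VMs
    print(f"{name:10s} adds {delta_vms:+.1f} VMs (rounded: {round(delta_vms):+d})")
```

For this error the Lazy controller would add roughly one VM, the Moderate controller four, and the Aggressive controller ten, which is exactly the graded behaviour the switching mechanism exploits.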
2.2 The switching mechanism: a fuzzy control system
Overview
An application deployed in a cloud environment automatically inherits the uncertainty-related challenges associated with that environment [41]. Hence the elastic method, responsible for the resource management of the application, has to deal with these challenges. Examples of such uncertainties, summarised from [14,27,28,41,42], include impreciseness in domain knowledge, noise in monitoring data, inaccuracies in the performance model, delays caused by actuator operation and unpredictability in the workload. Jamshidi et al. [14,27] and Farokhi et al. [28] stressed the importance of taking uncertainty aspects into consideration while designing the elastic controller. Otherwise, scaling decisions often result in unreliability, as the available resources may fail to fulfil the requirements or may not be cost-effective [28]. However, despite this importance, the treatment of uncertainty in the context of cloud elasticity has not yet received much attention [28]. A step in this direction is the work of Jamshidi et al. in [14], where they proposed a fuzzy control system focusing mainly on two issues: (1) the quantitative nature of rule-based methods, addressed by introducing the idea of qualitative elasticity rules; and (2) the lack of consideration of the uncertainty arising from noise in the monitoring input data. Their fuzzy controller introduces elasticity rules of the following nature:
IF workload IS high AND responsetime IS slow THEN add 2 VMs
The elasticity engine executes such rules at runtime and makes decisions based on Arrival rate and Response time. The output of their controller is the number of VMs to be added or removed. Their approach facilitates a dynamic response based on the aforementioned two parameters by making scaling decisions with different intensity levels, and consequently it improves on the static scaling issue of the rule-based approaches. However, the output (number of VMs) is a pre-defined range of constant integers, and these numbers are set up based on the experience of experts rather than on a well-founded design approach. In contrast, our proposed approach relies on the systematic methods of control theory to compute the number of VMs. Moreover, our approach is hybrid in nature, i.e. it incorporates both performance- and capacity-based metrics, as opposed to their performance-based approach only. This paper complements and extends the work of Jamshidi et al. [14], aiming to develop a fuzzy control system that implements the switching mechanism of the proposed framework. The following subsections explain the design process of this switching mechanism.
The design process
The construction of a fuzzy system involves the following three steps: establishing domain knowledge, designing membership functions and composing fuzzy rules. The details of each of these steps in the context of our switching mechanism are provided below.
Domain knowledge The domain knowledge is concerned with the identification of the inputs and outputs of the system. The inputs specify the factors of the system that are important to consider for decision-making purposes. As mentioned earlier, the proposed method considers three different aspects of the system for decision-making. These aspects are the inputs of the fuzzy system; among them, the Control error is the difference between the measured and the desired CPU utilisation.
These inputs cover performance, disturbance and resource utilisation aspects in the decision-making mechanism.
Contrary to the fuzzy controller of Jamshidi et al. [14], which directly produces a pre-defined constant number of VMs as a scaling decision, the output of our fuzzy system is one of the employed controllers, which is then used to compute the scaling decision. The next step is to define a fuzzy set for each input and output (commonly known as fuzzy variables). The fuzzy set of each variable comprises the defined linguistic terms and the ranges of values assigned to them. Table 1 provides the definitions of all the linguistic terms for each fuzzy variable and their corresponding ranges, whereas a brief description is given as follows:
- The linguistic terms and the corresponding ranges for the Workload (i.e. Arrival rate) variable are adapted from the work of Jamshidi et al. [14], where the knowledge base is constructed using domain experts, i.e. architects and administrators. They constructed a fuzzy set of five linguistic terms for the Workload variable, namely Very low, Low, Medium, High and Very high.
We reduce them to three to minimise the number of rules and hence reduce the complexity. However, more fine-grained control over resources can be obtained by increasing the number of workload categories or the number of controllers.
- The linguistic terms of the Response time variable reflect the overall performance objective of the application, which can be defined by the SPs. In Table 1, we use the symbols b_1, b_2, b_3 and b_4 to represent the customisable aspect of these parameters. Jamshidi et al. [14], in contrast, distributed the Response time into five categories with values obtained from domain experts. However, considering that the performance measurement differs from application to application, the values of the linguistic terms of Response time are customisable to reflect the desired performance objective and have to be defined by the SPs. In the current settings of this paper, we adopt the values listed in Table 1 for evaluation purposes.
- The linguistic terms of Control error are obtained by distributing the Control error measurement into five categories. An increase in these categories can provide more fine-grained control; however, it will also increase the complexity of the proposed method. The ranges of these linguistic terms are obtained using a trial-and-error method, where various experiments are carried out using different ranges.
- The linguistic terms of the Controller variable are the possible outcomes. These terms depend on the number of controllers, which in this case is three. We also consider one more output, i.e. No-scaling, which specifies that no action is required. The ranges of these linguistic terms are set based on the approach adopted in [43], where no overlapping of the ranges is required because the final decision represents a range that corresponds to a single output rather than a numerical value.
Membership functions The next step is to define the membership functions that convert the crisp inputs into the corresponding fuzzy values. A membership function defines the degree of membership of the crisp input in its linguistic terms, in the range of 0 to 1. The design of the membership functions, adopted from Jamshidi et al. [14], uses triangular and trapezoidal types of function. These functions have the advantage of being simple and efficient in comparison with other types of membership functions [44]. Figure 6 represents the membership functions of our fuzzy control system.
Fuzzy rules The fuzzy rules describe the relationship between the inputs and outputs of the fuzzy control system. Each fuzzy rule, in this case, determines the type of controller that makes the scaling decision. The fuzzy rules are made of fuzzy logic statements and follow the if-then pattern. The fuzzy rules of the switching mechanism are made using the linguistic terms of the fuzzy variables explained earlier in Sect. 2.2.2. An example of such a rule is provided below:
IF arrivalRate IS high AND responseTime IS desirable AND controlError IS wePos THEN controller IS lazy.
In the above example, the Lazy controller is selected based on the values of Arrival rate, Response time and Control error. Such rules for an application scenario can be designed using the combinations of the linguistic terms provided for each parameter in the rule (see Table 1). Such rules can also be tuned for different situations using optimisation approaches. A full list of the rules employed for the experimentation conducted in this paper is provided in Table 2. These rules are designed using the following considerations: (1) select rules that react quickly if the application performance is poor; (2) if the application performance is desirable, then aim to reduce the system running cost; (3) aim to maintain the CPU utilisation around the desired reference value.
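The sketch below gives a minimal, self-contained flavour of how such rules can be evaluated: triangular memberships fuzzify the inputs, AND is taken as the minimum, and the controller with the strongest rule activation wins. The membership breakpoints and the two rules are made-up placeholders, not the actual values of Table 1 or the full rule base of Table 2.

```python
def tri(x, a, b, c):
    """Triangular membership with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def switch(arrival_rate, response_time, control_error):
    # Fuzzify the three inputs (placeholder ranges).
    wl_high = tri(arrival_rate, 20000, 45000, 70000)
    rt_slow = tri(response_time, 0.5, 0.9, 1.5)
    err_neg = tri(control_error, -40, -20, 0)    # utilisation well above the reference
    err_norm = tri(control_error, -10, 0, 10)    # utilisation close to the reference

    # Rule strengths (AND = min), grouped per output controller.
    strengths = {
        "aggressive": min(wl_high, rt_slow, err_neg),
        "no-scaling": err_norm,
    }
    return max(strengths, key=strengths.get)

print(switch(arrival_rate=50000, response_time=1.0, control_error=-25))  # -> "aggressive"
```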
Experimentation and computational results I
The experimental environment used for the evaluation is developed in the Java language and integrates a well-known cloud simulation environment called CloudSim [34] and an external Java-based library called JFuzzyLogic [45].
The following subsections explain the various aspects of experimentation and the obtained computational results.
Workloads
The commonly used approach to test an auto-scaling methodology is to evaluate its performance against different workloads, based on certain desirable criteria. Gandhi et al. [46] and Jamshidi et al. [14] evaluated their proposed elastic methods using workloads that follow different patterns. The key reason for using such an approach is to evaluate and analyse the performance of an elastic method in different scenarios. The workload patterns that they used include Quickly varying, Slowly varying, Dual phase, Tri phase, Big spike and Large variations. Similarly, Mao and Humphrey [47] used the Stable, Cyclic, Growing and On-off set of patterns. Each of these patterns represents a different class of applications [26]. This research also adopts the patterns mentioned above to analyse the performance of the proposed method. In this paper, we identify seven different workloads, shown in Fig. 7, each representing a single pattern or a combination of patterns. Amongst these, one is synthetically generated, whereas the remaining six are derived from real Internet-based sources, namely Wikipedia [48], the FIFA World Cup [49] and the WITS (Waikato Internet Traffic Storage) project [50]. All the derived workload traces are vertically scaled to a maximum of 60,000 rpm, and the number of arrivals per minute is obtained from the count of actual arrivals, except for the synthetically generated one. Furthermore, the service time of each job request is randomly generated between 100 and 500 ms to incorporate the stochastic behaviour of the incoming arrivals.
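A small sketch of this trace preparation, under the assumptions stated above, is shown below; the raw per-minute counts are invented placeholders standing in for one of the derived traces:

```python
import random

raw_trace = [1200, 3400, 9800, 15000, 22000, 18000, 7600]   # placeholder raw arrivals per minute
peak = max(raw_trace)
scaled = [round(x * 60000 / peak) for x in raw_trace]        # vertical scaling to a 60,000 rpm peak

# Each request receives a random service time between 100 and 500 ms.
service_times_minute_0 = [random.randint(100, 500) for _ in range(scaled[0])]
print(scaled, len(service_times_minute_0))
```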
Fixed gain feedback controller
We have used the Fixed gain feedback controller as one of the benchmark methods. The key reason is that our proposed method is an extension of such an approach, where we use multiple Fixed gain controllers simultaneously. The individual elastic controllers are termed Lazy, Moderate and Aggressive respectively, thus aiming to demonstrate the effect of using the same controllers independently versus using them collectively as in the proposed framework. The nature of the individual controllers, i.e. Lazy, Moderate and Aggressive, is in general similar to those used in related elastic methodologies such as [9,11,37,51]. The individual controllers are implemented following the proportional thresholding approach of [9], where the Reference input is considered as a range rather than a scalar value. This approach avoids unnecessary oscillations by restricting the controller from taking a decision if the measured output is within a certain range. In this paper, we consider a range of ±10% of the Reference input (55%), because it is the same as the range of the Normal linguistic term of the Control error fuzzy variable used in our proposed switching mechanism.
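A minimal sketch of this benchmark behaviour, assuming the dead zone is ±10% of the reference value (i.e. roughly 49.5-60.5% utilisation), is given below:

```python
REF, BAND = 55.0, 0.10   # reference utilisation and relative dead-zone width (assumed reading of +/-10%)

def fixed_gain_step(cluster_size, cpu_util, ki):
    if abs(cpu_util - REF) <= BAND * REF:      # inside the dead zone -> keep the cluster as is
        return cluster_size
    error = REF - cpu_util
    return max(1, round(cluster_size + ki * error))   # single fixed-gain integral action

print(fixed_gain_step(8, 58.0, ki=-0.2))   # within the dead zone -> stays at 8
print(fixed_gain_step(8, 75.0, ki=-0.2))   # overloaded -> scales up to 12
```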
RightScale: a rule-based approach
RightScale [5] is a third-party, commercially available auto-scaling approach, which is a rule-based method. In the RightScale method, each VM engages in a voting process, where every VM decides whether a scaling decision is required or not. The decision of individual VMs is based on the set of elasticity rules. The implementation of RightScale includes the setting of a decision threshold value for the voting process. For this purpose, the value 51% is used. This means that if just more than half of the VMs are in favour of the decision, then the action will be performed.
Otherwise, it will be ignored. Another important aspect of the RightScale implementation is the determination of the system metric to be used for setting up the rules. For this purpose, we use CPU utilisation as the system metric, based on its usage as the Reference input in the proposed method. The value used for thr_up is 55%, i.e. the desired Reference input of our proposed method, since, as we already know, the performance degrades when CPU utilisation becomes higher than 55%. The value for thr_down was obtained by trying different possible values (20%, 30% and 40%) and then selecting the one that produces the better result with respect to the evaluation criteria (explained in the next section). Another important configuration is the setting of values for s_a and s_r. For this purpose, we use the following four different settings: (1) s_a = s_r = 2, (2) s_a = 2, s_r = 1, (3) s_a = 4, s_r = 2 and (4) s_a = 10%, s_r = 5%. Lastly, the t in both of the above rules specifies the time period over which each rule's condition is evaluated.
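A hedged sketch of the voting step is given below; it omits the s_a/s_r persistence periods for brevity, uses 30% as one of the tried thr_down values, and treats each VM's own CPU utilisation as the basis of its vote:

```python
THR_UP, THR_DOWN, DECISION_THRESHOLD = 55.0, 30.0, 0.51

def vote(per_vm_util):
    n = len(per_vm_util)
    up_votes = sum(1 for u in per_vm_util if u > THR_UP)      # VMs voting for scale-up
    down_votes = sum(1 for u in per_vm_util if u < THR_DOWN)  # VMs voting for scale-down
    if up_votes / n > DECISION_THRESHOLD:
        return "scale-up"
    if down_votes / n > DECISION_THRESHOLD:
        return "scale-down"
    return "no-action"

print(vote([70, 62, 58, 40, 66]))   # 4 of 5 VMs above thr_up -> "scale-up"
```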
Evaluation criteria
The key objective of implementing cloud elasticity is to improve the utilisation of computational resources whilst maintaining the desired performance of the system and reducing its operational cost. This statement hints at the fundamental criteria, i.e. Performance and Cost, for the assessment of an auto-scaling mechanism. The brief details of each aspect in the context of this paper are as follows.
Service level objective (SLO) violations
We consider Response time as the criterion to measure the performance of the elastic method. The requirement regarding the desired performance objective in cloud computing is defined through an SLO specification. In this paper, we consider that each job request of the workload must be completed within a pre-defined desired time, i.e. 0.6 s. Thus an SLO violation is recorded if the desired Response time for a job request has not been achieved.
Cost
The Cost refers to the operational cost of the rented VMs. These VMs are used to execute the workload and each VM is associated with a cost per time unit. The total running time of all VMs is recorded for the entire experiment; this covers the time from when a VM starts to when it finishes execution, either as a result of a Scale-down action or when the experiment finishes. A rate of $0.013 per hour is applied to calculate the final cost, based on the Amazon pricing [52] for VM instances of the ''t2.micro'' type.
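The two criteria can be computed directly from the simulation logs; the sketch below shows one way to do so, with the data structures (a list of measured response times and a list of per-VM running times) assumed for illustration.

```python
def evaluate(response_times_s, vm_runtimes_hours, slo_s=0.6, rate_per_hour=0.013):
    """Compute the percentage of SLO violations and the total VM rental cost."""
    violations = sum(1 for rt in response_times_s if rt > slo_s)
    violation_pct = 100.0 * violations / len(response_times_s)
    cost = rate_per_hour * sum(vm_runtimes_hours)   # ''t2.micro''-style hourly pricing
    return violation_pct, cost

pct, usd = evaluate([0.35, 0.72, 0.58], [24.0, 12.5, 6.0])
print(pct, usd)    # about 33.3% violations, ~$0.55 for the rented VM hours
```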
Computational results and analysis
The benchmark methods as well as the proposed methods are implemented in the CloudSim environment. CloudSim is extensively used in cloud-related research activities for modelling and simulation of cloud computing systems and applications. We have used, and extended where necessary, its various functionalities, such as the scheduling strategies and the creation and deletion of VMs.
For the experiments, all VMs are identical and are considered as abstract servers that serve a specific purpose, e.g. act as web servers. Furthermore, for each particular method, i.e. the benchmark methods and the proposed methods, the following related aspects of the simulation environment remain the same:
- VM creation: The focus of our proposed method is the SP perspective, where the main concern is the management of rented VMs and not the underlying physical hardware that hosts them. Therefore, in this research work we are not considering aspects like the optimal placement of VMs on physical hosts, which is in itself researched as an independent problem. For simplicity of implementation, the default allocation and scheduling policies of CloudSim concerning the VM and Host related assignment and execution are used.
- VM deletion: In the case of a scale-down operation, the VM with the lowest number of jobs is selected for deletion. The deletion is, however, not immediate; the process waits until all of the VM's jobs have completed.
- Jobs allocation: Analogous to the VM and Host related allocation and scheduling policies, the assignment of incoming jobs to the already available VMs is handled through a round-robin policy. A minimal sketch of the allocation and scale-down selection policies is given below.
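The sketch assumes a simple in-memory model of the VM pool; the class and identifiers are illustrative and do not correspond to CloudSim APIs.

```python
from itertools import cycle

class VmPool:
    """Toy model of the round-robin job allocation and scale-down victim selection."""

    def __init__(self, vm_ids):
        self.jobs = {vm: 0 for vm in vm_ids}   # jobs currently assigned to each VM
        self._rr = cycle(vm_ids)               # fixed round-robin order

    def assign(self, n_jobs):
        """Assign incoming jobs to the available VMs in round-robin fashion."""
        for _ in range(n_jobs):
            self.jobs[next(self._rr)] += 1

    def scale_down_victim(self):
        """Select the VM with the lowest number of jobs; its actual deletion would be
        deferred until those jobs complete."""
        return min(self.jobs, key=self.jobs.get)

pool = VmPool(["vm-1", "vm-2", "vm-3"])
pool.assign(7)
print(pool.scale_down_victim())                # the least-loaded VM
```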
The computational results obtained from the experimentation can be seen in Fig. 8, in which labels such as rs_21 denote the RightScale variants with the different settings described above. The aggregated plots in Fig. 8 present a Cost versus Performance view of the overall experiment for each method. Some of these plots do not show results for a few of the methods; in such cases the number of SLO violations was recorded as 5% or more, i.e. higher than the desirable performance objective, so those results were not of interest and are excluded to improve the readability of the plots. The only exception to this criterion is the On-off scenario, where all methods result in 5% or more SLO violations except the proposed method, i.e. HS. The plots in the right column present the corresponding time series view of the number of SLO violations on an hourly basis for the three methods that obtained comparatively better aggregate results. This section briefly discusses each of the applied methods in light of the obtained computational results.
1. RightScale: It is observed from the obtained results that some settings of the RightScale method produce better performance in comparison to the other approaches, i.e. Lazy, Mod, Agg and HS. However, this better performance is obtained at a much higher cost. Such phenomena are only observed in the scenarios where transitions in the workloads are comparatively smooth, e.g. the Dual-phase, Cyclic and Slowly varying scenarios. In the other scenarios, where sharp changes occur in the workloads, e.g. the Large variations and On-off scenarios, the performance is comparatively poorer than HS and Moderate despite being expensive. A key reason is the underlying static scaling behaviour of the RightScale method, where a scaling action is always performed using a uniform quantity.
2. Aggressive: It is observed that the aggregated performance results obtained using the Aggressive approach in the Dual phase and Quickly varying scenarios are comparatively better than HS. However, the time series analysis of those scenarios indicates that the performance of the system is poor in certain hours, specifically when the arrival rate of the workload is low. The key reason for this behaviour is an inappropriate scaling intensity that causes too large a change in some cases, e.g. observe the time series view of CPU utilisation in Fig. 9.

The above discussion indicates that a uniform fixed policy is unable to cope with changing workload conditions. In contrast, the proposed Hard switching approach, consisting of a collection of the same policies together with an additional switching mechanism, results in improved system performance without an increase in the operational cost.
Biologically-inspired elasticity framework: soft switching
The hard switching approach described in the previous section has the potential to improve system performance in comparison to the benchmark methods. However, such methodologies are often criticised for an associated unwanted behaviour, termed bumpy transition, that could lead the system to an oscillatory state [35,43,53]. Figure 10 demonstrates the occurrence of such unwanted behaviour. A soft switching mechanism, in contrast to hard switching, has the possibility to select multiple actions rather than one best choice. The key benefits of such an approach include: (1) avoidance of singularity and sensitivity problems, (2) improvement of robustness and stability aspects and (3) elimination of chattering issues [54]. This section aims to explore the capabilities of a biologically (cognitively) inspired action selection process to implement soft switching behaviour, hence seeking the possibility of smoother (bumpless) transitions to improve the stability perspective.

Formally, action selection is the process of an agent deciding what to do next from a set of available actions, based on some knowledge of its internal state and the provided sensory information about the environmental context, in order to best achieve its desired goal [55]. Over a period of time, researchers have learnt that in the animal brain the problem of action selection is handled through a central switching mechanism [56,57]. This mechanism is implemented by a group of subcortical nuclei collectively referred to as the Basal Ganglia (BG) [56,57]. For a functional anatomy of the BG, refer to [55]. To incorporate such a mechanism into our framework, we integrate the well-established BG-based computational model of Gurney et al. [58,59]. The key advantages of this computational model include its biological plausibility and computational efficiency [60].

The block diagram of the enhanced framework (Soft switching) can be seen in Fig. 11. Comparing this diagram with the Hard switching approach, the following three differences can be observed: (1) the integration of the BG component, (2) the modification of the FIS so that it generates salience signals and (3) the derivation of the final output from the outputs of multiple controllers; the latter two changes are detailed in the following subsections. Several BG-based computational models have been proposed [58,59,61-63]. Amongst these models, we utilised the computational model proposed in [58,59]; however, any model can be used, as our aim is not to identify the best action selection or biologically-inspired computational model but rather to demonstrate the effectiveness of such an approach in the context of cloud elasticity.

Focusing on the Gurney et al. [58,59] computational model, the brain subsystems send excitatory signals that represent behavioural expressions to the BG. Each behavioural expression defines an action in the BG, and its strength is determined by its salience, which represents the activity level of its neural representation. These actions are mediated through the release of inhibitory signals. Thus in each iteration the functional model accepts a set of salience signals and produces a set of selected and unselected output signals. The functional model can select a maximum of one action (referred to as Hard mode), similar to the Hard switching approach described earlier in Sect. 2. Alternatively, the functional model also has the possibility to select multiple actions (referred to as Soft mode). In this research work, we are interested in the Soft mode of the functional model, where it results in the selection of multiple actions. For a detailed description of the functional model refer to [58,59].
The BG component, shown in Fig. 11, accepts three inputs, namely lazySalience, modSalience and aggSalience. These inputs represent the strength of selection for each controller (each depicted as an action). The values of these salience signals are computed by the FIS (details are provided in the next section).
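To give an intuition of the selection step, the toy sketch below contrasts a hard selection (a single winning action) with a soft selection that gates several actions; it is a deliberately simplified stand-in that does not implement the Gurney et al. BG model, and the floor value is an assumption.

```python
def hard_select(saliences):
    """Hard mode: at most one action (the strongest salience) is selected."""
    winner = max(saliences, key=saliences.get)
    return {a: (1.0 if a == winner else 0.0) for a in saliences}

def soft_select(saliences, floor=0.2):
    """Soft mode (toy version): actions whose salience clears a floor are partially
    selected with gating signals in [0, 1]; the rest are fully inhibited."""
    top = max(saliences.values())
    return {a: (round(s / top, 3) if s >= floor else 0.0) for a, s in saliences.items()}

s = {"lazySalience": 0.1, "modSalience": 0.8, "aggSalience": 0.5}
print(hard_select(s))   # only the moderate controller is selected
print(soft_select(s))   # moderate and aggressive are both partially selected
```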
The modified FIS
The BG-based computational model requires salience signals as inputs, so the first issue to be dealt with is the generation of these salience signals. The method used to generate them can make use of the system's internal state, various performance metrics or available sensory information [60]. Therefore, we have extended the FIS, used as the switching mechanism in the previous section, to generate the inputs (salience signals) for the BG component of the framework. The inputs of the modified FIS remain the same, i.e. Workload, ResponseTime and ControlError. However, the output is changed from one (Controller) to three: lazySalience, modSalience and aggSalience. Each of these outputs represents the salience strength for the selection of one of the three controllers. The details of the changes carried out are as follows:
1. Membership functions: The inputs of the modified FIS do not change, and therefore the corresponding membership functions of the input fuzzy variables remain the same. However, the output is changed; therefore, the Controller membership function is replaced with three new membership functions, one for each newly introduced output. Similar to the Controller membership function, we have used the basic triangular type for all the outputs. The membership function for each salience signal variable is of the form shown in Fig. 12.
2. Fuzzy rules: The fuzzy rules are responsible for generating the salience inputs, and the fuzzy rules described in the previous section are revised accordingly. The inputs of the rules are the same as in the case of Hard switching; however, the output is now formed using the linguistic terms (weak, average and strong) for each salience. An example of such a rule is provided below, and a minimal sketch of how such a rule could be evaluated follows it:
IF arrivalRate IS medium AND responseTime IS desirable AND controlError IS stPos THEN modSalience IS strong AND aggSalience IS average
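The sketch below evaluates the example rule with toy triangular membership functions; the breakpoints, the Mamdani-style min combination and the mapping of 'average' to half strength are assumptions made for illustration and are not the paper's Fig. 12 values.

```python
def tri(x, a, b, c):
    """Triangular membership function with breakpoints a <= b <= c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def example_rule(arrival_rpm, response_s, control_error):
    """IF arrivalRate IS medium AND responseTime IS desirable AND controlError IS stPos
    THEN modSalience IS strong AND aggSalience IS average."""
    medium    = tri(arrival_rpm, 15_000, 30_000, 45_000)   # assumed 'medium' breakpoints
    desirable = tri(response_s, 0.0, 0.3, 0.6)             # assumed 'desirable' breakpoints
    st_pos    = tri(control_error, 0.0, 0.10, 0.20)        # assumed small-positive error
    strength  = min(medium, desirable, st_pos)             # AND as min (Mamdani style)
    return {"modSalience": strength,                       # 'strong' at full strength
            "aggSalience": 0.5 * strength}                 # 'average' crudely halved

print(example_rule(arrival_rpm=28_000, response_s=0.35, control_error=0.08))
```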
Derivation of final output
As mentioned earlier, in terms of the adopted functional model we are interested in the mode that allows the selection of multiple actions. Hence the final decision, i.e. the number of VMs, is derived using the output signals returned by the BG component together with the outputs of the individual controllers. The following equation represents this derivation:

u(t) = (g_L · u_L(t) + g_M · u_M(t) + g_A · u_A(t)) / g        (13)
Here u(t) represents the final decision, i.e. the number of VMs. u_L(t), u_M(t) and u_A(t) represent the outputs of the individual controllers, i.e. Lazy, Moderate and Aggressive respectively; these outputs are computed as per the equations described in Sect. 2.1.4, i.e. Eqs. 7, 8 and 9 respectively. g_L, g_M and g_A are the output signals returned by the BG component; their values lie between 0 and 1 and they signify the proportion of each action. The denominator g represents the number of those output signals with a value higher than zero. Note that it is not always the case that more than one controller is selected.
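A minimal sketch of this combination step is given below; the dictionary keys and the example values are illustrative.

```python
def combine_outputs(controller_outputs, bg_gates):
    """Gate-weighted combination of the individual controller decisions, divided by
    the number of non-zero gating signals, as in the derivation above."""
    active = [a for a, gate in bg_gates.items() if gate > 0]
    if not active:
        return None                                   # no action selected by the BG
    weighted = sum(bg_gates[a] * controller_outputs[a] for a in active)
    return round(weighted / len(active))

u = {"lazy": 11, "moderate": 13, "aggressive": 16}    # VM counts proposed per controller
g = {"lazy": 0.0, "moderate": 1.0, "aggressive": 0.6}
print(combine_outputs(u, g))                          # blended final number of VMs
```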
Computational results II
We have used the same experimental settings and scenarios (i.e. controller gains and workloads) as in the case of Hard switching to evaluate the Soft switching approach. As discussed previously, Hard switching achieves better results than the benchmark methods; therefore, in this section we only compare the Soft switching approach against the Hard switching approach. We present and discuss the obtained computational results from two aspects: the overall performance and cost, discussed below, and the oscillatory behaviour, discussed in the next section.

Figure 13 shows the aggregated view of the results obtained using both approaches, i.e. Soft switching and Hard switching, represented as SS and HS respectively in the reported results. Considering the number of SLO violations, it is evident that the SS approach obtained a lower number of SLO violations than HS in each of the employed scenarios. On the other hand, the comparison of the cost perspective indicates a similar level of spending by both approaches. This demonstrates that the SS approach results in better performance than HS without increasing the operational cost of the system.

Figure 14 provides an insight into the performance of both approaches on an hourly basis; each plot of this diagram represents the result for one employed workload scenario. The analysis of these plots hints at the following: (1) The performance obtained using the SS approach is, in almost every hour of every scenario, either similar to that of HS or comparatively better, which indicates a higher potential to maintain better performance during the entire period of the experiment. (2) The SS and HS approaches behave almost identically in scenarios with sharp increases in workload, e.g. the 6th hour in the case of Cyclic, the 7th hour in the case of Large variations and the 7th and 19th hours in the case of On-off. (3) The SS approach performs comparatively better when the arrival rate of the workload remains low, e.g. the initial 5-hour period in the case of Dual-phase and the 15th to 18th hours in the case of On-off. This indicates that at times of low workload the decision of HS, due to its best-controller selection strategy, affects the performance more than that of the SS approach.
Oscillatory behaviour
The results presented in the previous section demonstrate the effectiveness of the Soft switching approach regarding the improvement of the overall performance. This section discusses the possibility of reducing the likelihood of bumpy transitions and oscillation in comparison to the Hard switching approach. Figure 15 presents the measured CPU utilisation recorded for the HS and SS approaches. The analysis of these plots hints at the following insights. The variation in the measured CPU utilisation obtained using SS is generally smaller than that of HS; this can be seen, for example, in the case of Tri-phase, where the variation for SS from the 4th to the 6th hour is smaller than that of HS. In addition, the red dashed line in each plot of Fig. 15 represents the mean CPU utilisation obtained using the respective method in each corresponding scenario. In all three of the given scenarios, the mean obtained using the SS approach is lower than that of HS, e.g. in the case of On-off the means are 52.56 and 54.19 for SS and HS respectively. This demonstrates that the SS approach maintains the CPU utilisation below 55% more often than the HS approach.
In light of the above discussions, we can claim that the BG-based Soft switching approach has a higher potential to reduce the number of SLO violations and hence results in better system performance. Moreover, compared with the HS approach, it has demonstrated the possibility of reducing the likelihood of bumpy transitions and oscillatory behaviour. The intuitive explanation for this improvement is the integration of the controllers (shown in Eq. (13)) in a biologically-inspired fashion, augmented with the BG process that facilitates the natural selection of actions and hence results in less 'bumping' at switching time [64]. Moreover, the computational model of [58,59] in particular has been successfully validated to avoid oscillation [62].
Related work and discussion
The proposed elasticity methods are developed using control-theoretical multiple controllers and a fuzzy control system. This synergy, on the one hand, enables us to address the inherent uncertainty-related issues of a cloud environment using the fuzzy control system; on the other hand, the systematic design of model-based feedback controllers helps in strengthening the reliability of the system. In this paper, we chose to address cloud elasticity from the Service Provider (SP) perspective. The key motivations behind this choice are that cloud-based applications are subject to varying workload conditions, and that Cloud Providers (CPs) lack control and visibility regarding application performance aspects, which makes it difficult to perform efficient scaling decisions [65]. In contrast, the SPs have full control and visibility of cloud resources using the monitoring and management APIs provided by CPs, as well as up-to-date knowledge of an application's status using custom or third-party tools. The proposed methods are hybrid in nature, and consider an application-level metric (Response time) as well as a system-level metric (CPU utilisation). Additionally, we consider the Arrival rate, which represents the incoming workload intensity level, in the decision-making process. The consideration of these three parameters empowers the proposed methodology to make an informed scaling decision, as opposed to the majority of the existing related approaches that rely on either application-level [14,31] or system-level metrics [11,32,33].
The existing Rule-based solutions are in general prevalent due to their intuitive and simplistic nature and, more importantly, their commercial availability [66]. Such approaches [4-8] are easy to design and well understood by system designers and administrators alike. However, such methods lack a formal systematic design process, as they are designed based on previous experience or by applying a trial and error approach [66,67]. Moreover, they are criticised for the difficulty in setting up the various thresholds of the rules and for their inability to cope with changing environment behaviour [14,26]. This is evident from the configurations and results of the RightScale approach discussed in Sect. 3.
The feedback control solutions [9-12, 33, 37, 68] follow the fixed gain design principle of control theory. Such fixed gain methodologies in general work well for systems that are subject to stable or slowly varying workload conditions [67]. However, due to the lack of adaptive behaviour at runtime, their performance suffers in scenarios where the operating conditions change quickly or when the environmental conditions and configuration spaces are too wide to be explored effectively [25]. The lack of adaptivity has been addressed by incorporating online learning algorithms such as linear regression [69], optimisation [70], Kalman filters [71] and reinforcement learning [72]. In general, such adaptive control methodologies have the ability to modify themselves in response to changing behaviour in the system environment, which makes them suitable for systems with changing workload conditions. However, they are also criticised for the additional computational cost caused by the online learning [26], the associated risk of reducing the quality assurance of the resulting system, and the impossibility of deriving a convergence or stability proof [25]. Moreover, they are unable to cope with sudden changes in the workloads.
Al-Shishtawy and Vlassov [73] addressed the elasticity problem using a two-level approach, where they utilised a combination of a Model Predictive Control (MPC) based feed-forward solution and a Proportional Integral (PI) based feedback control method. In such an approach, the feed-forward method follows a predictive strategy that takes scaling decisions for a longer time horizon in advance, whereas the feedback method is responsible for making gradual changes in a reactive style. Such two-step hybrid control solutions are effective; however, our current focus is on the efficiency of the elastic solution implemented at the second level, which follows a reactive strategy. Al-Shishtawy and Vlassov [73] utilised a fixed gain PI feedback controller that suffers from the various issues discussed earlier in this section, whereas the approach adopted in this paper uses multiple fixed gain controllers. Wang et al. followed a similar approach, i.e. the combination of feed-forward and feedback control; however, they focused on vertical elasticity.
The following proposals have also adopted an approach similar to the one employed in this paper. For example, Grimaldi et al. [32] used PID gain scheduling; their gain scheduler is an optimal controller that derives the gains using an optimisation-based tuning procedure, and the key issues of such an approach are similar to those of the adaptive methods discussed earlier in this section. Saikrishna et al., Qin and Wang, and Taneli et al. [77-79] followed a Linear Parameter Varying (LPV) approach, with CPU utilisation considered as the single scheduling parameter by Saikrishna et al. [77], whereas Qin and Wang, and Taneli et al. [78,79] rely on the arrival rate and service rate. Patikirikorala et al. [31] followed an MMST-based control solution; their method uses two different operating regions and consists of two different fixed gain controllers with an if-else switching mechanism based on Response time only. Saikrishna et al. [80], in contrast, used ten distinct operating regions and the Arrival rate as the switching signal.
Jamshidi et al. [14,15] highlighted the uncertainty-related issues and the idea of qualitative elasticity rules, using a fuzzy control system to address the issues of the Rule-based approach. The inputs to their method consist of the Arrival rate and Response time, whereas the output is the number of VMs to be added or removed. Their approach facilitates a dynamic response based on the aforementioned two parameters by making scaling decisions with different intensity levels, and consequently it helps avoid the static scaling issue of the Rule-based approaches. However, the outputs (numbers of VMs) are a pre-defined range of constant integers, and it is not clear how these numbers are set up. Therefore, it creates problems similar to those of the Rule-based approach, i.e. difficulty in setting up the threshold values of rules and the lack of a well-founded design approach. On the other hand, machine learning based control solutions that utilise either reinforcement learning [19,20] or neural networks [81,82] provide high levels of flexibility and adaptivity. However, such flexibility and adaptivity come at the cost of long training delays, poor scalability, slower convergence rates, and the impossibility of deriving a stability proof [25,26,83,84].
It can be concluded from the above discussion that the different elastic controllers, due to their underlying implementation techniques, have different pros and cons; hence there is no single best solution and the choice of a suitable approach depends on the requirements [25]. The research work carried out in this paper advocates the idea of a fixed-adaptive approach (also referred to as hybrid by Gambi et al. [25]) in contrast to either completely fixed or fully adaptive methods. The proposed elastic methodologies are implemented using the combination of a model-based control-theoretical approach and a knowledge-based fuzzy control system. This combination, in comparison with the existing fixed-adaptive methods [31,32,77-80], addresses the uncertainty-related issues and enables us to provide qualitative elasticity rules as well.
Conclusion
This paper investigates the horizontal elasticity problem from the SP perspective and proposes biologically-inspired auto-scaling solutions. The proposed elastic methods follow a reactive triggering approach, target Web applications, and aim to maintain the desired performance level whilst reducing operational cost. The proposed methods are implemented using a control-theoretical feedback technique and a fuzzy control system. The proposed approach integrates a functional model of the basal ganglia (BG) that augments the methodology to select the right set of controllers in a natural, biologically plausible way, thus reducing the likelihood of oscillation and enhancing the stability perspective of auto-scaling. We evaluate the proposed methodology using a large set of different real workload patterns against some of the existing elasticity methods. The experimental results demonstrate that the biologically-inspired method performs better in both evaluation perspectives (i.e. performance and cost) than all other approaches. Moreover, the Soft switching method reduces the bumpy transitions and oscillatory behaviour observed using the proposed Hard switching approach, thus having the potential to increase the stability of the underlying system. In future, we aim to extend the developed framework in the following ways: (1) a detailed theoretical convergence and stability analysis to formally evaluate the proposed approach against other state-of-the-art approaches, (2) enhancement of the switching rules to learn at runtime and (3) exploring the possibility of enhancing the framework by incorporating vertical elasticity as well.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Amir Hussain is a Professor at Edinburgh Napier University in Scotland. He obtained his B.E. in Electronic and Electrical Engineering (with the highest 1st Class Honours, with distinction) and his Ph.D. (in novel neural network architectures and algorithms for real-world applications), both from the University of Strathclyde in Glasgow, UK, in 1992 and 1997 respectively. His research interests are cross-disciplinary and industry focused, aimed at pioneering next-generation brain-inspired multimodal Big Data cognitive technology for solving complex real world problems. He has (co)authored over 300 publications, including over a dozen books, over 100 journal papers, and the world's first research monographs on the multi-disciplinary areas of cognitively inspired audio-visual speech filtering for multi-modal hearing aids, sentic computing for natural language processing, and cognitive agent based computing. He has led more than 50 major multi-disciplinary research projects, as Principal Investigator, funded by national and European research councils, local and international charities and industry. He has supervised more than 30 Ph.D.s to date, and serves as an International Advisor to various governmental higher education and research councils, universities and companies. He regularly acts as an invited keynote speaker, and has organised (as General/Organising co-Chair) over 50 leading international conferences to date (including the IEEE WCCI, IEEE SSCI, IJCNN, BICS and INNS Big Data conference series). He is an invited member of several IEEE TCs, including the IEEE SMC TC on Cognitive Computing and the IEEE CIS Emergent Technologies TC. He is Chapter Chair of the IEEE UK & RI Industry Applications Society Chapter, and founding co-Chair of the INNS Big Data Section. He is a Fellow of the UK Higher Education Academy (HEA), and Senior Fellow of the Brain Sciences Foundation (USA). More details on his research profile can be found on his homepage: https://www.napier.ac.uk/people/amir-hussain.
An evaluation of a mental health literacy course for Arabic speaking religious and community leaders in Australia: effects on posttraumatic stress disorder related knowledge, attitudes and help-seeking
Background: Australia is an ethnically diverse nation with one of the largest refugee resettlement programs worldwide, including high numbers of refugees with an Arabic speaking background. Evidence suggests that refugees can demonstrate high levels of psychological distress and are at a higher risk of developing mental illness such as posttraumatic stress disorder (PTSD) and major depressive disorder (MDD). Notwithstanding, research has also shown Arabic speaking refugees have lower levels of professional help-seeking behaviours, postulated to be related to mental health literacy levels.

Methods: A culturally sensitive mental health literacy (MHL) training program was developed and delivered in Arabic to Arabic speaking religious and community leaders using a 1-day training workshop format. An uncontrolled pre- and post-study design was used to provide a preliminary evaluation of improvement in PTSD-related knowledge, attitudes and help-seeking measures.

Results: A total of 54 adults were trained, with 52 completing the pre- and post-intervention questionnaire. Significant differences were found post-training in measures such as the ability to recognise mental health problems (p = 0.035) and an increased recognition of the role that medication can play in the treatment of PTSD (p = 0.00). Further, an improvement in negative attitudes such as a desire for social distance (p = 0.042) was noted, and participants reported more helpful strategies in line with promoting professional help-seeking following training (p = 0.032).

Conclusion: Our findings indicated the training led to an improvement in some measures of MHL. To the best of our knowledge, this is the first time that the MHL program has been tailored for Arabic speaking religious and community leaders, who assist refugees with an Arabic background. By equipping community leaders with the knowledge to better respond to mental health problems, the overall goal of improving the mental health outcomes of Arabic speaking refugee communities is closer to being realised.

Keywords: Refugee, Mental health, Mental health literacy, Mental health promotion, Community and religious leaders, Arabic speaking
Background
The world is facing an unprecedented challenge, with alarmingly high numbers of forcibly displaced persons. Recent figures reported by the United Nations High Commissioner for Refugees (UNHCR) indicate this number to be in the vicinity of 70 million [1]. Disturbingly, the ongoing conflicts in the Middle East have resulted in 10 million displaced people, most originating from Syria and Afghanistan alone [1]. Australia has one of the largest resettlement programs worldwide [2], providing a durable solution and protection to individuals through the Humanitarian Visa Program. In the 2018-19 period, the Australian Government allocated 18,750 places to refugees and others who were displaced as a result of conflict, persecution and human rights abuses [3].
Data from the Australian Bureau of Statistics (ABS) indicates that a majority of these ethnically diverse groups choose to resettle in major cities in Australia, predominantly in New South Wales (NSW) (33%) [4]. Notably, Metropolitan Sydney had the largest overseas-born population of all the capital cities [4], concentrated primarily in South Western Sydney. More specifically, since 2016 approximately 3000 Arabic speaking refugees have chosen to settle in the Local Government Area (LGA) of Fairfield alone. As such, South Western Sydney is currently facing an unprecedented challenge in meeting the health and mental health needs of these new arrivals.
Evidence suggests that refugees demonstrate high levels of psychological distress and are at a higher risk of developing mental illnesses such as posttraumatic stress disorder (PTSD) and major depressive disorder (MDD) [5]. Research has also demonstrated that Arabic speaking refugees have lower levels of professional help-seeking behaviours [6-8]. An important concept that may be related to professional help-seeking behaviours is mental health literacy. The term 'mental health literacy' (MHL) was introduced by Jorm and colleagues [9] as an extension of the concept of 'health literacy'. It is defined as "knowledge and beliefs about mental disorders which aid their recognition, management or prevention" [9] and includes: the ability to recognise specific disorders; knowing how to seek mental health information; knowledge of risk factors and causes, of self-treatments and of professional help available; and attitudes that promote recognition and appropriate help-seeking. Australia is a world leader in MHL research, and this research has been used, with encouraging results, to inform the conduct of community-based health promotion programs designed to improve public awareness and understanding of mental health issues and facilitate early, appropriate help-seeking among individuals with mental health problems [10]. By contrast, the MHL of culturally diverse communities is an emerging area of research [11]. Specific to this study is the evidence related to Arabic speaking refugee groups, which has demonstrated that differing levels of knowledge and beliefs about the nature and management of mental health problems may act as barriers to help-seeking [11,12]. It is postulated that culturally tailored mental health education and promotion programs addressing these barriers are required. Relatedly, research has demonstrated that the well-regarded mental health first aid (MHFA) program, delivered to culturally diverse groups such as the Chinese and Vietnamese communities in Australia, can lead to improvements in the MHL of these groups following such interventions [13,14]. Spurred on by such successes, a number of MHFA interventions have since been undertaken within refugee populations, including building the capacity of community workers to assist refugees with mental health problems [15] and improving the MHL and help-seeking behaviours of teens from culturally diverse backgrounds and the responsible adults who work with them [16].
Religious and other community leaders: the need for mental health literacy training to develop mental health allies
Another equally important support group working with refugees are religious and community leaders [17][18][19]. Importantly, research has demonstrated that clergy in Arabic speaking communities are highly revered and considered to be the first point of contact for people who are suffering from mental illnesses [18]. However, there is also evidence to suggest that such leaders may have poorer knowledge related to the recognition and management of mental illness despite their significant influence [18]. Thus improving their capacity to respond to refugees with mental health problems may play a role in promoting professional help-seeking where it is required.
In a community based study of ethnic minorities residing in south east London, it was noted that Asians and Black Africans were more likely to seek help from religious leaders compared to the general population, with rates of 15% and 18% reported respectively [20]. This trend has also been shown to apply to the Arabic community. A qualitative study conducted in Arabic speaking communities in Australia highlighted that religious leaders 'would be the person of choice' for advice and counselling in times of distress. Moreover, it was reported that the majority of participants (74%) perceived religious leaders to have spiritual healing powers [18]. Participants noted that approaching religious leaders to help explain and alleviate the confusing circumstances and symptoms arising from a mental illness was viewed as less threatening than approaching a psychiatrist [18]. However, and despite refugees' perceived benefits around the role of religious leaders, evidence suggests that such leaders can be poorly equipped to provide effective support to those with mental illness [19]. Moreover, evidence on mental health referral behaviour amongst religious leaders suggests that their knowledge of mental health systems can be problematic, highlighting an area for specific mental health educational interventions [17]. Nonetheless, such interventions are rare, with few intervention programs targeting the MHL of religious and community leaders to assist ethnic minorities with mental health problems [21,22]. Specifically, Subedi et al. [21] reported on the impact of a 1-day MHFA training program delivered to Bhutanese refugee community leaders based in the United States. A total of 58 participants completed a pre- and post-training survey, which was a culturally adapted version of the MHL instrument developed for MHFA training in Australia [21]. Surveys were completed immediately prior to and after the MHFA intervention. The assessment included a vignette describing a person suffering depression, followed by questions assessing knowledge and attitudes about mental health conditions and questions regarding post-resettlement stressors. Significant improvement was shown in the correct identification of mental health conditions, knowledge of treatment options for the mental health problem in the vignette and confidence relating to the provision of support for individuals suffering from mental health problems. However, no change was observed for stigmatising attitudes [21]. A second study, undertaken in Ghana, sought to measure the impact of a 3-h MHL programme on community leaders' knowledge about and attitudes toward people with mental disorders using a cluster randomised controlled design [22]. An adapted MHL survey was administered at pre-training and 12-week post-training points. Overall, the findings of the study indicated that, using a problem-solving Story-bridge approach, the MHL program led to some improvement in participants' knowledge about and attitudes toward people with mental disorders and was well received by the leaders. These studies suggest that using an MHL intervention program to target community and religious leaders is feasible and can be extended to other community groups.
In light of the reviewed literature, the current study sought to evaluate the impact of a 6-h MHL workshop targeted towards Arabic speaking religious and community leaders based in South Western Sydney. This preliminary study aimed to evaluate whether the training was successful in improving the recognition of PTSD-related problems amongst refugees and knowledge regarding treatments for such problems, reducing negative attitudes towards people with PTSD, and promoting professional help-seeking.
Participants
As this was a preliminary trial of a new culturally tailored MHL program, a power analysis was undertaken to inform and guide future directions. Assuming the training had a small effect size with a medium correlation between pre- and post-training scores, a sample size of 177 was identified as required; this number would give 95% power to detect a small effect size (d = 0.2) from pre- to post-training and follow-up with an alpha of 0.05. Alternatively, a sample size of n = 57 was identified as required if it was assumed that the training intervention would have a medium effect, under the conservative assumption that there would be no correlation between pre- and post-training scores. With these assumptions, this number would give 95% power to detect a medium effect size (d = 0.5) from pre- to post-training and follow-up with an alpha of 0.05.
A total of 54 participants were trained and 52 responded to the pre- and post-questionnaires. Participants were Arabic-speaking, self-identified religious and other community leaders residing in South Western Sydney, Australia, who represented a variety of organisations such as churches/mosques (n = 17), non-government (n = 21) and government organisations (n = 14). The training was promoted through religious centres and community networks, including agencies that provide aid to humanitarian entrants in South Western Sydney. Participants were volunteers who made contact with the workshop coordinator (HL) for enrolment. Individuals were eligible to participate if they were from an Arabic background, self-identified as religious or other community leaders, had contact with Arabic speaking refugee groups on a permanent basis and had a good understanding of the English language in order to complete the survey measures. Approval for this research was granted through the South Western Sydney Local Health District (SWSLHD) Research Ethics Committee (reference number 2019/ETH12040), with joint approval from Western Sydney University (H13411).
Intervention
The training intervention was a 6-h, classroom-style education program. It was developed by a working group comprising representatives from SWSLHD Health Promotion, the New South Wales Transcultural Mental Health Centre, Western Sydney Local Health District (WSLHD), and Western Sydney University. The program was developed in response to an identified need to improve the MHL of Arabic speaking religious and community leaders, an outcome emerging from a refugee mental wellbeing symposium held in SWSLHD in 2017, 'Refugee Journeys From Surviving to Thriving'. This forum noted that working with Arabic speaking religious and community leaders was an immediate priority given the large numbers of Arabic speaking refugees resettling in the South Western Sydney area.
A culturally-sensitive program was designed by the working group based on evidence generated from previous research, which had demonstrated that a duality of treatment beliefs and preferences exists amongst refugee groups [11,23,24]. Recognising the importance of cultural and religious beliefs in collectivist societies such as Arab communities was considered essential to ensuring the engagement and acceptability of the training by the target audience [25]. As such, care was taken to target areas of required knowledge such as recognition of mental health problems, treatment approaches utilised in Australia and challenging negative attitudes and stigma towards mental health problems, while ensuring this information was presented within a culturally valid framework. For example, when discussing depression and its symptoms, consideration was given to how it may present in Arabic speaking societies. Moreover, when treatment approaches for depression utilising the biopsychosocial approach were discussed, the positive messages from religious teachings were respected as a valid enhancement to promote recovery for some. Further, when delivering knowledge on mental health systems in Australia, the content was interwoven with the role that community and religious leaders play in being the first point of contact and their ability to promote professional help-seeking amongst their community. The program was delivered in Arabic (orally) utilising multiple formats, including PowerPoint presentations, video presentations and whole-group discussions to encourage interactive learning. In particular, videos highlighting the experiences of refugees and their mental health and featuring interviews with religious leaders were utilised because they demonstrated an excellent understanding of mental health and wellbeing. Teaching resources were developed to guide facilitation and ensure fidelity, and the training was delivered by two bilingual Arabic speaking mental health clinicians (YM and RS) with significant expertise in transcultural mental health. The program was delivered in two sessions, breaking for morning and afternoon tea breaks and one longer lunch break. Table 1 presents the units taught.
Statistical analyses
The effectiveness of the training program was evaluated using an uncontrolled, repeated measures pre- and post-design. Continuous variables are presented as means, whereas categorical variables are expressed as percentage (%) frequencies at the pre- and post-training points. McNemar and Wilcoxon Signed Ranks tests were conducted to analyse binary or continuous outcome variables, as appropriate. A p value of less than 0.05 was considered indicative of statistical significance for all comparisons. Statistical data management and analyses were carried out using the Statistical Package for the Social Sciences (SPSS 26.0 for Windows) [26].
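For readers who wish to reproduce comparisons of this kind outside SPSS, the following is a minimal sketch in Python; the data values are invented for illustration and the actual analyses reported here were carried out in SPSS.

```python
from scipy.stats import wilcoxon
from statsmodels.stats.contingency_tables import mcnemar

# Paired continuous scores (e.g. social distance totals) for the same participants
pre  = [14, 12, 16, 15, 13, 17, 14, 15]
post = [12, 12, 14, 13, 13, 15, 13, 14]
print(wilcoxon(pre, post))            # Wilcoxon signed-rank test for paired scores

# Paired binary outcome (e.g. correct label: yes/no) as a 2x2 pre/post agreement table:
# rows = pre (yes, no), columns = post (yes, no)
table = [[20, 5],
         [12, 15]]
print(mcnemar(table, exact=True))     # McNemar test for paired proportions
```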
Measures
A self-report survey assessing key aspects of MHL modelled on the survey first reported by Jorm and colleagues [9] and further developed by the authors for refugee populations [11] was utilised. The survey was administered pre-intervention and immediately post-intervention. Socio-demographic characteristics of participants were also collected.
Recognition of mental illness
Recognition of mental health problems was assessed using a culturally valid vignette that described a fictional male Iraqi refugee named 'Dawood'. Care was taken to ensure that the character met the criteria for PTSD as outlined in the 5th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) [27] while avoiding the use of medical terminology. The use of vignettes in MHL research has been demonstrated to be ecologically valid [28,29]. Following the presentation of the vignette, participants were asked in an open-ended format: 'What, if anything, do you think is wrong with Dawood?' Labels were coded as correctly identifying the 'PTSD label category' if they contained any of the following wording: 'PTSD'; 'post-traumatic stress disorder'; 'post-trauma/tic stress/disorder' and 'PTS'. Researchers were also interested in examining recognition of a 'general mental health problem', and labels coded as correct for this category were 'mental problem', 'mental illness' and 'mental disorder'.
Treatment knowledge: concordant, discordant and culturally informed treatment practices
Participants were next asked to rate the perceived helpfulness of different possible interventions (actions/activities, treatment providers and medications) for someone with Dawood's problem. For the purposes of the evaluation, interventions were classified into three categories: those concordant with evidence-based treatment approaches, those considered discordant or unhelpful, and those with a culturally informed background. Interventions were classified as being concordant with evidence-based treatment of PTSD using the framework developed by Reavley and colleagues [30]. These included the following: a typical family general practitioner (GP) or doctor; a psychologist; a psychiatrist; becoming more physically active and improving diet; reading about people with similar problems and how they have dealt with them; relaxation, stress management and meditation; psychotherapy focused on thoughts and behaviours (cognitive behavioural therapy); and education/psychoeducation on the problem. Discordant or unhelpful items were 'drinking alcohol to relax' and 'trying to deal with the problem alone'. Finally, items considered to comprise culturally informed care were: reading the Bible or Koran; having a prayer session with a clergy member; talking with a clergy member; attending a social club of the same cultural background; speaking with a close friend; and speaking with a family member. Scoring was undertaken by assigning one point for every intervention reported from the previous list, which resulted in a total possible score out of eight for concordant interventions, out of two for discordant items and out of six for culturally informed care.
Negative attitudes towards mental illness
Participants' negative attitudes towards those with mental illness were assessed using the modified Personal Stigma in Response to Mental Illness Scale [15,28,31]. Personal stigma was assessed by asking participants to respond to statements concerning the person described in the vignette using a 5-point Likert-type scale (1: 'strongly disagree' to 5: 'strongly agree'). For the purposes of analysis, the statements were divided into three components: the 'weak-not-sick', 'I would not tell anyone' and 'dangerous/unpredictable' subscales, as previously used and validated [32]. The 'weak-not-sick' subscale focuses on the belief that the person is not ill and can control their behaviour (e.g. 'Dawood could snap out of it if he wanted to'). The 'I would not tell anyone' subscale focuses on the belief that it is better not to tell anyone about mental illness (e.g. 'You would not tell anyone if you had a problem like Dawood's'). The 'dangerous/unpredictable' subscale focuses on the belief that someone with a mental illness is dangerous or unpredictable (e.g. 'Dawood's problem makes him unpredictable'). Higher scores indicated greater personal stigma for each component. The desire for social distance was assessed using five statements from the social distance scale developed by Link and colleagues [33], as used in previous research by the authors [15]. Participants were asked to consider whether/to what extent they would be pleased to spend time with Dawood in different situations, for example 'living next door to Dawood' and 'having Dawood marry into your family'. Responses to these items were scored on a 4-point Likert scale ranging from 1 ('Yes, definitely') to 4 ('Definitely not'). A total social distance score was calculated as the sum of responses to the individual items, with higher scores indicating a greater desire for social distance.
Providing support and helping advice
Participants were asked to 'Describe all the things you would do to help Dawood' in an open-ended format. De-identified responses were scored by a researcher (YM) who was blinded to whether they were collected pre- or post-training. A quality scoring system developed by the researchers was utilised to measure the quality of support and helping advice offered. Responses were scored across two categories: those that promote appropriate help-seeking, and those that promote engaging with the person with the mental illness and offering support. Specifically, 1 point per item (up to a maximum of 2 points) was awarded if the response mentioned encouraging Dawood to see a GP, psychologist, psychiatrist, social worker, community mental health services, or torture and trauma services such as the NSW Service for the Treatment and Rehabilitation of Torture and Trauma Survivors (STARTTS). Similarly, 1 point per item (up to a maximum of 2 points) was awarded if the response suggested engaging with the person in ways such as listening to the person; asking if they are OK; asking if they can help in any way; and offering practical assistance such as taking them to a GP, helping with filling out a form, writing a letter of support, or calling other support services on their behalf. These components were included as they are deemed to be best practice and are recommended in mental health care with refugee populations [34,35]. A total possible score out of four was awarded, with higher scores denoting a higher quality of support and helping advice.
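The scoring rule above is simple enough to express programmatically; the sketch below is an illustrative reading of it, with shortened keyword lists that are assumptions rather than the study's actual coding scheme.

```python
def score_advice(response_text):
    """Score free-text helping advice: up to 2 points for promoting professional
    help-seeking and up to 2 points for engaging with and supporting the person."""
    text = response_text.lower()
    help_seeking = ["gp", "psychologist", "psychiatrist", "social worker",
                    "community mental health", "startts"]
    engagement = ["listen", "ask if they are ok", "offer", "take them to",
                  "fill out a form", "letter of support", "call"]
    hs_points = min(2, sum(1 for k in help_seeking if k in text))
    eng_points = min(2, sum(1 for k in engagement if k in text))
    return hs_points + eng_points          # total quality score out of 4

print(score_advice("I would listen to him, ask if they are ok and take him to a GP"))
```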
Results
Training workshops were held between September and October 2019. Figure 1 displays participant flow through the research stages. Demographic data on the participants are presented in Tables 2, 3.
Recognition of mental illness
In response to the question 'What, if anything, do you think is wrong with 'Dawood'?, 51% of participants correctly recognised the problem as 'PTSD' prior to the workshop which increased to 61.5% following the intervention, however, this increase was not significant (p = 0.125). To assess whether there was an increase in participants being able to recognise the problem described in the PTSD vignette as a 'general mental health problem' over time, the frequencies of all other responses that represented a mental health related label ('mental illness' , 'mental problem' , 'mental breakdown' , 'mental issue') were included. Post-intervention there was a significant increase (p = 0.035), with the percentage reporting such labels increasing from 62.7 to 80.8%.
Treatment knowledge: concordant, discordant and culturally informed treatment practices
Participants' knowledge regarding treatments deemed to be concordant with evidence-based treatment of PTSD did not significantly increase post-intervention (6.
Medications
There was a significant increase in the endorsement of antidepressants as being helpful in the treatment of PTSD following training (60% initially versus 82.7%; p = 0.000).
Negative attitudes towards people with mental illness
There was a non-significant decrease in the belief that the character in the vignette was 'weak-not-sick' from pre- to post-intervention (
Support and helping advice
Following the intervention, there was a significant increase in participants scores on the quality of support and helping advice offered to the vignette character (1.90 versus 2.24; p = 0.032).
Discussion
The current study sought to undertake a preliminary evaluation of a culturally tailored MHL intervention for Arabic speaking religious and community leaders of refugee communities residing in South Western Sydney, Australia. Using a pre- and post-study design, this pilot trial sought to measure whether the intervention was effective in changing participants' knowledge of common mental health problems in refugee populations, their attitudes towards those with mental health problems and the advice provided to such individuals. A PTSD vignette based MHL survey was utilised to assess changes. Following the training, participants demonstrated a significant improvement in recognising the problem described in the vignette as a general mental health problem, and a greater understanding of the helpfulness of antidepressants in the treatment of PTSD. Further, post-intervention there was a reduction in the desire for social distance, a measure of negative attitudes, and the quality of helpful advice offered to those with a mental health problem also improved. However, not all measures demonstrated a significant improvement, with knowledge of treatment approaches considered concordant with evidence-based treatment for PTSD increasing post-intervention but not significantly. Similarly, while recognition that the vignette described a person with PTSD increased, it did not reach statistical significance.
The treatment gap
This program was developed in response to the evidence that posits religious and community leaders to be gatekeepers in refugee communities, providing mental health support and potentially facilitating professional help-seeking processes [18]. Relatedly, research has also demonstrated limited mental health service uptake amongst such individuals, even when presenting with severe levels of psychological distress [5,8]. Several factors have been postulated for this impaired help-seeking behaviour, including negative perceptions of mental health treatment and the fear of being considered 'crazy' within one's own community [18]. Such negative attitudes towards those with mental illness comprise an important aspect of MHL, which as previously stated has been demonstrated to influence professional help-seeking. Within Arabic speaking refugee communities, MHL has been found to be problematic, including that of its community leaders [11,12,18]. Recognising this need, SWSLHD health promotion partnered with Arabic speaking mental health professionals, refugee mental health expert academics and transcultural health promotion experts in order to develop a culturally tailored intervention.
Our aim was to deliver new knowledge on the mental health problems of refugees, mental health treatment approaches and Australian mental health systems; but to present this information respectfully alongside culturally and religiously informed practices such as seeking spiritual guidance and prayer. This dual stance recognises the importance of the client's knowledge and was deemed necessary in order to engage the target participants. It also represents a point of difference from other training workshops on seeking to improve mental health knowledge.
Recognition of mental illness
Recognition of mental health problems has been found to be crucial in facilitating help-seeking [36]. It is argued that once individuals are able to recognise a particular mental health problem this can activate a schema about the appropriate action to take [36]. Thus our finding that training led to improved recognition that the vignette described a 'mental health problem' is encouraging and necessary in order for leaders who are working with their community to better provide correct advice and guidance. This is even more so because of the evidence that such leaders are likely to be the first point of contact in this community.
Treatment knowledge-the role of antidepressants
Another important finding was that participants' knowledge of professionally aligned and recognised interventions for managing PTSD, such as the use of antidepressants [31], increased following training. Improved understanding of the role psychopharmacology may play in the treatment of mental health disorders can lead to a reduction in negative attitudes associated with such treatment. This knowledge is important given that research has shown there is limited awareness of the benefits of prescription medication in psychiatric care among the general public [37] and in ethnic minorities [38]. Further, increasing the understanding of the role antidepressants can play in treating some mental health disorders can have positive impacts on medication adherence through the action of community leaders, encouraging those they encounter in their community to use medication where it is needed.
Negative attitudes towards those with mental illness
Negative attitudes towards individuals with mental illness remain a significant problem in society despite considerable efforts to address them. Often, people with a mental illness report feelings of rejection brought on by fear-based exclusion processes such as the 'not in my backyard' response [33]. This can be even more pronounced in the Arabic speaking community [39]. As such, the finding that our intervention demonstrated a significant reduction in the desire for social distance, one measure of stigma towards those with mental illness (in this case PTSD), is heartening. It could be argued that achieving such changes in religious and community leaders can have a significant follow-on effect by virtue of the reach and influence such individuals hold. By taking a more positive view of interacting with those with mental illness, leaders are in better positions to respond in ways that will promote help-seeking rather than potentially causing such individuals to hide for fear of shame and exclusion.
Support and helping advice
The quality of advice provided to a person with mental illness was also found to significantly improve. Post-intervention, participants described actions they would take to assist 'Dawood' which aligned with best practices in mental health care with refugee populations [34,35]. In particular, there was an increase in promoting professional help-seeking, such as encouraging consultation with a GP or specialist refugee mental health services such as STARTTS. In addition, participants described useful approaches such as listening and asking if they are 'OK' and offering practical support such as writing letters of support and offering transportation to services.
Study limitations and strengths
A number of limitations should be noted. Firstly, not all the MHL measures demonstrated a statistically significant improvement post intervention. Notably, participants' ability to recognise the vignette as describing a person with PTSD did not significantly improve despite the fact that PTSD was one of the mental health problems discussed in the training. Further, knowledge on concordant treatment approaches and measures of personal stigma did not significantly improve. Issues with curriculum design or the facilitation of the program may have contributed to the failure to find significant changes. However, arguably it is more likely to be related to the small sample size, which limited statistical power to detect small effect sizes, particularly in light of the fact that all measures demonstrated non-significant improvement post intervention. Other limitations were the use of a quasi-experimental pre-post design, which precluded assessment of changes over time, and the recruitment of a convenience sample of volunteers, which limits how representative the current sample is of all Arabic speaking religious and community leaders. Using a PTSD based vignette to assess changes means that only recognition of and attitudes towards PTSD were evaluated. This limitation could be addressed in future with a larger sample size that would allow for multiple mental illness vignettes being presented. Finally, there was no evaluation of the impact this training had on the actual mental health or help-seeking behaviours of the Arabic speaking refugees that our leaders serve. In future, a larger sample size with a follow-up arm and incorporating qualitative measures on the curriculum content and its delivery would provide further insights. Additionally, the use of a control group who are provided matched non-mental health training, such as education on physical health conditions, and surveyed on such could allow for parsing out of training effects in general versus the content of our program. Strengths of this study include being the first program of its kind that aimed to improve the MHL of Arabic speaking religious and community leaders using a culturally sensitive approach. Targeted areas of required knowledge such as recognition of mental health problems, treatment approaches utilised in Australia and challenging negative attitudes and stigma towards mental health problems were delivered alongside the recognition of the importance of cultural and religious beliefs. Our training served to complement the role of religious and community leaders and equip them with knowledge to serve as mental health allies promoting professional help-seeking where it was needed. Nonetheless, a recommended direction for future research would be to undertake a study using a Delphi methodology to further substantiate whether culturally informed treatment approaches are recommended practices agreed to by a panel of experts in refugee mental health together with community and religious leaders. The delivery of the program in Arabic by experienced mental health clinicians to ensure engagement and better comprehension was also considered a strength and is likely to have been a significant factor in the program being well-received. While there has been an increased emphasis on cultural competency in mental health care and the delivery of evidence-based psychosocial services for ethnic groups [34], to date, culturally-appropriate psychoeducation initiatives are limited and those directed towards community and spiritual leaders even more so.
Finally, the processes and mechanism utilised in this study can potentially serve as a framework to shape culturally-appropriate MHL programs targeting leaders from other culturally-and-linguistically-diverse and or refugee groups in Australia.
Conclusion
To the best of our knowledge, this is one of the first culturally sensitive programs focussed on improving the MHL of Arabic speaking community and religious leaders. Our findings suggested that the intervention was able to improve some measures such as the desire for social distance and the quality of support and advice provided to those with mental health problems. Recommended next steps should be tailoring and modifying the program by addressing the identified limitations and then undertaking a further roll out and evaluation of this training. In conclusion, this program represents the necessary first step needed to equip community leaders with the knowledge to better respond to mental health problems. As such the overall goal of improving the mental health outcomes of Arabic speaking refugee communities is closer to being realised.
|
2020-08-21T13:27:24.222Z
|
2020-08-20T00:00:00.000
|
{
"year": 2020,
"sha1": "20aafe0bd172354c3ce132c341e6a3248072e34f",
"oa_license": "CCBY",
"oa_url": "https://ijmhs.biomedcentral.com/track/pdf/10.1186/s13033-020-00401-7",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "069aaf4a84206d9b31709a08d49293d4bbd6a0f3",
"s2fieldsofstudy": [
"Psychology",
"Sociology",
"Education",
"Medicine"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
}
|
49345210
|
pes2o/s2orc
|
v3-fos-license
|
Nonlinear Refraction of Peripheral-Substituted Zinc Phthalocyanines Investigated by Nanosecond and Picosecond Z Scans
The singlet and triplet excited-state refraction cross-sections of dimethyl sulfoxide (DMSO) solutions of ten zinc phthalocyanine derivatives with mono- or tetra-peripheral substituents at 532 nm were obtained by simultaneous fitting of closed-aperture Z scans with both nanosecond and picosecond pulse widths. Self-focusing of both nanosecond and picosecond laser pulses was observed in all complexes at 532-nm wavelength. The complexes with substituents at all of the four α-positions exhibit relatively larger refraction cross-sections than the other complexes. The wavelength dependence of the singlet refraction cross-section of a representative complex was observed to be non-monotonic in the range of 470–550 nm.
Introduction
In recent years, there has been a growing interest in materials with a large and fast nonlinear optical response for optical device applications [1][2][3]. Among the various organic materials investigated, metallophthalocyanines (MPcs) have attracted considerable attention because their structures can be easily modified without affecting their stability or altering their processability features [4][5][6]. Most recently, the photophysical properties of ten new zinc phthalocyanine derivatives with mono- or tetra-peripheral substituents (structures shown in Figure 1) were studied [7]. All complexes were found to exhibit reverse saturable absorption of nanosecond pulses at 532 nm and of picosecond pulses over a broad visible spectral range. The singlet and triplet excited-state absorption cross-sections were obtained by fitting nanosecond and picosecond open-aperture Z-scan data using a five-level model. It is found that the complexes with tetra substituents at the α-positions exhibit larger ratios of triplet excited-state absorption to ground-state absorption cross-sections (σ_T/σ_g) than the other complexes; and the ratio of singlet excited-state absorption cross-section to ground-state absorption cross-section decreases from 470 nm to 550 nm [7]. This study is quite intriguing; however, no information was obtained on the nonlinear refraction of these complexes from the open-aperture Z-scan study.
In order to gain an understanding of how the peripheral substituents influence the nonlinear refractive properties of these complexes, nanosecond and picosecond closed-aperture Z-scan measurements were carried out in this study. Using the previously measured values of excited-state absorption cross-sections, values of the singlet and the triplet excited-state refraction cross-sections were determined from the closed-aperture Z-scan data; the results are reported in this paper. In addition, the wavelength dependence of the singlet excited-state refraction cross-section over a range extending from 470 nm to 550 nm was studied using picosecond closed-aperture Z scans.
Experimental
The Z-scan experimental setup was described previously [7]. A Quantel Brilliant Nd:YAG laser operating at its second-harmonic output (4.1 ns, 10 Hz) and a picosecond EKSPLA PG 401 optical parametric generator (OPG) pumped by the third-harmonic output of an EKSPLA PL 2143A Nd:YAG laser (21 ps, 10 Hz) were used for the closed-aperture Z-scan experiments. The HWe⁻²M beam waist was measured by the knife-edge method to be 32 μm for the nanosecond Z scans and 40 μm at 532 nm for the picosecond Z scans. The sample solutions were placed in a 2-mm quartz cuvette for the ps measurements and in a 1-mm cuvette for the ns measurements; both path lengths were less than the Rayleigh ranges, fulfilling the thin-sample approximation condition. A small aperture (S = 0.22 for the ns measurement and 0.35 for the ps measurement at 532 nm, and S = 0.34–0.42 for the ps measurements from 470 nm to 570 nm) was placed before the detector at the far field for the closed-aperture experiments.
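As a quick consistency check on the thin-sample condition stated above, the Rayleigh ranges implied by the quoted beam waists can be compared with the cuvette path lengths. This back-of-the-envelope calculation is ours, not part of the original analysis, and assumes the quoted waists are 1/e² intensity radii.

```python
# Consistency check: Rayleigh range z0 = pi * w0^2 / lambda versus cuvette length,
# using the beam waists and 532-nm wavelength quoted in the text.
import math

wavelength = 532e-9                                   # m
for label, w0, cuvette in [("ns Z scans", 32e-6, 1e-3),
                           ("ps Z scans", 40e-6, 2e-3)]:
    z0 = math.pi * w0**2 / wavelength                 # Rayleigh range, m
    ok = "satisfied" if cuvette < z0 else "violated"
    print(f"{label}: z0 = {z0*1e3:.1f} mm, cuvette = {cuvette*1e3:.0f} mm -> thin-sample condition {ok}")
```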
In order to extract the nonlinear refraction cross-sections, the five-level model described previously [7] was used to fit the open-aperture and the closed-aperture curves. First, the values of the excited-state absorption cross-sections for the singlet and triplet excited states (σ_S and σ_T, respectively) were obtained by simultaneously fitting nanosecond and picosecond open-aperture Z scans at 532 nm, in which the effects of excited-state refraction are absent, using the five-band model that employed independently measured values of the excited-state lifetimes and of the triplet yield. The values of σ_S and σ_T obtained from the open-aperture Z scans, together with the measured values of the excited-state lifetimes and of the triplet yield, were then reinserted in a generalized five-band model that includes the effects of both excited-state refraction and excited-state absorption.
The effects of excited-state refraction are described mathematically by the nonlinear phase computed at each point on the exit face of the sample from the following equation:

Δφ(r, t) = (1/2) ∫ [σ_S^(r) n_S(r, z, t) + σ_T^(r) n_T(r, z, t)] dz,

where the integral runs over the sample thickness, and σ_S^(r) and σ_T^(r) are the refraction cross-sections of the singlet excited state and the lowest-lying triplet excited state, respectively. The number densities of molecules in these states, denoted by n_S and n_T respectively, are functions of both the position (r, z) in the sample and the time t; they are computed as described in [7]. For each ZnPc derivative, a single pair of refraction cross-section values (σ_S^(r), σ_T^(r)) was chosen to simultaneously fit both the picosecond and the nanosecond closed-aperture Z scans. In fitting closed-aperture Z scans at wavelengths other than 532 nm, the value of σ_T^(r) was assumed to be the same as that obtained from the simultaneous fitting of the corresponding 532-nm Z scans.
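The reported fits use the full five-level numerical model described above. Purely as an illustration of how a positive nonlinear phase produces the valley-peak closed-aperture trace discussed in the next section, the standard thin-sample, small-aperture, small-phase approximation of Sheik-Bahae and co-workers can be evaluated; this analytic form is a sketch only and is not the model used for the published cross-sections.

```python
# Sketch: normalized closed-aperture Z-scan transmittance in the small-phase,
# small-aperture approximation, T(x) = 1 + 4*dphi0*x / ((x^2 + 9)(x^2 + 1)),
# with x = z/z0. A positive on-axis phase shift (self-focusing) gives a
# pre-focal valley followed by a post-focal peak.
import numpy as np

def closed_aperture_T(z, z0, dphi0):
    x = z / z0
    return 1.0 + 4.0 * dphi0 * x / ((x**2 + 9.0) * (x**2 + 1.0))

z = np.linspace(-20e-3, 20e-3, 401)                    # sample positions, m
T = closed_aperture_T(z, z0=3e-3, dphi0=0.4)           # illustrative parameters only
print(f"valley at z = {z[T.argmin()]*1e3:.1f} mm, peak at z = {z[T.argmax()]*1e3:.1f} mm")
```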
Results and Discussion
The electronic absorption spectra of complexes 1–10 in dimethyl sulfoxide (DMSO) have been reported previously [7]. Figure 2 shows the absorption spectrum of complex 5, which is fairly typical of the spectra of all ten complexes. As noted in [7], all complexes possess a fairly broad "transparency window" between the B-bands and Q-bands (425 nm and 575 nm, respectively), and all complexes exhibit strong reverse saturable absorption for both nanosecond and picosecond laser pulses at 532 nm.
Figure 3 shows the closed-aperture Z-scan curves for complex 6, which are representative of all ten ZnPc derivatives. The valley-peak feature is indicative of a positive refraction nonlinearity (self-focusing). The profile is strongly asymmetric about a normalized transmittance of unity, above which the peak is almost completely suppressed. The asymmetry arises from the presence of an absorptive nonlinearity (reverse saturable absorption).
Our previously published open-aperture Z-scan results demonstrate that all of the ZnPc derivatives display significant reverse saturable absorption [7].
For compounds that possess both nonlinear absorption and nonlinear refraction, a simple division of the closed-aperture Z-scan curve by that obtained with the aperture removed (open-aperture) does not necessarily provide a good approximation of the curve that would be obtained with a closed-aperture Z scan of a material having no nonlinear absorption [8,9]. For this reason, the two-step procedure described in the experimental section was employed to obtain the fitting curves shown by solid lines in Figure 3. The solid curves in Figure 3 represent the best fit of the experimental Z-scan data for complex 6 and correspond to the values σ_S^(r) = (5.0 ± 1.0) × 10⁻¹⁷ cm² and σ_T^(r) = (1.0 ± 0.5) × 10⁻¹⁷ cm². Table 1 lists the absorption and refraction cross-sections of the singlet excited state and the triplet excited state for the ten ZnPc derivatives in DMSO. The magnitudes of the excited-state refraction cross-sections are all on the order of 10⁻¹⁷–10⁻¹⁶ cm². The three α-tetrasubstituted complexes 8, 9 and 10 all possess much larger triplet refraction cross-sections than either the mono-substituted complexes (1-5) or the β-tetrasubstituted complex (6), which is similar to the trend displayed by the triplet excited-state absorption cross-sections. This phenomenon could be explained by the geometry of the α- and β-tetrasubstituted complexes. As shown in Figure 4, the geometry-optimized structures for complexes 6 and 7 (obtained via B3LYP/3-21G level density functional theory (DFT) calculation in vacuum using Gaussian 09) are drastically different. The steric hindrance imposed by the substituents at the α-positions makes the substituents in complex 7 adopt a nearly perpendicular geometry to the phthalocyanine ring. This makes complex 7 bulkier than complex 6, which has the substituents at the β-positions.
The bulkiness of complex 7 would reduce intermolecular interactions, which prevents excitation quenching and stabilizes the excited state. Consequently, the nonlinearities of the complex are higher than those of complexes with a larger extent of intermolecular aggregation [10]. The same effect holds for complexes 8, 9 and 10. In contrast, the singlet excited-state refraction cross-sections do not vary significantly among these ten complexes.
The wavelength dependence of the excited-state refraction cross-section of complex 5 was studied at multiple visible wavelengths; it was found to exhibit self-focusing nonlinear refraction in the range of 470 nm to 570 nm. Assuming σ_T^(r) values at other wavelengths equal to the value obtained at 532 nm (6.0 × 10⁻¹⁷ cm²), the following values of σ_S^(r) at wavelengths from 470 nm to 550 nm were obtained by fitting the relevant ps Z scans of complex 5: 4.0 × 10⁻¹⁷ cm² (470 nm), 3.0 × 10⁻¹⁷ cm² (500 nm) and 5.0 × 10⁻¹⁷ cm² (550 nm). The parameters used for the fitting are summarized in Table 2. No significant dispersion of the nonlinear refraction cross-section was observed for 5 in the spectral region studied. Linear absorption by the sample creates a thermal lens whose rise time is given by w₀/c_s, where w₀ is the radius of the focal spot and c_s is the speed of sound in the solvent used (in the present case, 1493 m/s). However, since these rise times (26.8 ns for the picosecond Z scans and 21.4 ns for the nanosecond scans) are much longer than the corresponding pulse widths (21 ps and 4.1 ns, respectively), thermal lensing can be ignored in fitting the closed-aperture Z scans.
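The quoted thermal-lens rise times can be verified directly from the stated focal-spot radii and speed of sound; the short check below simply reproduces that arithmetic.

```python
# Check of the acoustic (thermal-lens) rise times t_rise = w0 / c_s quoted in the text.
c_s = 1493.0                                   # speed of sound in the solvent, m/s
for label, w0 in [("picosecond Z scans", 40e-6), ("nanosecond Z scans", 32e-6)]:
    t_rise = w0 / c_s                          # seconds
    print(f"{label}: w0 = {w0*1e6:.0f} um -> rise time = {t_rise*1e9:.1f} ns")
# Output: 26.8 ns and 21.4 ns, both far longer than the 21 ps and 4.1 ns pulse widths,
# so thermal lensing can indeed be neglected in the closed-aperture fits.
```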
Conclusions
The singlet and triplet excited-state refraction cross-sections at 532 nm of ten novel zinc phthalocyanine derivatives with mono- or tetra-peripheral substituent(s) in DMSO solution were obtained by using a five-level dynamic model to fit nanosecond and picosecond closed-aperture Z-scan data. These results, in combination with those for the singlet and triplet excited-state absorption cross-sections previously obtained from open-aperture Z scans, provide a complete picture of the excited-state optical nonlinearities of the ten zinc phthalocyanine derivatives. It has been demonstrated that both the number and the position of the peripheral substituents dramatically affect the triplet excited-state absorption and refraction cross-sections. In addition, the wavelength dependence of the singlet excited-state refraction cross-section of a representative complex was investigated over the range from 470 to 550 nm. No monotonic dependence of the refraction cross-section was observed.
|
2018-06-22T15:46:57.757Z
|
2011-06-28T00:00:00.000
|
{
"year": 2011,
"sha1": "a867defbf77a3573c0f922c9d447fd73d5421fd3",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=5516",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "a867defbf77a3573c0f922c9d447fd73d5421fd3",
"s2fieldsofstudy": [
"Chemistry",
"Physics"
],
"extfieldsofstudy": [
"Materials Science"
]
}
|
256813850
|
pes2o/s2orc
|
v3-fos-license
|
Uterine Rupture With Placenta Percreta Following Multiple Adenomyomectomies
Pregnancy following adenomyomectomy is challenging because uterine rupture or placenta accreta spectrum (PAS) is more likely to occur; however, optimal management has not yet been established. We herein present a case of uterine rupture with placenta percreta in a pregnant woman who underwent adenomyomectomy twice before pregnancy. Magnetic resonance imaging (MRI) was performed in the second trimester and imminent uterine rupture concomitant with PAS was suspected. The patient was immediately admitted to hospital for careful management. Although failed tocolysis forced delivery at 29 weeks of gestation, managed hospitalization allowed cesarean hysterectomy to be performed uneventfully. Extensive PAS was proven pathologically in the removed uterus. Pregnancies following multiple adenomyomectomies are considered to be high-risk. Therefore, a sufficient explanation of the risks associated with future pregnancies is needed, particularly following a second adenomyomectomy.
Introduction
Adenomyomectomy is one of the effective treatments for adenomyosis. Although dysmenorrhea and/or severe anemia may markedly improve after this procedure, the risk of uterine rupture in subsequent pregnancies is high [1][2][3]. Uterine rupture associated with a history of adenomyomectomy before pregnancy has two features: it occurs at various stages of gestation [4][5][6] and is often complicated with placenta accreta spectrum (PAS), particularly placenta percreta. Uterine rupture, with or without PAS, has been reported in pregnant women with a history of adenomyomectomy [3,7,8]. Obstetricians need to consider maternal symptoms, including uterine contractions and abdominal pain, to detect uterine rupture and/or PAS. Most cases achieve a good outcome for both mother and child with the above-described cautious management [2][3][4]7,9]. However, since uterine rupture is rare, risk factors for uterine rupture in pregnant women with a history of adenomyomectomy before pregnancy currently remain unclear. Furthermore, the optimal management of pregnant women following adenomyomectomy has not yet been established. We herein present a case of uterine rupture with placenta percreta in a pregnant woman who underwent adenomyomectomy twice before pregnancy. Magnetic resonance imaging (MRI) was performed in the second trimester and threatened uterine rupture concomitant with PAS was strongly suspected before delivery.
Case Presentation
A 40-year-old primiparous Japanese woman who had conceived by in vitro fertilization was referred to our institution, a perinatal medical center, at 10 weeks of gestation because she had undergone adenomyomectomy twice before pregnancy. Her history also included asymptomatic cholelithiasis. At 31 years of age, she underwent laparotomic adenomyomectomy for the first time, together with right ovarian endometrial cystectomy, for severe dysmenorrheal symptoms at another institution (the weight of the enucleated adenomyotic tissue was 104 g). Since dysmenorrheal symptoms recurred, the patient underwent laparotomic adenomyomectomy again, with bilateral endometrial cystectomy, at the same institution at 38 years of age (the weight of the enucleated adenomyotic tissue was 32 g). Adenomyotic tissue occupying the entire posterior wall of the uterus was removed each time, and the remaining anterior myometrial wall was used to form the uterus. The patient conceived 26 months after the second adenomyomectomy.
She was admitted to our institute for the treatment of hyperemesis gravidarum from 10 2/7 to 11 2/7 weeks of gestation. Transvaginal ultrasound revealed a subchorionic hematoma (SCH) of 28×16 mm near the endocervical os. The SCH extended up to 56 mm in length; however, abdominal pain was not observed at 13 weeks of gestation. Genital bleeding gradually stopped and the SCH also shrank. Since the adenomyomectomy scar was located on the posterior uterine wall and the placenta was located there, the presence of PAS had to be ruled out. However, ultrasound did not clearly show the presence or absence of PAS. MRI was performed at 22 weeks of gestation to evaluate the condition of the uterine wall and placenta because the patient was at high risk of uterine rupture. MRI revealed that the uterine wall was bulging outward from the bottom to the posterior wall of the uterus, suggesting the presence of PAS (Figures 1a, 1b). Furthermore, the posterior wall of the uterus strongly adhered to the colon. Although she had no abdominal pain or bleeding, the patient was considered to be in a state of threatened uterine rupture and, thus, was admitted to hospital for careful management from 23 0/7 weeks of gestation. We told her and her husband that we would perform hysterectomy at delivery due to placenta percreta, and we obtained their consent.
FIGURE 1: MRI findings at 22 weeks of gestation.
(a) T2-weighted MRI revealed that the uterine wall from the bottom to the posterior wall of the uterus was thinning and bulging outward (arrows). (b) A part of the maternal side of the placenta was exposed outside the posterior uterine wall, suggesting the presence of placenta percreta (arrowheads).
She was managed with prophylactic tocolysis by ritodrine hydrochloride and bed rest; however, uterine contractions and shortening of the cervix (cervical length: 25 mm) were observed at 26 weeks of gestation. Tocolysis was enhanced by increasing ritodrine hydrochloride. Transabdominal ultrasound failed to identify the myometrium of the left side of the uterine fundus at which the placenta was located and showed multiple placental lacunae with hypervascularity. Although the patient was stable with tocolysis and bed rest, the preterm premature rupture of membranes (pPROM) occurred at 29 1/7 weeks of gestation. The fetus was in breech presentation and uterine contractions were increasing; therefore, an emergency cesarean section was performed under spinal anesthesia on the same day, yielding a female infant (1353 g, APGAR score: 7/8 {1/5 min}, umbilical artery pH: 7.408, base excess: -2.9 mmol/L). The posterior uterine wall was very thin and bulging outward with extensive adhesion of the mesentery of the sigmoid colon. In addition, a part of the maternal side of the placenta was exposed outside the posterior uterine wall. We considered it difficult to preserve the uterus. Since the placenta was partially detached and external bleeding had increased, a total hysterectomy was performed after the rapid dissection of adhesions between the uterus and sigmoid colon under general anesthesia. Intraoperative blood loss was 6060 mL and allogeneic blood transfusion (20 units {U} of red blood cells and 14 U of fresh frozen plasma {FFP}) was performed. The patient was admitted to the intensive care unit (ICU) for postoperative systemic management. She was transfused with an additional 4 U of FFP in the ICU and did not develop disseminated intravascular coagulation; therefore, she was returned to a maternity ward on the second postoperative day. The removed uterus showed that more than half of the placenta adhered to the left posterior side of the uterine fundus (Figures 2a-2c). Pathological findings revealed that most of the adhered part was placenta increta, while the remainder was placenta percreta. The maternal postoperative course was uneventful, and she was discharged on postoperative day 11. The neonate was admitted to the neonatal ICU. No obvious malformations were observed in the neonate. Although the neonate was incubated and managed for transient tachypnea of the newborn, the subsequent course was uneventful.

FIGURE 2: (a) A part of the maternal side of the placenta was exposed outside the posterior uterine wall (arrow). (b and c) The majority of the placenta was adherent to the left posterior side of the uterine fundus and the uterine wall at this location was thinning.
Discussion
This is the first detailed case report of pregnancy following multiple adenomyomectomies. MRI was performed at 22 weeks of gestation, which revealed the thinning and outward bulging of the uterine wall, leading to the diagnosis of imminent uterine rupture. The patient was immediately admitted to hospital for careful management to prolong the pregnancy period and prepare for delivery.
Adenomyomectomy performed before pregnancy increases the risk of a number of adverse events in subsequent pregnancies, including uterine rupture. In adenomyomectomy, when adenomyotic tissue with an extensive, diffuse, and complex distribution within the normal uterine myometrium is resected as much as possible, the uterus is reformed with the remaining normal myometrium [2,10]. Therefore, the capacity of the uterus decreases and its tolerance for the uterine contents of pregnancy is reduced. Furthermore, due to multiple suture points, the remaining uterus that is formed may be fragile. Numerous cases of uterine rupture after adenomyomectomy have been reported to date [1,2,[4][5][6][11][12][13]. In contrast to myomectomy, uterine rupture after adenomyomectomy may occur at any time of pregnancy [4][5][6]. Furthermore, adenomyomectomy increases the risk of PAS in the subsequent pregnancy. When trying to resect as much adenomyotic tissue as possible, the endometrium often has to be opened. Where the endometrium is then missing, excessive trophoblast invasion into the myometrium can occur, increasing the risk of PAS in subsequent pregnancies. If the fallopian tubes are removed with the resection of sufficient adenomyotic tissue, assisted reproductive technology is required for subsequent pregnancies. This may be another factor that increases the risk of PAS. Moreover, if total hysterectomy is required due to PAS with massive bleeding, adhesions between the uterus and surrounding tissues increase the difficulty of this procedure. When partial detachment of the placenta occurs concomitantly with PAS, the uterus needs to be removed as soon as possible due to continuous bleeding from the site of detachment. As in the present case, patients with adenomyosis are often complicated by uterine endometriosis, which can cause extrauterine adhesions. Due to these risks, possible uterine rupture, which may occur at any time during pregnancy, needs to be considered in pregnancies following adenomyomectomy. In addition, the delivery needs to be managed by a multidisciplinary team including obstetricians, pediatricians, anesthesiologists, and surgeons. In principle, therefore, pregnancy is not recommended for women who have undergone adenomyomectomy. In fact, however, an increasing number of women are becoming pregnant after adenomyomectomy.
The present case underwent adenomyomectomy twice before pregnancy. Furthermore, she had uterine endometriosis. The above-described risks (i.e., uterine rupture, PAS, and adhesion to the surrounding tissues) were therefore very high. Although MRI at 22 weeks of gestation suggested imminent uterine rupture and the presence of PAS, we retrospectively considered the PAS lesion at the left posterior wall of the uterus to have formed much earlier. Persistent genital bleeding and SCH in early pregnancy also suggested this. MRI could have been performed earlier (e.g., in the early second trimester), although it is unclear whether this would have affected the management of this patient. Fortunately, neither intraperitoneal bleeding nor overt rupture of the uterus occurred during pregnancy. Although pPROM and failed tocolysis forced delivery at 29 weeks of gestation, managed hospitalization allowed cesarean hysterectomy to be performed uneventfully with sufficient manpower.
Pregnancies following multiple adenomyomectomies are rare. Only two pregnancies after multiple adenomyomectomies have been reported to date; both involved women who had undergone adenomyomectomy twice, and both were published in Japanese [14]. We summarized these case reports in Table 1.
Both cases resulted in uterine rupture. One woman miscarried at 16 weeks of gestation and the other had a live baby at 31 weeks of gestation. Neither delivery occurred after 34 weeks of gestation. The latter case required hysterectomy due to placenta percreta, similar to the present case. On the other hand, when publication bias is considered, good perinatal outcomes following multiple adenomyomectomies may have occurred but simply not been reported. There are currently no guidelines for the management of pregnancy following adenomyomectomy. In our institute, we admit pregnant women after adenomyomectomy to hospital for careful management after 22-25 weeks of gestation even if they are asymptomatic. If uterine contractions become frequent, tocolysis is initiated [9]. However, it remains unclear whether hospitalization with bed rest and tocolysis prolongs pregnancy and decreases the risk of an adverse maternal event.
Conclusions
Pregnancies following multiple adenomyomectomies are at high risk of uterine rupture and PAS. Adenomyomectomy is a good treatment option for adenomyosis and may become more widespread in women who wish for future pregnancy. A sufficient explanation of the risks associated with future pregnancy is needed, particularly after a second adenomyomectomy. Although it has not yet been clarified whether bed rest and tocolysis prolong pregnancy, hospitalization may be useful for closely monitoring the patient's condition.
Additional Information Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
|
2023-02-13T16:04:59.144Z
|
2023-02-01T00:00:00.000
|
{
"year": 2023,
"sha1": "1bcec299d114e26721b788ee231f7d3c63261238",
"oa_license": "CCBY",
"oa_url": "https://assets.cureus.com/uploads/case_report/pdf/136849/20230211-22277-tp0m6d.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f867834c44b9c09f0374299ca1bccfff85fb0fd5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
}
|
9331233
|
pes2o/s2orc
|
v3-fos-license
|
Morphine enhances renal cell carcinoma aggressiveness by promoting survivin levels
Abstract Background: Morphine is an opioid analgesic drug often used for pain relief in cancer patients. However, there is growing evidence that morphine may modulate tumor growth, progression and metastasis. Unfortunately, the results obtained by these studies are still contradictory. Methods: In this study, we investigated the effect of morphine on the human clear cell renal cell carcinoma cell lines 786-O and RLC-310 and whether morphine affects tumor growth in these cells. Cell proliferation was determined by MTT assay, and cell proliferation, migration and invasion assays were performed. Immunofluorescence staining and Q-PCR were used to determine Survivin expression. Results: Morphine enhanced the proliferation of 786-O and RLC-310 cells and promoted the growth and aggressive phenotype of 786-O and RLC-310 cells in vitro through Survivin-dependent signaling. Conclusions: Our data showed that morphine promotes RCC growth and increases RCC progression via over-expression of Survivin.
Introduction
Clear cell renal cell carcinoma (ccRCC) is the most common primary tumor arising from the kidney in adults. 1 Approximately 10-28% of ccRCC patients will develop local recurrence or distant metastatic RCC (mRCC) after curative nephrectomy. 2,3 mRCC often causes pain and discomfort, especially in advanced stages of the disease. Therefore, the experience of pain in cancer patients is widely accepted as a major threat to quality of life, and the relief of pain has emerged as a priority in mRCC care. The principles of pain management should be the same as those used for other cancer-related pain, which includes the vigilant assessment of the pain and active pain therapy commensurate with cancer pain treatment guidelines. Opioids, such as morphine, are the most powerful analgesics and have been the most frequently used to relieve pain in patients with cancer metastasis, including mRCC. However, emerging evidence has shown that morphine has extra-analgesic effects that appear to alter tumor progression. [4][5][6][7][8][9][10] Morphine produces strong analgesic effects by stimulating opioid receptor signaling in neurons and is largely used to relieve the pain of patients with terminal cancer in order to improve their quality of life. 11 However, emerging evidence has shown that morphine has extra-analgesic effects that appear to alter tumor progression by activating non-classical opioid receptor signaling. Therefore, understanding the contribution of morphine to cancer growth is an important question because existing reports conflict. 9,12 Morphine inhibits cisplatin-induced apoptosis and suppression of tumor growth in nasopharyngeal carcinoma xenografts. 6 Morphine also activates MAPK/ERK by phosphorylation via PTX-sensitive GPCRs and NO, which leads to the promotion of tumor growth in breast cancer. 8 On the other hand, morphine can inhibit migration of tumor-infiltrating leukocytes and suppress angiogenesis associated with tumor growth in mice. 9 In addition to these well-recognized effects, various studies have suggested that morphine elicits a variety of biological effects that appear to be independent of its analgesic properties and may affect cell survival or proliferation. Unfortunately, at present the role of morphine in the regulation of tumor cell growth is not yet clearly established. Morphine has been demonstrated to inhibit tumor growth in various animal models 10,13,14 and in human cancer cell lines. 6,15 On the contrary, morphine can protect astrocytes from apoptosis triggered by apoptosis-promoting agents 16 and promote the growth of tumor cells. 5,12 Until now, no studies have examined the effects of morphine in RCC. In this study, we aimed to investigate the role of morphine in RCC.
Cell lines and cell culture
The human RCC cell lines 786-O and RLC-310 and Chinese hamster ovary (CHO) cells were obtained from the American Type Culture Collection (ATCC). All cells were grown at 37 °C in a humidified atmosphere containing 5% CO2 in RPMI 1640 medium supplemented with 10% FBS, penicillin (100 IU/mL), and streptomycin (100 mg/ml). Unless otherwise specified, cells were seeded at a density of 2 × 10⁴/well in 24-well culture plates, or 2 × 10⁵/well in 6-well culture plates, and after an overnight incubation for adherence, were treated with 1 nM to 10 mM of morphine. After 48 h of incubation, cells were harvested for assay or continued for further experiments.
Cell proliferation assay
We seeded 786-O, RLC-310 and CHO cells overnight at 5000/well in a 24-well plate or 50,000/well in a six-well plate. For serum-replete conditions, cells were incubated with inducers/inhibitors for 48 h in complete culture medium without the growth factor. For serum-depleted conditions, cells were serum and growth factor starved overnight and then incubated for an additional 48 h without serum and growth factor but with morphine. Cells were enumerated using a Coulter counter and with a WST-8 assay kit (Dojindo Molecular Technologies, Gaithersburg, MD), which forms a colored formazan through the activity of the cellular dehydrogenases of viable cells. The optical density obtained was converted to cell number using calibration curves generated from known numbers of cells.
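The conversion from optical density to cell number mentioned above is, in effect, a linear calibration; a minimal sketch of how such a calibration curve might be applied is shown below, with entirely hypothetical OD readings since the actual calibration data are not reported.

```python
# Sketch: converting WST-8 calibration data (OD vs. known cell numbers) into an
# estimate of cell number for a new OD reading. All values below are hypothetical.
import numpy as np

known_cells = np.array([5e3, 1e4, 2e4, 4e4, 8e4])       # seeded cell numbers
known_od    = np.array([0.11, 0.21, 0.40, 0.78, 1.52])   # measured OD (hypothetical)

slope, intercept = np.polyfit(known_od, known_cells, 1)   # linear calibration fit

def od_to_cells(od: float) -> float:
    """Estimate cell number from optical density via the calibration line."""
    return slope * od + intercept

print(f"OD 0.55 -> approximately {od_to_cells(0.55):.0f} cells")
```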
MTT assay
MTT (Sigma, San Francisco, CA) assay was used to assess the growth of RCC cells. Cells (2.5-5 × 10³) were plated in 96-well flat-bottom plates in a final volume of 200 µl.
Once attached to the plate, cells were exposed to drugs for 24-48 h. Cell survival was assessed according to the manufacturer's instructions.
Transwell migration and invasion assays
For migration assays, cells (5 × 10⁴) pretreated with morphine (0, 1, 10 µM) for 4 days were resuspended in culture medium with the same concentration of morphine and placed onto the uncoated membrane in the upper chamber (24-well insert, 8 µm, Corning Costar, Corning, NY). DMEM supplemented with 10% FBS was used as an attractant in the lower chamber. After being incubated for 24 h, cells that had migrated through the membrane were fixed with 4% paraformaldehyde (Santa Cruz Biotechnology, Santa Cruz, CA) and stained with 1% crystal violet (Shanghai Sangon Company, Shanghai, China). The stained cell images were captured by microscope (Olympus, Osaka, Japan), and five random fields at 10× magnification were counted. Results represent the average of triplicate samples from three independent experiments. For invasion assays, cells (8 × 10⁴) were placed onto Matrigel-coated membranes (50 µl Matrigel; BD Biosciences, Franklin Lakes, NJ) in the upper chamber and incubated for 36 h. The following steps were similar to those of the migration assays.
Immunofluorescence staining
Immunofluorescence staining of cells was performed as previously described. Briefly, cells were fixed in 4% paraformaldehyde-PBS at room temperature for 20 minutes and permeabilized in 0.5% Triton X-100 in PBS for 10 minutes at 4 °C. Cells were then blocked with 3% BSA and incubated with polyclonal primary antibodies against survivin (Abcam, Cambridge, England) and β-catenin (Millipore, Billerica, MA) followed by a FITC-conjugated secondary antibody (Invitrogen), counterstained with DAPI (1 µg/ml) and visualized using a confocal microscope (Leica, Lahn, Germany).
Statistical analysis
Each experiment was performed in triplicate and repeated at least three times. All of the data are expressed as mean ± SD. A p value less than .05 was considered statistically significant (*P < .05, #P < .01). Student's t-test was used to compare relative mRNA expression levels and the numbers of proliferating, migrated and invaded cells.
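The comparisons described above rely on Student's t-test applied to triplicate measurements; a minimal sketch is shown below with hypothetical relative mRNA values, purely to illustrate the procedure rather than reproduce the study's data.

```python
# Sketch: Student's t-test on triplicate measurements (hypothetical values only).
import numpy as np
from scipy import stats

control  = np.array([1.00, 0.95, 1.05])   # relative Survivin mRNA, untreated
morphine = np.array([2.10, 1.85, 2.30])   # relative Survivin mRNA, morphine-treated (hypothetical)

t_stat, p_value = stats.ttest_ind(control, morphine)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")   # p < .05 would be flagged as significant (*)
```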
Morphine stimulates RCC cell proliferation
We studied the effect of morphine and specific opioid receptor agonists on human RCC cells. We first confirmed the effect of morphine, D50488H, DAMGO, and DPDPE on 786-O cells. Our results show that morphine, as well as the MOR, DOR, and KOR agonists (at 50 µM), induced significant 786-O proliferation under both serum-free and serum-replete conditions (Figure 1(A)). The degree of stimulation by most individual agonists was similar in both serum-replete and serum-depleted conditions. None of these opioids had any effect on wild-type Chinese hamster ovary cells. Therefore, morphine and the MOR, DOR, and KOR agonists induce RCC cell proliferation directly, and the MOR agonist also potentiates serum-induced proliferation.
We next examined the effect of morphine concentration (1 nM to 10 mM) on RCC cell proliferation. Morphine is used clinically in doses of 10-2450 mg/day, resulting in serum concentrations of only 2 nM to 3.5 µM. In wild-type Chinese hamster ovary cells, which do not express any opioid receptors, our data show little proliferative effect (Figure 1(B)). On the other hand, we found that a significant proliferative effect occurred in the range of 10 nM to 100 µM morphine (P < .01 versus control; Figure 1(C and D)). The optimal morphine concentration was thus between 10 nM and 100 µM (Figure 1(C and D)), and we therefore used 50 µM of morphine in later experiments.
Morphine promotes the migration/invasion ability of RCC cells in vitro
We next examined whether the addition of exogenous morphine was sufficient to promote the migration/invasion capability of RCC cells. After morphine was added, the migration and invasion capabilities were significantly increased, to approximately 3.5- and 4.0-fold in vitro (P < .01; Figure 2(A-C)), respectively. However, no significant difference was observed between the group without morphine and the control group. These results were also confirmed by MTT assay (Figure 2(D)). Taken together, our data show that morphine promotes metastatic behavior in RCC cells. The same results were also obtained for 786-O cells (data not shown).
Morphine increases the expression of survivin
Survivin is a member of the inhibitor of apoptosis (IAP) family. Survivin protein functions to inhibit caspase activation, thereby leading to a negative regulation of apoptosis. Therefore, it has been characterized to have a strong anti-apoptotic activity. Recently, increased expression of Survivin has been found to be associated with invasion and metastasis of various types of cancers, including RCC. 17 Other contributing effects of morphine include activation of the survival signal PKB/Akt, inhibition of apoptosis, and promotion of cell cycle progression by increasing cyclin D1. 8 Survivin is a bifunctional inhibitor of apoptosis protein that has been implicated in protection from apoptosis and regulation of mitosis. 18,19 Consistent with these effects, to explore the underlying mechanism by which morphine promotes the properties of RCC cells, we examined the expression of Survivin following morphine treatment.
Q-PCR showed that morphine increased the mRNA expression of Survivin in RCC cells (Figure 3(A)). Consistently, immunofluorescence staining showed that morphine dose-dependently increased the protein levels of Survivin in RLC-310 and 786-O cells; dense cytoplasmic and membrane staining for Survivin was observed in the tumor cells (Figure 3(B)). These data suggest that morphine may promote RCC cell properties by up-regulating Survivin.
Discussion
Cancer pain is one of the most common symptoms experienced by cancer patients at some point during the course of their illness. However, because morphine may contribute to the proliferation, invasion and metastasis of cancer cells, cancer pain should be controlled with the aim of enhancing quality of life without hastening or delaying death. With the success of analgesics in controlling cancer pain, their effects on non-neural cells, such as endothelial cells, tumor cells, and mast cells, become worth investigating. 8,20,21 However, the results obtained in the studies assessing cancer cell growth in vitro or in vivo are still controversial. Many reports showed that morphine was able to inhibit the growth of various human cancer cell lines, including breast cancer, gastric cancer, lung cancer and prostate cancer. 7,[22][23][24] On the contrary, other studies have shown that morphine increases tumor cell growth in vivo 5,8 and in vitro. 12 In this study, it has been demonstrated that morphine significantly contributes to the proliferation, invasion and metastasis of RCC cells through a Survivin-dependent mechanism. These contrasting results are probably associated with the different morphine doses used, the route of administration, and/or the plasma concentrations achieved at steady state. In fact, in vitro and in vivo studies demonstrated that tumor-enhancing effects of morphine occur after administration of low daily doses or a single dose of morphine, 25 while tumor suppression occurs after chronic high doses of morphine. 13,14 Survivin is a newly identified member of the inhibitor of apoptosis (IAP) gene family that has been implicated in suppression of apoptotic cell death and regulation of cell division. 26 Over-expression of Survivin protein could inhibit tumor cell apoptosis, promote the metastatic ability of tumor cells, and increase genomic instability, thereby boosting malignant phenotypes such as local invasion and distant metastasis. 17,27,28 Recent studies demonstrated that Survivin expression was associated with advanced clinico-pathological stages and grades of ccRCC, while ccRCC patients with low Survivin levels had a better survival rate compared to patients with high Survivin-expressing tumors. 17,29 In our research, Q-PCR showed that morphine increased the expression of Survivin in RLC-310 and 786-O RCC cells, and immunofluorescence staining showed similar results.
Currently, morphine and anti-cancer drugs are often given simultaneously to patients, especially those with cancer metastasis. Morphine activates MAPK/ERK by phosphorylation via PTX-sensitive GPCRs and NO, which leads to the promotion of tumor growth in breast cancer. 8 Morphine also induces phosphorylation of the epidermal growth factor receptor (EGFR) via opioid receptors, promotes cell proliferation and increases cell invasion. 30 In addition, morphine promotes breast cancer cell migration and invasion by increasing the expression of NET1. 10 Until now, little attention has been paid to RCC in the context of morphine use. Our study showed that morphine promoted an aggressive RCC cell phenotype and induced Survivin over-expression, which could contribute to cancer development.
It has been proposed that morphine also plays a role in tumor apoptosis. Apoptosis is a form of cell death in which a programmed sequence of events leads to the elimination of cells without releasing harmful substances into the surrounding area. On the other hand, Survivin is a member of the inhibitor of apoptosis (IAP) family. Survivin negatively regulates apoptosis by interfering with caspase-9 processing. 27 Survivin may be closely linked to the escape from apoptosis of RCC cells and the development of RCC. Our results show that morphine augments the growth and aggressive phenotype of renal cancer cells in vitro. We also found that Survivin was a target gene of morphine in RCC cell lines. The results suggest that morphine could play an important role in the tumorigenesis and progression of RCC.
|
2018-04-03T02:03:22.891Z
|
2016-11-21T00:00:00.000
|
{
"year": 2016,
"sha1": "6baed347524873f9187f12d4a7a91cf487e13568",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/0886022X.2016.1256322?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6baed347524873f9187f12d4a7a91cf487e13568",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
247303657
|
pes2o/s2orc
|
v3-fos-license
|
Imaging Features of Pediatric Left Ventricular Noncompaction Cardiomyopathy in Echocardiography and Cardiovascular Magnetic Resonance
Background: Left ventricular noncompaction (LVNC) is a distinct cardiomyopathy characterized by the presence of a two-layer myocardium with prominent trabeculation and deep intertrabecular recesses. The diagnosis of LVNC can be challenging because the diagnostic criteria are not uniform. The aim of our study was to evaluate echocardiographic and CMR findings in a group of children with isolated LVNC. Methods: From February 2008 to July 2021, pediatric patients under 18 years of age at the time of diagnosis with echocardiographic evidence of isolated LVNC were prospectively enrolled. The patients underwent echocardiography and contrast-enhanced cardiovascular magnetic resonance (CMR) with late gadolinium enhancement to assess myocardial noncompaction, ventricular size, and function. Results: A total of 34 patients, with a median age of 11.9 years, were recruited. The patients were followed prospectively for a median of 5.1 years. Of the 31 patients who met Jenni’s criteria in echocardiography, CMR was performed on 27 (79%). Further comprehensive analysis was performed in the group of 25 patients who met the echocardiographic and CMR criteria for LVNC. In echocardiography, the median NC/C ratio in systole was 2.60 and in diastole 3.40. In 25 out of 27 children (93%), LVNC was confirmed by CMR, according to Petersen’s criteria, with a median NC/C ratio of 3.27. Conclusions: (1) Echocardiography precisely identifies patients with LVNC. (2) Echocardiography is a good method for monitoring LV systolic function, but CMR is indicated for the precise assessment of LV remodeling and RV size and function, as well as for the detection of myocardial fibrosis.
Introduction
Left ventricular noncompaction (LVNC) is described as a distinct cardiomyopathy characterized by a two-layer myocardium with prominent trabeculation, deep intertrabecular recesses, and a thin compacted myocardial layer. LVNC was classified as a primary cardiomyopathy by the American Heart Association in 2006 [1] but remains unclassified by the European Society of Cardiology [2]. It typically involves the left ventricle, although involvement of the right ventricle (RV) has been reported [3]. LVNC can occur as an isolated or non-isolated phenotype. Non-isolated LVNC may be accompanied by congenital heart diseases or features of other cardiomyopathy or neuromuscular diseases. LVNC is a genetically determined myocardial disease, the third most common cardiomyopathy in the pediatric population (after dilated and hypertrophic cardiomyopathies). Molecular studies have confirmed the genetic etiology in approximately 40% of LVNC patients [4,5]. The clinical presentation is very heterogeneous, ranging from no symptoms to major events, such as heart failure, arrhythmias, thromboembolism, and sudden cardiac death [6][7][8].
The diagnosis of LVNC can be challenging due to the non-uniform diagnostic criteria. Echocardiography is the initial and basic tool for diagnosing this cardiomyopathy according to the morphological criteria [9,10]. So far, no separate morphological criteria for LVNC in children have been proposed. The most commonly used echocardiographic criteria are those provided by Jenni et al. [11].
In recent years, cardiovascular magnetic resonance (CMR) imaging has increasingly been used in the assessment of cardiomyopathies. It is currently considered the noninvasive gold standard for the evaluation of biventricular volumes, myocardial mass, regional and global systolic function, and tissue characteristics [12]. CMR may provide clinically relevant information and it allows for LVNC diagnosis, though the proposed diagnostic criteria vary. Because these criteria are based on small samples of patients and various assumptions, and because there is no accepted standard for children, their reliability remains undetermined for the pediatric population [13]. For CMR, Petersen's criteria are most frequently used in clinical practice [14]. The emergence of CMR has enabled high-resolution imaging of cardiac structures, which provides detailed functional and morphologic information and allows for the presence and extent of fibrosis to be assessed [15]. Literature reports indicate that CMR is superior to echocardiography in assessing the extent of myocardial noncompaction, especially in areas which are not accessible by echocardiography, such as the left ventricular apex and the lateral wall [16].
The aim of our study was to evaluate echocardiographic and CMR findings in a group of children with isolated LVNC.
Study Patients
From February 2008 to July 2021, pediatric patients with echocardiographic features of LVNC who were hospitalized in the Department of Cardiology of the Children's Memorial Health Institute were prospectively enrolled. The main reasons for referring children to our reference cardiology center were suspicion of LVNC on echocardiography performed in district centers and clinical symptoms such as the following: heart failure, sinus bradycardia, cardiac arrhythmias, syncope, heart murmur, and family history of cardiomyopathy. The criteria for inclusion in the study were an age of less than 18 years at the time of diagnosis and echocardiographic evidence of isolated LVNC, defined as (1) the presence of a two-layer structure with a compacted and noncompacted endocardial layer of trabecular meshwork with deep endomyocardial spaces, (2) a maximal end-systolic ratio between the noncompacted (NC) and compacted (C) layers of 2.0 or greater, and (3) color Doppler evidence of deep perfused intertrabecular recesses. The exclusion criteria were the presence of congenital heart disease, other forms of cardiomyopathy, or neuromuscular disorders. The institutional ethics committee approved this study. Informed consent was obtained from all individual participants included in the study.
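As a compact restatement of the inclusion rule above, Jenni's end-systolic NC/C threshold together with the two qualitative findings can be encoded as a simple check; the data structure and field names below are illustrative only and are not taken from the study database.

```python
# Minimal sketch of the echocardiographic inclusion rule (Jenni's criteria):
# two-layer myocardium, perfused intertrabecular recesses on color Doppler,
# and an end-systolic NC/C ratio of 2.0 or greater. Field names are hypothetical.
from dataclasses import dataclass

@dataclass
class EchoFindings:
    nc_thickness_systole_mm: float       # noncompacted layer, end-systole
    c_thickness_systole_mm: float        # compacted layer, end-systole
    two_layer_structure: bool
    perfused_recesses_on_doppler: bool

def meets_jenni_criteria(e: EchoFindings) -> bool:
    ratio = e.nc_thickness_systole_mm / e.c_thickness_systole_mm
    return e.two_layer_structure and e.perfused_recesses_on_doppler and ratio >= 2.0

print(meets_jenni_criteria(EchoFindings(10.4, 4.0, True, True)))  # True (NC/C = 2.6)
```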
Data Collection
Patients' demographics, family history of cardiomyopathies and sudden cardiac death (SCD), and results from echocardiography, 12-lead resting electrocardiographic, 24-hour Holter electrocardiographic, and CMR were collected. NYHA/Ross functional class and clinical symptoms, such as chest pain, palpitations, syncope, pre-syncope, and thromboembolic events, were evaluated in all children. All children referred for CMR presented with echocardiographic features of LVNC and varying clinical symptoms, such as heart failure, cardiac arrhythmias, atrioventricular conduction disturbances, sinus bradycardia, syncope, chest pain, and a family history of cardiomyopathy and sudden cardiac deaths.
Echocardiographic Imaging and Analysis
Echocardiographic imaging was performed using a Philips Epiq7 (Philips Medical Systems, Bothell, WA, USA). Two-dimensional, Doppler, and M-mode echocardiography were performed at rest using standard methods. Echocardiographic images, including parasternal long-and short-axis and apical two-, three-, and four-chamber views were obtained and reviewed by cardiologists certified in echocardiography.
Echocardiographic measurements were reviewed based on Jenni's criteria [11] as follows: a ratio of the noncompacted (NC) to compacted (C) myocardial layer of 2.0 or greater, measured in the parasternal short-axis view in the end-systolic phase below the papillary muscle. The NC/C ratio was additionally calculated in the parasternal short-axis projection in the end-diastolic phase. Color Doppler imaging was performed in all children with visualization of the recess filling between the trabeculae with blood flowing in from the left ventricle (LV). LV dimension and systolic function were evaluated in detail. Echocardiographic measurements included LV end-diastolic (LVED) and end-systolic (LVES) volume [17] and area [18] in the apical four-chamber view, as well as LV diastolic and systolic diameters in the parasternal long-axis projection [19]. These parameters were evaluated for each patient and indexed to the patient's BSA, according to Du Bois' formula. Moreover, z-scores were calculated using the formula for z-scores reported in the literature [20]. LV systolic function was assessed by calculating the shortening fraction, the ejection fraction (LV EF) according to Simpson's method, the value of mitral annulus peak systolic excursion (MAPSE) in mm, and the z-score [21]. Left atrial dimension was measured at end-systole as the anteroposterior linear diameter from the parasternal long-axis view and was indexed to the patient's BSA. The z-score for LAd was calculated with the formula for z-scores [17]. Left atrial enlargement was defined as a z-score greater than 2. It should be pointed out that the echocardiographic study also assessed the RV dimension and systolic function. RV diastolic diameter was evaluated in the parasternal long-axis view (mm, z-score) [17]. RV systolic function was assessed by calculating tricuspid annular plane systolic excursion values in mm and the z-score [22] and by measuring the fractional area change as a percentage of the difference between the RV end-diastolic and end-systolic areas evaluated in the apical four-chamber view.
CMR Imaging and Analysis
CMR imaging was performed using a 1.5-T scanner (Magnetom AvantoFit, Siemens, Erlangen, Germany), with a dedicated cardiac phased-array coil and electrocardiographic gating, as previously described [23]. Steady-state free precession (SSFP) cine images of the heart were acquired in the short-axis and four-, three-, and two-chamber planes with a minimum of 25 phases per cardiac cycle. Late gadolinium-enhanced (LGE) images were acquired in the short-axis and long-axis planes 10-15 min after intravenous administration of 0.1 mmol/kg of gadobutrol (Gadovist, Bayer, Berlin, Germany).
The studies were analyzed using CVi42 software (Circle Cardiovascular Imaging, Calgary, AB, Canada) on a dedicated diagnostic workstation. Cine images were used to determine the left and right ventricular volumes, ejection fraction, and left ventricular mass. The end-diastolic and end-systolic phases were identified based on long-axis and midventricular short-axis scans. The LV endocardial and epicardial borders and the RV endocardial border were automatically contoured in those phases and then manually corrected to determine the end-diastolic (EDV) and end-systolic (ESV) volumes of both ventricles. Based on the results, the LV and RV stroke volumes (SV = EDV − ESV) and ejection fraction were calculated. Compacted LV mass, including the interventricular septum and the LV papillary muscles, was calculated based on segmentation in the end-diastolic phase. LV global mass was determined by manually drawing the LV endocardial border to include both papillary muscles and LV trabeculation. The LV noncompacted mass was then established by subtracting the compacted LV mass from the global LV mass. As per the Petersen criteria, the thickness of the compacted and the noncompacted myocardial layers perpendicular to the compacted myocardium was measured in end-diastole in the three long-axis views (excluding the 17th segment according to the American Heart Association model) and the highest NC/C ratio value was recorded [14]. Additionally, the NC/C ratio was analogously measured in diastole in the short-axis view in order to establish the number of segments with values > 2.3.
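As a minimal illustration of how the two cut-offs used in this study differ, the sketch below encodes the echocardiographic (Jenni, end-systolic NC/C ≥ 2.0) and CMR (Petersen, end-diastolic NC/C > 2.3) thresholds described above; the function names and the example values are our own and are not part of any published tool.

```python
def nc_c_ratio(noncompacted_mm: float, compacted_mm: float) -> float:
    """Ratio of noncompacted to compacted myocardial layer thickness."""
    if compacted_mm <= 0:
        raise ValueError("Compacted layer thickness must be positive")
    return noncompacted_mm / compacted_mm


def meets_jenni_criterion(nc_systole_mm: float, c_systole_mm: float) -> bool:
    """Echocardiography: end-systolic NC/C ratio of 2.0 or greater."""
    return nc_c_ratio(nc_systole_mm, c_systole_mm) >= 2.0


def meets_petersen_criterion(nc_diastole_mm: float, c_diastole_mm: float) -> bool:
    """CMR: end-diastolic NC/C ratio greater than 2.3."""
    return nc_c_ratio(nc_diastole_mm, c_diastole_mm) > 2.3


# Hypothetical patient: NC 14 mm / C 5 mm in systole (echo), NC 16 mm / C 6 mm in diastole (CMR)
print(meets_jenni_criterion(14, 5), meets_petersen_criterion(16, 6))  # True True
```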
LV and RV compacted mass, EDV, ESV, SV, and LV trabeculation mass were indexed to the patient's BSA, determined using Du Bois' formula (BSA [m²] = 0.007184 × weight [kg]^0.425 × height [cm]^0.725). To identify morphological abnormalities, LV mass, LV EDV, and RV EDV were compared against recently published, multicenter, CMR normative values for children and adolescents, which were determined using the same methods [24]. Z-score values of less than −2.0 and greater than 2.0 were considered pathological.
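To make the indexing and z-score step concrete, the following sketch computes BSA with the Du Bois formula and flags z-scores outside the ±2.0 range used here as pathological. The reference mean and SD are placeholder values only; the actual pediatric normative data come from the cited multicenter study [24].

```python
def du_bois_bsa(weight_kg: float, height_cm: float) -> float:
    """Body surface area (m^2) by the Du Bois formula."""
    return 0.007184 * weight_kg ** 0.425 * height_cm ** 0.725


def z_score(value: float, ref_mean: float, ref_sd: float) -> float:
    """Standard score of a measurement against a reference mean and SD."""
    return (value - ref_mean) / ref_sd


# Illustrative example: index LV end-diastolic volume to BSA and check against +/- 2.0
bsa = du_bois_bsa(weight_kg=40.0, height_cm=150.0)
lv_edv_indexed = 130.0 / bsa                                 # ml/m^2, illustrative measurement
z = z_score(lv_edv_indexed, ref_mean=75.0, ref_sd=10.0)      # placeholder normative values
print(f"BSA = {bsa:.2f} m^2, LV EDV/BSA = {lv_edv_indexed:.1f} ml/m^2, z = {z:.1f}")
print("pathological" if abs(z) > 2.0 else "within normal limits")
```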
The studies were visually assessed for the presence of myocardial LGE, which had to be present in two different spatial orientations. Additionally, the extent of LGE was quantitatively assessed using a dedicated module within CVi42, where pathological enhancement was defined as a myocardium with a signal intensity of more than 6 SD above the mean in a remote reference region of effectively nulled myocardium.
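The 6-SD thresholding rule can be expressed as a simple operation on pixel intensities. The sketch below assumes the remote, effectively nulled reference region has already been segmented, which in practice is handled inside the CVi42 module; the synthetic arrays are purely illustrative.

```python
import numpy as np


def lge_mask(myocardium: np.ndarray, remote_reference: np.ndarray, n_sd: float = 6.0) -> np.ndarray:
    """Boolean mask of pixels whose signal exceeds mean + n_sd * SD of the remote myocardium."""
    threshold = remote_reference.mean() + n_sd * remote_reference.std()
    return myocardium > threshold


def lge_extent_percent(myocardium: np.ndarray, remote_reference: np.ndarray) -> float:
    """Extent of late gadolinium enhancement as a percentage of myocardial pixels."""
    mask = lge_mask(myocardium, remote_reference)
    return 100.0 * mask.sum() / myocardium.size


# Synthetic data: a mostly nulled myocardium with a small enhanced region
rng = np.random.default_rng(0)
myo = rng.normal(100, 10, size=1000)
myo[:50] += 120                                  # simulated enhancement
remote = rng.normal(100, 10, size=200)
print(f"LGE extent: {lge_extent_percent(myo, remote):.1f}% of myocardium")
```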
Statistical Analysis
The distribution of all continuous variables was assessed using the Shapiro-Wilk test. Normally distributed variables are presented as mean ± SD, whereas non-normally distributed parameters are given as median (interquartile range). Echocardiographic diagnostic performance was assessed in relation to CMR using standard accuracy criteria for binary diagnostic tests (i.e., sensitivity, specificity, and accuracy) with Clopper-Pearson confidence intervals and positive and negative predictive values with confidence intervals, calculated according to Mercado et al. [25]. Pearson's correlation coefficient, and the Bland-Altman plot, were used to compare LV EDV between the imaging methods. Participants with myocardial LGE detected in CMR were compared with the children without myocardial LGE using an unpaired t-test or Mann-Whitney test, depending on the normality of the distribution. Categorical variables between groups were compared using the chi-squared test. p-values of less than 0.05 were considered statistically significant. Statistical analysis was carried out using MedCalc Statistical Software 20.014 (MedCalc Software Ltd., Ostend, Belgium).
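The two key computations in this section, exact binomial confidence intervals and Bland-Altman limits of agreement, can be reproduced in a few lines. The sketch below is a generic illustration using scipy and numpy with made-up numbers, not the study's actual data or the MedCalc implementation.

```python
import numpy as np
from scipy.stats import beta


def clopper_pearson(k: int, n: int, alpha: float = 0.05):
    """Exact (Clopper-Pearson) confidence interval for a binomial proportion k/n."""
    lower = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    upper = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lower, upper


def bland_altman_limits(method_a: np.ndarray, method_b: np.ndarray):
    """Mean difference (bias) and 95% limits of agreement between two measurement methods."""
    diff = method_a - method_b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd


# Illustrative example: sensitivity with its exact CI, and agreement between echo and CMR EF
tp, fn = 20, 5
sens = tp / (tp + fn)
print(f"sensitivity = {sens:.2f}, 95% CI = {clopper_pearson(tp, tp + fn)}")

echo_ef = np.array([55.0, 60.0, 48.0, 62.0, 50.0])
cmr_ef = np.array([57.0, 58.0, 52.0, 60.0, 49.0])
print("bias and limits of agreement:", bland_altman_limits(echo_ef, cmr_ef))
```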
Clinical Characteristics
A total of 34 patients with an echocardiographic diagnosis of LVNC were recruited between February 2008 and July 2021. The median age was 11.9 years (6.6-14.7) and 50% were male. The patients were followed prospectively for a median of 5.1 years (2.2-12.2).
In the study group, 3% of patients were under one year of age; 32% were between 1 and 10 years of age; and 65% were over 10 years of age. Family history revealed cardiomyopathy in first-degree relatives in 11 children (32%) (LVNC in 20% of patients; both LVNC and DCM in 6%; LVNC and HCM in 3%; and HCM in 3%). Sudden cardiac deaths occurred in the families of three children (9%). The NYHA/Ross functional class in the majority of patients (74%) was evaluated as grade II; 3% had grade IV, while 24% had grade I. In 24-hour electrocardiographic Holter monitoring, the most prominent features were premature ventricular and atrial contractions, found in 26% and 15% of patients, respectively. Other findings were observed, including sinus bradycardia in 21% of children, paroxysmal third-degree atrioventricular block in 12%, ventricular tachycardia in 9%, and Wolff-Parkinson-White syndrome in 6% of patients.
Echocardiographic Results
In 31 of the 34 patients (91%), the median NC/C ratio was 2.60 (IQR, 2.22, 3.40). In the remaining three patients (9%) referred from a regional center with a diagnosis of LVNC, the echocardiography performed in our cardiology center did not confirm the diagnosis, as the NC/C ratios ranged from 1.46 to 1.9. These patients were excluded from further analysis and were not referred for CMR examination.
CMR was performed in 27 of the 31 children (79%) who met Jenni's criteria in echocardiography. In four (13%) patients, CMR was not performed due to their severe clinical condition and the implantation of an LV assist device for mechanical circulatory support (n = 1), an implanted pacemaker (n = 2), and hemodynamic instability and low body weight (n = 1). Among the 27 children who underwent CMR, the diagnosis of LVNC was confirmed in 25 (93%), according to Petersen's criteria. In two patients (7%), the CMR investigations did not confirm echocardiographic diagnosis of LVNC, as the NC/C ratio was less than 2.3.
A comprehensive and detailed analysis was performed on a group of 25 patients who met the echocardiographic and CMR criteria for LVNC diagnosis. The baseline characteristics of the study group are presented in Table 1. In the group of 25 patients, the median NC/C ratio in systole was 2.60 (IQR, 2.22, 3.30) and in diastole 3.40 (IQR, 2.77, 4.80). In echocardiography, left ventricular diameter was increased in 10 patients (40%) (LV diastolic diameter, 42-59.5 mm; z-score, +2.5 to +4.6). Of these, four patients (16%) had LV systolic function impairment (LV EF, 50%-55%; MAPSE, 9.6-16.5 mm; z-score, −2.8 to +1.6); in the remaining six children, LV EF was normal. No significant valvular abnormalities were noted in the study group. In two children, a reduction in LV EF was observed without an increase in LV diastolic diameter.
In one patient (4%), apart from LV enlargement and a reduction in LV EF, an impairment of RV systolic function was found (fractional area change, 30%; tricuspid annular plane systolic excursion, 18.5 mm; z-score, −1.5) with RV normal size. On the other hand, in one patient (4%), RV enlargement (36 mm; z-score, +2.8) with normal systolic function was observed. Left atrial enlargement was found in two patients (8%) (LAd, 35 mm; z-score, from +3.4 to +4). Table 2 presents the results of echocardiographic and CMR imaging from 25 patients with LVNC.
CMR Results
Twenty-seven participants meeting the echocardiographic criteria of LVNC underwent CMR with LGE assessment. In 25 out of 27 children (93%), LVNC was confirmed by CMR, according to Petersen's criteria, with a median NC/C ratio of 3.27 (IQR, 2.56, 3.76) and on average 5.1 ± 1.5 noncompacted segments. LV enlargement was diagnosed in 5 out of 25 children (20%) with LVNC, LV function impairment was diagnosed in 6 of the 25 (24%) patients, RV enlargement in four (16%), RV function impairment in seven (28%), and left atrial enlargement in five (20%) (Table 2). RVEF was strongly correlated with LVEF (r = 0.76; p < 0.001) (Figure 1), though it was not associated with LV or RV volumes. In 6 out of the 25 patients (24%), midwall LGE was observed, involving on average 6.6% ± 2.4% of the LV myocardial mass. In all of these patients, LGE was observed in at least one basal segment, and anterior segments (according to the AHA model) were most commonly involved (four of the six patients with LGE, 67%). LGE was noted in both compacted and noncompacted segments. Compared to the children with LVNC without LGE, patients with LGE had a larger LV, characterized by a higher LV EDV/BSA (101 ± 34 vs. 78 ± 13 mL/m², p = 0.02) (Figure 2).
Comparison of Echocardiographic and CMR Results
In the CMR investigations, NC/C ratio significantly correlated with echocardiographic NC/C ratio measured in systole (r = 0.41; p = 0.044), but not in diastole.
There were no significant correlations between NC/C and LV volumes or function. When referenced to CMR, which is considered as the gold standard for ventricular size and function assessment, echocardiographic examination had high accuracy for detecting LV function impairment (92% (74%-99%)), with high specificity (95% (74%-100%)), and moderate sensitivity in children with LVNC. However, the LV EF values measured using the two imaging methods were not significantly correlated (r = 0.36; p = 0.08). The mean difference between the echocardiographic and CMR results was −2.4% ± 7.8% and the lower and upper limits of agreement (LoA) were −18.1% and 13.3%, respectively.
Echocardiography had moderate sensitivity for diagnosing LV enlargement, though its specificity and overall accuracy in this respect were relatively low.
Discussion
The main findings of this prospective observational study on LVNC in children are as follows:
1. Almost one fourth of pediatric patients with LVNC present with features of myocardial fibrosis;
2. Right ventricular abnormalities, which are often present in children with LVNC, can only be reliably assessed with CMR.
Among the cardiac imaging techniques used in patients with LVNC, echocardiography and CMR are the primary diagnostic methods. The advantages of echocardiography over CMR are that it is more available, the costs of examination are lower, and there is no need for anesthesia in younger children. Consequently, echocardiography is the first choice in the diagnosis of LVNC [26]. Echocardiography, however, has its limitations. First of all, there is a wide range of echocardiographic diagnostic criteria in the literature, based on studies with small samples using different research methodologies [9,17]. The cardiac cycle phase (end-systole or end-diastole) in which the measurements of the noncompacted and compacted layers are made is also important, as the thickness of the myocardium is maximal in systole and minimal in diastole, which directly affects the NC/C ratio. The next point of discussion is the echocardiographic projection in which the measurements for the NC/C ratio should be made. Most of the published diagnostic criteria suggest that these measurements should be performed in the LV parasternal short-axis view; however, the apical four- and two-chamber views are most commonly used in everyday clinical practice. Finally, there is no uniform consensus on the threshold value of the NC/C ratio to use as a diagnostic criterion for LVNC [17]. The most frequently used criteria are those presented by Jenni et al., which are dedicated to adult patients; the suggested NC/C ratio is 2:1 or higher [11]. These echocardiographic criteria were used in our study, as in other published studies on children with LVNC [27], although some authors have proposed an NC/C ratio of greater than 1.4 as a diagnostic criterion for LVNC in the pediatric population [28].
Improvements in cardiac imaging modalities, such as echocardiography and CMR imaging, have increased the identification of LVNC [29]. CMR is superior to echocardiography methodologies with regard to the number of segments that can be analyzed and the evaluation of the extent of two-layered myocardia. Moreover, CMR imaging has the potential to detect segmental non-compaction in any area of the LV wall and can provide supplemental morphological information beyond that obtained from conventional echocardiography [30]. Only a few previous studies have compared NC/C ratios assessed by CMR versus echocardiography [30,31]. The advantage of our study is that, for the first time, it compares data obtained with CMR and standard echocardiography in a larger group of pediatric patients. The results of our study demonstrate that in as many as 93% of children with LVNC features on echocardiography, CMR confirmed the diagnosis of the disease, which indicates that echocardiography is a precise diagnostic method for LVNC assessment in children.
Some authors have emphasized the role of better visualization of the noncompacted layer of the myocardium and trabeculae in end-diastole in echocardiography [32], while others have shown that end-systolic measurements of LVNC in CMR have stronger associations with cardiac events [33]. In our pediatric study, echocardiography images obtained at end-systole and end-diastole were compared with those obtained by CMR at end-diastole to assess the NC/C ratio. Only systolic, not diastolic, NC/C ratios measured in echocardiography correlated significantly with NC/C ratio measurements in CMR, which differs from the results of a study on adult patients that reported good agreement between echocardiography at end-diastole and CMR measurements [30]. The results of our study suggest a strong advantage of evaluating the noncompacted myocardium during systole in echocardiographic studies.
Other authors [34,35] have assessed the correlation between NC/C ratio and LV EF. The results of these studies revealed that patients with increasing severity of noncompaction in echocardiography had significantly lower LV EF and LV EF correlated with parameters of specific diagnostic criteria for LVNC in CMR, such as an NC/C ratio greater than 2.3 and a more than 20% proportion of the noncompacted myocardium being LV mass. We did not find such a correlation in our study group. Nevertheless, 24% of the participants with LVNC confirmed by CMR presented with LV systolic function impairment, which is an important finding, as decreased LV EF is a significant risk factor [36]. Moreover, we observed a high accuracy of echocardiography in diagnosing LV systolic function impairment when referenced to the CMR, indicating its utility in patient follow-up. LV enlargement was observed in CMR in 5 of the 25 participants with LVNC (20%), indicating a significant incidence of LV remodeling in children with LVNC. Admittedly, echocardiography had moderate sensitivity for diagnosing LV enlargement (80%), though its specificity and overall accuracy in this aspect was relatively low.
Contrast-enhanced CMR with LGE imaging may detect myocardial fibrosis [37]. It is relatively frequently observed in patients with LVNC, though its presence or absence is not a reliable diagnostic marker of the disease [38]. In a study by Grothoff et al. [39], none of the LVNC patients demonstrated LGE, while other authors have described the presence of LGE in isolated LVNC, which is associated with a poorer LV systolic function [40,41]. In our study, we found LGE in pediatric patients with isolated LVNC and confirmed a relationship between LGE and features of LV remodeling. As in the case of other authors [42], in our study group the presence of LGE was associated with higher values of LVEDV. In contrast, LV EF did not differ between LVNC patients with LGE and other children with LVNC, similar to a study by Andreini et al. [32]. As the presence of LGE in adult LVNC patients was shown to be a significant risk factor [36] of cardiovascular events, our findings of significant incidence of myocardial fibrosis in children with LVNC and associated LV remodeling indicate the clinical importance of CMR imaging in the routine evaluation of those patients [32], as in children with hypertrophic cardiomyopathy [43].
The results of studies published in the literature [28] indicate a significant share of RV systolic dysfunction in patients with isolated LVNC. There are reports [44] that have emphasized the relationship between RV systolic dysfunction and significantly lower LV EF and relevant LV enlargement. According to the authors [28], patients with impaired RV systolic function have a greater LV volume, lower LV systolic function, and more pronounced myocardial fibrosis, which may indicate that RV dysfunction is a marker of a more advanced stage of LVNC. The results of our study showed a significant correlation between left and right ventricular function, but we did not prove a relationship between RV EF and the size of the right and left ventricles. The significant incidence of RV enlargement and RV systolic function impairment observed in children with LVNC further highlights the clinical significance of CMR imaging in this population, since the possibilities of echocardiography RV evaluation are limited.
Based on our experience, we can summarize that echocardiography should be used as the first diagnostic test in LVNC, while CMR is strongly recommended as a complementary examination to accurately assess the extent of myocardial noncompaction and to reliably analyze the size and systolic function of the ventricles.
The results of studies on LVNC in children published so far require further research due to the many unanswered questions regarding diagnostic methods, diagnosis, and clinical management.
1. In the morphological assessment of myocardial noncompaction among the study group of pediatric patients, there was very good agreement between echocardiography and CMR imaging;
2. A significant correlation was demonstrated between the NC/C ratio assessed from end-systolic measurements in echocardiography and from end-diastolic measurements in the CMR examination;
3. Echocardiography is a good method for monitoring LV systolic function, but CMR is indicated for precise assessment of left ventricular morphology and enlargement;
4. CMR significantly exceeds echocardiography in the assessment of the right ventricle in children with LVNC and should be included in the basic diagnostics of these patients;
5. CMR imaging allows for the detection of areas of LGE, which are indicative of myocardial fibrosis;
6. LGE incidence is relatively high in pediatric patients with LVNC and is associated with LV remodeling. As it is also a risk factor of future cardiovascular events, contrast-enhanced CMR should be a part of a standard diagnostic work-up of pediatric patients with LVNC.
Informed Consent Statement:
Informed consent was obtained from all subjects involved in the study and their parents.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
Adult Mitochondrial DNA Depletion Syndrome with Mild Manifestations
Mitochondrial DNA depletion syndrome (MDS) is usually a severe disorder of infancy or childhood, due to a reduced copy number of mtDNA molecules. MDS with only mild, nonspecific clinical manifestations and onset in adulthood has not been reported. A 47-year-old Caucasian female with short stature and a history of migraine, endometriosis, Crohn's disease, C-cell carcinoma of the thyroid gland, and a family history positive for mitochondrial disorder (2 sisters, aunt, niece), developed day-time sleepiness, exercise intolerance, and myalgias in the lower-limb muscles since age 46y. She slept 9-10 hours during the night and 2 hours after lunch daily. Clinical exam revealed sore neck muscles, bilateral ptosis, and reduced Achilles tendon reflexes exclusively. Blood tests revealed hyperlipidemia exclusively. Nerve conduction studies, needle electromyography, and cerebral and spinal magnetic resonance imaging were noninformative. Muscle biopsy revealed detached lobulated fibers with subsarcolemmal accentuation of the NADH and SDH staining. Real-time polymerase chain reaction revealed depletion of the mtDNA down to 9% of normal. MDS may be associated with a mild phenotype in adults and may not significantly progress during the first year after onset. In an adult with hypersomnia, severe tiredness, exercise intolerance, and a family history positive for mitochondrial disorder, an MDS should be considered.
Mitochondrial DNA depletion syndrome (MDS) is usually a severe disorder of infancy or childhood, due to a reduced copy number of mtDNA molecules. MDS with only mild, nonspecific clinical manifestations and onset in adulthood has not been reported. A 47-year-old Caucasian female with short stature and a history of migraine, endometriosis, Crohn’s disease, C-cell carcinoma of the thyroid gland, and a family history positive for mitochondrial disorder (2 sisters, aunt, niece), developed day-time sleepiness, exercise intolerance, and myalgias in the lower-limb muscles since age 46y. She slept 9-10 hours during the night and 2 hours after lunch daily. Clinical exam revealed sore neck muscles, bilateral ptosis, and reduced Achilles tendon reflexes exclusively. Blood tests revealed hyperlipidemia exclusively. Nerve conduction studies, needle electromyography, and cerebral and spinal magnetic resonance imaging were noninformative. Muscle biopsy revealed detached lobulated fibers with subsarcolemmal accentuation of the NADH and SDH staining. Realtime polymerase chain reaction revealed depletion of the mtDNA down to 9% of normal. MDS may be associated with a mild phenotype in adults and may not significantly progress during the first year after onset. In an adult with hypersomnia, severe tiredness, exercise intolerance, and a family history positive for mitochondrial disorder, a MDS should be considered.
Introduction
Mitochondrial DNA depletion syndrome (MDS) is usually a severe disorder of infancy or childhood due to a reduced copy number of mtDNA molecules within a mitochondrion. 1,2 Depletion of mtDNA results from a replication defect, which may be caused by mutations in at least nine different nDNA-located genes. 3 MDS with only mild clinical manifestations and onset in adulthood has not been reported.
Case Report
The patient is a 47-year-old Caucasian female, height 158 cm, weight 60 kg, who developed day-time sleepiness, exercise intolerance, and myalgias in the lower-limb muscles since age 46y. She slept 9-10 hours during the night and 2 hours after lunch daily. Her individual history was noteworthy for a number of previous disorders. In 3/96 an ovarian endometriotic cyst, a follicular ovarian cyst, and a hydatid cyst of the left Fallopian tube were resected and an adhesiolysis was carried out. In 10/96 she experienced a gastro-intestinal haemorrhage time-linked to menstruation. Endometriosis was suspected. Since 1997 she was diagnosed with migraine with up to 3-4 non-triggered attacks per month. In 2/99 she experienced a second gastro-intestinal haemorrhage, this time requiring 6 blood transfusions. Colonoscopy did not detect any source of bleeding. In 5/02 a periproctitic abscess developed and was adequately treated. In 3/03 erythema nodosum of the lower legs occurred, preceded by diarrhoea. Upon histological examination of the colonic mucosa, Crohn's disease was diagnosed and a therapy with steroids (initially aprednisolone, since 10/03 budesonide) and mesalazine was initiated, resulting in remission of the enteritis. In 10/06 a chronic anal fistula was diagnosed, requiring surgical intervention. In 12/08 budesonide was discontinued. Since 2008 she was taking the pill for endometriosis. Cerebral MRI in 5/09, carried out for work-up of headache, was normal. MRI of the cervical and lumbar spine, carried out for dorsal pain, revealed only slight degenerative abnormalities.
The family history was noteworthy for thyroid cancer (grandmother from the mother's side, sister), gastric cancer (second sister), clinically multisystem mitochondrial disorder [2 sisters, niece, aunt (sister of the mother), and mother (dilated cardiomyopathy, myopathy, Hashimoto thyroiditis, recurrent syncopes, restless-legs syndrome, tinnitus, endometriosis, easy fatigability)], and sudden cardiac death (mother). Upon a family screening in 4/10 for thyroid carcinoma she, her sister, and her niece were found positive for the rearranged during transfection (RET) oncogene mutation, which is associated with an increased risk of developing thyroid cancer. 4 Following these results the niece underwent prophylactic thyroidectomy. In 7/10 gastrointestinal bleeding relapsed and budesonide was restarted. In 8/10 budesonide was replaced by infliximab (400 mg every 8 weeks) and mesalazine was discontinued. In 10/10 recurrence of the anal fistula required surgical intervention again. In 3/11 a non-specific impaired aggregation of thrombocytes was diagnosed. Clinical exam in 4/11 revealed short stature, sore neck muscles, an exaggerated masseter reflex, and reduced Achilles tendon reflexes exclusively. Hyperlipidemia was noted but serum lactate, myoglobin, and thyroid function tests were normal. Nerve conduction studies and needle electromyography were noninformative. Echocardiography and cardiac MRI revealed a bicuspid aortic valve exclusively. In 7/11 a medullary thyroid carcinoma was diagnosed and treated exclusively by resection. For post-operative hypothyroidism she took L-thyroxine. In 3/12 she reported an increase in intensity of migraine attacks, increasing fatigability, and myalgias. She was on a therapy with infliximab (every 8 weeks), L-thyroxine, and the pill for endometriosis.
During thyroid resection in 3/11, a muscle biopsy from the sternocleidomastoid muscle was additionally taken, revealing mild to moderate variation in fiber size (Figure 1A), predominance of type 2 fibers, single fibers with eosinophilic cytoplasmic bodies (Figure 1B), and single lobulated fibers exhibiting moderate subsarcolemmal accentuation of SDH (Figure 1C) and NADH (Figure 1D) activity. Biochemical investigations of the muscle homogenate revealed reduced activity of all respiratory chain complexes in relation to non-collagen protein [NADH-CoQ-oxidoreductase: 6.1 U/g NCP (n, 15.8-42.81 U/g NCP), succinate/cytochrome-C-oxidoreductase: 5.41 U/g NCP (n, 6.0-25.0 U/g NCP), cytochrome-c-oxidase: 45 U/g NCP (n, 112-351 U/g NCP)]. Southern blot and long-range PCR were negative, but real-time PCR was indicative of an MDS. Other clinically affected family members have not consented to genetic investigation so far. The amount of residual mtDNA was reduced to 9% of normal. For funding reasons, neither sequencing of any of the nine candidate genes nor exome sequencing could be offered.
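The report does not detail how the 9% residual mtDNA figure was derived. A common way to express mtDNA content from real-time PCR is relative quantification of a mitochondrial target against a nuclear reference gene using the 2^-ΔΔCt method; the sketch below illustrates that general approach with invented Ct values and should not be read as the authors' actual protocol.

```python
def relative_mtdna_content(ct_mt_patient: float, ct_nuc_patient: float,
                           ct_mt_control: float, ct_nuc_control: float) -> float:
    """Patient mtDNA content relative to control (fraction of normal) via the 2^-ddCt method."""
    d_ct_patient = ct_mt_patient - ct_nuc_patient
    d_ct_control = ct_mt_control - ct_nuc_control
    dd_ct = d_ct_patient - d_ct_control
    return 2.0 ** (-dd_ct)


# Hypothetical Ct values: the patient's mitochondrial target amplifies ~3.5 cycles later
# than expected relative to the nuclear reference, i.e. roughly 9% of normal copy number.
fraction = relative_mtdna_content(ct_mt_patient=23.5, ct_nuc_patient=20.0,
                                  ct_mt_control=20.0, ct_nuc_control=20.0)
print(f"Residual mtDNA: {fraction * 100:.0f}% of normal")
```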
Discussion
MDSs are characterised by severe reduction of the mtDNA copy number. 1 Residual mtDNA copy levels may be as low as 1-2% of normal. In up to 50% of the cases, mtDNA depletion may be caused by mutations in at least nine different genes (POLG1, PEO1, RRM2B, SUCLG1, SUCLA2, DGUOK, MPV17, TK2, TYMP). 1-6 Among these, mtDNA depletion is most commonly caused by mutations in the POLG1 gene, encoding the catalytic subunit of DNA polymerase-gamma, the only polymerase to replicate mtDNA. 1,7 In the majority of cases with MDS, however, the underlying genetic defect remains undetected. The phenotypic expression of mutations in these 9 genes is quite variable. 2 Usually, infants or children are affected and the cerebrum, the liver, or the skeletal muscles are predominantly involved, alone or in combination (myopathic, encephalo-myopathic, or hepato-cerebral MDS). 2,8,9 Patients with POLG1 mutations manifest as non-syndromic hepato-cerebral depletion syndrome, Alpers-Huttenlocher syndrome (AHS), infantile onset spinocerebellar ataxia (IOSCA), non-syndromic encephalomyopathic depletion syndrome, or Leigh syndrome. 9 Whether the MDS in the presented patient was due to long-term treatment with infliximab remains speculative, but previous studies have shown that infliximab at least induces apoptosis of monocytes. 10 An argument against the presence of an MDS in the presented case is that the family history suggests a maternal trait of inheritance, whereas all other MDSs reported so far follow an autosomal recessive trait.
Contrary to previous descriptions, the phenotype in the presented patient was mild. Why mtDNA depletion in the presented patient resulted in only minor abnormalities remains speculative, but it could be explained by a low rate of depletion in tissues other than the muscle or by a mutation in a gene not so far reported in association with MDS. Probably, mtDNA depletion was absent or only mild in tissues other than the muscle. Possibly, the mild phenotype at onset will turn into a more severe presentation during the disease course, but given the stable presentation during the first year after onset and the benign course in other family members, such a scenario is rather unlikely. Assuming that any of the nine genes so far associated with MDS was mutated in the presented patient, the ones most likely involved are the TK2, SUCLA2, RRM2B, SUCLG1, POLG, or TYMP genes, since they have been found most frequently associated with myopathy. 2,8 Among these, mutations in TYMP were found in adult patients with MNGIE. 11 Twinkle mutations have also been associated with adult-onset MDS. 5 Exercise intolerance has been reported as a phenotypic feature of DGUOK mutations, and migraine or migraine-like headache as a feature of PEO1 mutations. 12,13 Hypersomnia or other sleep disorders and myalgias have not been reported in association with MDS. It must be admitted, however, that the phenotype of the presented patient fitted none of those previously described in MDS. However, muscle biopsy was taken from the sternocleidomastoid muscle and, though it showed some changes, myopathological alterations may vary considerably between muscles.
Whether the C-cell carcinoma was causally related to the MDS remains speculative, but preliminary observations in our cohort of patients with mitochondrial disorders suggest that the prevalence of malignancies is increased among these patients. It also remains unclear whether endometriosis was causally related to the mitochondrial disorder. Arguments for a causal relation are that mtDNA polymorphisms have been made responsible for the development of endometriosis and that mitochondrial biomarkers are increased in eutopic endometriosis. 14,15 It also remains unclear whether Crohn's disease was causally related to the MDS. Since some of the mitochondrial disorders go along with non-specific colitis, 16 it is possible that the gastrointestinal problem was actually a manifestation of the MDS, but the lack of reports on an association between MDS and enteritis and the unequivocal histological abnormalities argue against such an assumption.
Conclusions
In conclusion, MDS may start in adulthood, may be associated with a mild phenotype, and may not significantly progress during the first year after onset. In an adult patient with severe tiredness, exercise intolerance, hypersomnia, and a family history positive for mitochondrial disorder, MDS should be considered. Endometriosis, colitis, or malignancy may be a phenotypic feature of a mitochondrial disorder.
Climate Change and Public Health Policy: Translating the Science
Public health authorities are required to prepare for future threats and need predictions of the likely impact of climate change on public health risks. They may get overwhelmed by the volume of heterogeneous information in scientific articles and risk relying purely on the public opinion articles which focus mainly on global warming trends, and leave out many other relevant factors. In the current paper, we discuss various scientific approaches investigating climate change and its possible impact on public health and discuss their different roles and functions in unraveling the complexity of the subject. It is not our objective to review the available literature or to make predictions for certain diseases or countries, but rather to evaluate the applicability of scientific research articles on climate change to evidence-based public health decisions. In the context of mosquito borne diseases, we identify common pitfalls to watch out for when assessing scientific research on the impact of climate change on human health. We aim to provide guidance through the plethora of scientific papers and views on the impact of climate change on human health to those new to the subject, as well as to remind public health experts of its multifactorial and multidisciplinary character.
Introduction
The females of most mosquitoes need to feed on the blood of living vertebrates, including humans, to reproduce successfully, and in the process may transmit pathogens (viruses, bacteria or parasites), thereby serving as vectors of these diseases. Mosquito-borne diseases are especially important vector-borne diseases, with malaria, dengue and yellow fever alone affecting millions of people every year (Table 1) [1,2].
(Footnotes to Table 1, which is not reproduced here in full: clinical cases only [3]; roughly three quarters of dengue infections are inapparent; WHO Factsheet No. 100, May 2013; [4]; [5]; incubation depends on the Plasmodium species and may last up to a year when untreated, with Plasmodium vivax showing a prolonged incubation period of up to 5 years; [6]; humans are a theoretical reservoir of low epidemiological significance; veterinary vaccines are available for horses.)
Worldwide, the most important mosquito vector species are members of three genera, Aedes, Culex and Anopheles, each having its own set of climatic and environmental drivers and constraints. Not only can a species occur within its natural geographical range (past or present) and dispersal potential (indigenous species), but it can also occur outside this range through various introduction routes (exotic species). An exotic (or invasive) species may subsequently establish and spread causing economic or environmental impact or harm to human health [7]. The yellow fever mosquito, Aedes aegypti, for example is indigenous to Africa, but is an exotic species in The Netherlands where it has been introduced, but cannot establish due to prevailing climatic conditions [8], and an invasive mosquito in Madeira where it has been established since 2002, and was a vector for a dengue epidemic in 2012 [9].
An established vector population alone does not pose an immediate risk without another critical element: the presence of the pathogen itself. Depending on the pathogen, an infection can cause disease in humans, livestock and wildlife. Some mosquito borne pathogens are maintained in a human-vector-human cycle, whilst the lifecycles of others also involve (wild) reservoir host animals. Here, humans frequently act as dead end hosts from which pathogens are not transmitted to other susceptible hosts [10] (Figure 1). Whether actual transmission of mosquito borne pathogens can occur at a specific time and place depends on the vector capacity, a parameter combining the intrinsic (genetic and physiological) ability of the mosquito species present to transmit the pathogen (vector competence) with the other factors affecting transmission, such as mosquito population and host reservoir density, host preferences, and biting rates [11]. As long ago as 1966, Pavlovsky proposed the concept of focality or nidality of diseases, in which pathogens are associated with specific landscapes. The dimensions of possible transmission thus largely depend on the vector bionomics and the pathogen's natural history [12], including its vulnerable primary hosts, either humans or other vertebrates.
Climate changes may affect both these dimensions, and therefore the spatio-temporal distribution of possible transmission. Using scientific methods, knowledge of these complex systems needs to be accumulated and organized in the form of testable explanations and predictions to support public health policymakers in making decisions on the way forward. However, the nature of scientific information, which is often extensive, complex, uncertain and ambiguous, also complicates the development of evidence-based health policies [13,14] by decision makers who may not be fully trained in the disciplines needed to evaluate the evidence.
In the following sections, we discuss the advantages, disadvantages, pitfalls and lessons, of the different scientific approaches for the development of public health strategies to prepare for climate change. We examine four topics, namely: global warming versus global change, models versus the real world, retrospective versus prospective studies, and generalized versus contextual approaches. We identify a number of lessons to be learned and by doing so hope to support public health policymakers in making decisions their future strategies.
Global Warming versus Global Change
Mosquito vectors, like all cold-blooded animals, are obviously sensitive to (changes in) temperature and, provided the temperature does not exceed a lethal threshold, rising temperatures usually mean more rapid development of the mosquito and replication rate of the pathogen in the mosquito or extrinsic incubation period. Consequently, the majority of climate change research has focused on the assessments of the effect of increasing temperatures on pathogen transmission through the modulation of life history traits of the vector [15]. However, this global warming is telling only part of the story of climate change. Climate change also entails changes in rainfall and wind patterns and consequently relative humidity, rising sea levels and increasing UV radiation [16][17][18]. Consequently, climate change impacts land use and land cover, crop suitability and agricultural patterns and human behavior. The spatial and temporal heterogeneity of climate change may generate novel climates and environments in many geographic regions [19]. Due to their dependence for reproduction on water bodies, mosquitoes (and the diseases they transmit) are particularly sensitive to changes in quantity and quality of these aquatic breeding sites due to for example increased precipitation or drought [20]. Populations of hosts, competitors and natural enemies of vectors are also affected [18,21]. While the outer limits of a species distribution are largely determined by climatic or environmental factors, biotic interactions have also been shown to play an important role in shaping populations within those extents [22]. Dispersal, via human facilitated invasion, is an additional factor; even if conditions are ideal species may not occur, simply because they have not reached the place [23].
A number of adaptations to the effects of climate change can be anticipated. The introduction of green (vegetation) and blue (water) infrastructure in cities to alleviate urban heat islands [24], and the construction of water retention and storage facilities to mitigate the impact of changing precipitation intensities and frequencies [25], are examples of adaptation at the community level that are likely to affect urban mosquito populations. On a more individual level, people might either spend more time outdoors in the countryside or in air-conditioned locations, thereby affecting their possible exposure to mosquito bites and potentially to pathogens [26].
Undoubtedly, both the incidence and geographical distribution of vector borne diseases are expected to change as a general result of direct and indirect climate change [22]. However, global changes in land use, trade and travel patterns, leisure time, urbanisation, and standard of living play an important role in the distribution of vectors, reservoirs and pathogens, and consequently in the emergence of vector borne diseases [12,[27][28][29][30][31][32]. The vulnerability to outbreaks differs between human populations [33,34]. Whether a mosquito borne disease will actually emerge in a suitable particular place at a particular time, will also largely depend on the array of interventions that can be applied to interrupt disease transmission or reduce disease burden, by personal protection, vaccines or curative medicine (Table 1), or vector management. While vaccines or curative medicine, when available, may prevent or restrict the disease burden in people, zoonotic (within animal hosts) pathogen transmission is often not stopped.
Lesson 1: Over-emphasizing the importance of climate in disease emergence is misleading [33]. Climate change may affect disease burden directly and indirectly in many ways, but needs to be considered alongside a number of other factors, which is a complex process.
Models versus Real World
Predicting the impact of climate change on public health in general and mosquito borne diseases in particular is challenging. In part this is due to the uncertainty in predicting the multifactorial local effects of global changes in climate [35]. But even when assuming a certain scenario as a fact, huge uncertainties about its effect on health remain. To comprehend the complex relationships between climate change and mosquito borne diseases they have been broken down into components. Data on the vector bionomics and pathogen kinetics are predominantly acquired using basic biological observational and experimental research. The latter studies are invaluable for examining the validity of hypotheses under controlled conditions. The validity of laboratory data in the outside world is questionable as responses to varying conditions or key parameters can be missed. Recent studies, however, are increasingly considering the impacts of the changing environment on mosquito bionomics [36,37].
To understand complex systems, to study the effects of different components, and to make predictions about their behaviour, mathematical modelling techniques are used. These models can be broadly divided into two categories: mechanistic and statistical. Reiner et al. defined mechanistic models as those in which the equations, formulae or computer simulations are based on assumptions about the processes or proximate causal mechanisms under consideration [38]. In the course of developing a mechanistic model, the various steps in disease transmission are described. For lack of other data, laboratory results on, for example, critical thresholds and development rates form the input data for process based predictions on distributions of vectors and diseases. A widely used measure of the probability of establishment of a vector borne disease is the basic reproduction number, also referred to as R0 [31]. The value of R0 depends, among other factors, on parameters such as the rate of development of the pathogen, the number of times the vector bites the hosts, the survival rate of the vectors and the population abundance and seasonality of both vectors and hosts. Analogous to laboratory studies, mechanistic models examine the validity of hypotheses under controlled mathematical conditions. These models are developed with specific aims outlined in a certain context and with an underlying set of rules and assumptions. Knowledge of the context and limitations of the models is essential when interpreting the results. Unfortunately, conclusions are often drawn outside the validity range of the assumptions-by the researcher themselves in some cases-but more often by the reader.
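To make the mechanistic approach tangible, the sketch below implements one commonly cited Ross-Macdonald-type expression for R0; the text above lists the relevant parameters but does not specify which formulation the cited studies use, so both the formula choice and the parameter values here are illustrative assumptions, not results from this article.

```python
import math


def ross_macdonald_r0(m: float, a: float, b: float, c: float,
                      p: float, n: float, r: float) -> float:
    """Classical Ross-Macdonald-type basic reproduction number.

    m: vectors per host, a: bites per vector per day, b: vector-to-host transmission
    probability, c: host-to-vector transmission probability, p: daily vector survival,
    n: extrinsic incubation period (days), r: host recovery rate (per day).
    """
    return (m * a ** 2 * b * c * p ** n) / (-r * math.log(p))


# Illustrative parameter set; rising temperature typically shortens the extrinsic incubation
# period n (and may alter a and p), which is how warming enters this kind of calculation.
baseline = ross_macdonald_r0(m=10, a=0.3, b=0.5, c=0.5, p=0.9, n=12, r=0.1)
warmer = ross_macdonald_r0(m=10, a=0.3, b=0.5, c=0.5, p=0.9, n=8, r=0.1)
print(f"R0 baseline = {baseline:.2f}, R0 with shorter EIP = {warmer:.2f}")
```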
Statistical models are commonly used to identify constraints and drivers, including climate, that are currently associated with a vector and/or disease distribution or spread, but without identifying the underlying process [39]. Roger et al. [22] state: "Many (species) distribution modelling approaches involve a sort of data mining to match patterns of points in a database to sets of environmental and other predictors. It is a truism that any pattern can be matched as long as sufficient variables, thresholds and break points are allowed in the models." The fact that underlying processes are not identified hampers the design of intervention measures based on the model results. They are, however, the only technique available if, as is often the case, sufficient details of transmission dynamics are not available, and they do provide estimates of their accuracy. Successful outbreak predictions have been made using this approach [40,41]. Note that such models need to be evaluated very carefully, as it is often not clear how the model outputs actually relate to real disease risk: as pointed out earlier, the presence of a vector does not guarantee a disease will occur, nor does the presence of a disease always mean it will persist or spread.
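As a contrast with the mechanistic sketch above, a statistical distribution model simply relates presence/absence records to environmental predictors. The minimal logistic-regression example below uses synthetic data and generic predictors (temperature, rainfall); it is only meant to show the shape of such an analysis, not to reproduce any of the models cited here, and its caveat echoes the text: predicted vector presence is not disease risk.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic predictors for 200 sites: mean temperature (C) and annual rainfall (mm)
temperature = rng.uniform(5, 30, 200)
rainfall = rng.uniform(200, 2000, 200)
X = np.column_stack([temperature, rainfall])

# Synthetic "observed" vector presence: more likely at warm, wet sites
logit = 0.3 * (temperature - 18) + 0.002 * (rainfall - 800)
presence = rng.random(200) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, presence)

# Predicted probability of vector presence at a new site (no claim about disease risk)
new_site = np.array([[24.0, 1500.0]])
print(f"Predicted probability of vector presence: {model.predict_proba(new_site)[0, 1]:.2f}")
```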
Future threats of vector borne diseases can also be assessed by combining both modelling approaches [42,43]. Among others, Hartemink [42] demonstrated, using this integrated method, that the risk of emergence of vector borne zoonoses displays high spatial and temporal variation due to the interplay of multiple factors. Uncertainty and sensitivity analyses are used to investigate the accuracy and robustness of a study when the study includes some form of model-based and/or stochastic approach. Another approach is to assume certain rather simple constraints on a species' performance, without specifying in advance which are the most important variables [22].
Most of current models belong to the reductive analysis approach, aiming to describe patterns and understand how various processes interplay. The output of the model largely depends on the scope (minimise the error or maximise the information), assumptions and the choice of the input data [22,44]. The fact that different models produce different outputs is obviously challenging for developing evidence-based policies.
Lesson 2: Understanding the conditions and assumptions that underlie both laboratory and modelled data are essential when interpreting the outcome; extrapolation to the real world often lies outside the validity range of the research.
Retrospective versus Prospective Studies
An important classifier of investigations into the relation between climate change and vector borne diseases is whether a study looks back (retrospective) or forward (prospective) in time. In the former, explanatory variables from the past are analysed to explain the current situation, events or processes, whilst in the latter, these explanatory factors drivers and constraints, (which themselves may be projected) are used to predict the disease in the future.
Retrospective studies have the advantage that factors are examined in relation to an outcome that is established at the start of the study, when the process is stabilised or in equilibrium, and always statistically bounded [45]. Retrospective researchers, however, have to be alert to potential sources of bias, changes in relationships according to the predictor levels (non-linearity), and the presence of confounding or proxy variables. Bias is a systematic error that leads to an incorrect estimate of effect or association. The non-linearity of the covariates means that the relationship between the outcome and the variable could change according to the level of the variable, so that if we predict the outcome at values out of the variable range used for the retrospective study the analysis is statistically invalid or at least affected by ignorance (that is a component of uncertainty). A confounding or proxy variable is one that, for example, varies in the same way as the real cause of a change in a disease, but is not actually the cause. Indeed, climate may be a confounding variable for any increase of mosquito borne disease incidence or outbreak that has occurred during recent decades (see Box). Establishing actual causality is often overlooked in the popular debate contributing to the general perception that climate change affects vector borne disease emergence. Beware that rare events with major impact, such as a disease outbreak, are frequently rationalized by hindsight, as if it could have been expected [45].
Prospective studies make use of the important drivers and constraints, climatic or not, identified in retrospective studies and then utilize them to estimate risk of occurrence in the future. In such risk assessment, the likelihood that a specified negative event will occur is determined [46]. It indicates the presence of preconditions for an outbreak, but it does not tell you whether it actually will occur, and may not specify its timing, size, location and spatial spread. The latter is still not well understood or appreciated by public health experts, which results in criticism if outbreaks happen in areas of low likelihood or nothing happens in areas with high likelihood. Moreover, if a prediction of elevated risk triggers effective timely and preventive intervention, the outbreak does not happen and the public wonders why the resources were expended to control something that did not occur. On the other hand, science may have provided answers to questions not asked by public health experts. While academics produce maps with spatial distribution of accurate risk outputs of mathematical modelling, the public health experts may want a simple description of the risk: present or absent, or, if there is a risk, they need to know the best and worst case scenarios rather than a prediction of the most likely risk levels. There is a difference, of course, between predicting an increase in an old or endemic problem and the emergence of a new problem. The latter is inherently more uncertain.
Lesson 3: There is a fundamental difference between knowing the past and predicting the future.
Lesson 4: High impact rare events occur beyond the realms of normal expectations.
Lesson 5: Researchers may not appreciate what information the Public Health professionals actually need to make appropriate decisions, and better communication between the two groups is badly needed.
Generalized versus Contextual Approach
From the preceding discussions it is clear that making generalized statements beyond "climate change is a driver for mosquito borne diseases" is misguided. Even that simple statement only holds true when it embraces both agonistic and antagonistic drivers that favour or hinder vector borne diseases, respectively. The spatial as well as temporal variation in the occurrence of a certain mosquito borne disease is linked to geographic differences in its constraints. Whilst in the tropics conditions might change beyond the tolerance levels of any given mosquito species, this is not expected to occur in temperate Europe, where it is assumed that rising temperatures will consistently speed up the development of mosquito vectors and of the pathogens they carry [18]. Changes in relative humidity in temperate zones may have minimal effects on the adult population of species that predominantly inhabit wetlands, as humid shelters should remain relatively abundant. However, adult mosquitoes inhabiting urban areas by breeding in artificial containers are likely to be affected negatively by a decreasing relative humidity as a result of the development of urban heat islands [16]. Besides changes in precipitation, rising sea levels will affect the availability and suitability of mosquito breeding sites. Saline and brackish water bodies in coastal areas will increase [47], probably at the expense of fresh water bodies and their aquatic inhabitants. Such changes will, however, create more breeding sites for salinophilic breeding species such as the Dutch malaria mosquito An. atroparvus. Rising sea levels could also potentially reverse the historical reduction in the habitat of this species that occurred in the 1900s [48]. While only examples of the effect of climate change on the vector were given, the same principles hold for hosts and pathogens, and thus for mosquito borne diseases.
As the word itself indicates, climate change refers to variables changing relative to the norm and not to absolute values. An outbreak is also, by definition, an anomaly in the expected number of cases per year. Because of alleviation of the prevailing constraints, outbreaks are noticed in places where they normally do not occur. Finding the causes of mosquito borne disease emergence dominates the research into climate change and vector borne diseases, effectively ignoring the fact that, on many other occasions when conditions were apparently similar, diseases did not emerge: a pitfall of retrospective studies as mentioned earlier. Where and when emergence happens depends on whether the limiting factor(s) was removed by climatic, environmental, socio-economic or other change. Cataloging all possible evidence of a past or predicted impact on any mosquito borne disease, sometime, somewhere, without putting it in perspective does not bring public health authorities closer to knowing what to do to be prepared for the future. Many such reviews nevertheless exist [20,28,[49][50][51][52][53][54]. There is a need for an approach that brings us beyond the recognition and appreciation of the complexity of climate change and public health, and provides contextual guidance.
Lesson 6: A contextual approach is needed to understand climate and human health and to develop public health strategies.
The Way Forward
Public health authorities are required to prepare for future threats and need predictions of the likely impact of changing climate on public health risks. Usually they focus their preparations on their own geographical region. The threat level of a mosquito borne disease for a particular country can be categorized into one of five contexts, based on the presence or absence of three facets important for public health: human cases, pathogens and vectors (Table 2) [55]. Mosquito borne diseases pose no risk when neither the pathogen nor vector is present (context 5). Here, future establishment of the vector after introduction is the main concern and information on the potential impact of climate change on the disease can be ignored by the national health authority. However, if a disease is endemic in a country (context 1), climate change may affect the size of the established vector population or the rate of transmission from vectors to hosts, and consequentially the incidence of human cases. In countries where an established vector population of a vector borne disease is present (contexts 1-3), the current climatic and environmental conditions are obviously suitable for the vector, but whether the population size will increase or decrease in response to climate change depends on the species-specific requirements. If no established vector population is present (yet) (contexts 4-5), the current climatic and environmental conditions may either be unsuitable or be suitable, but the vector has yet to reach the region. It is important to keep in mind that the context of a particular mosquito borne disease can differ between countries; West Nile fever belongs to context 3 in the Netherlands, but to context 1 in Italy. (Note to Table 2: European mosquitoes are potentially competent to transmit Japanese encephalitis virus [64], but this has not been validated.)
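Such a contextual triage can be expressed compactly in code. The sketch below, in Python, assumes a simplified mapping of the three facets (human cases, pathogen circulation, established vector) onto the five contexts; only contexts 1, 3 and 5 are anchored by the examples in the text, so the assignments for contexts 2 and 4 are illustrative assumptions and the authoritative definitions remain those of Table 2 [55].

```python
# Illustrative sketch of the five-context triage; the mapping for contexts 2 and 4
# is an assumption, while contexts 1, 3 and 5 follow the examples in the text.
from dataclasses import dataclass

@dataclass
class DiseaseStatus:
    human_cases: bool   # autochthonous human cases occur
    pathogen: bool      # pathogen circulating locally (hosts and/or vectors)
    vector: bool        # established, competent vector population present

def threat_context(s: DiseaseStatus) -> int:
    """Return a threat context from 1 (endemic) to 5 (neither pathogen nor vector)."""
    if s.human_cases and s.pathogen and s.vector:
        return 1   # endemic, e.g., West Nile fever in Italy
    if s.pathogen and s.vector:
        return 2   # assumed: local circulation without reported human cases
    if s.vector:
        return 3   # competent vector only, e.g., West Nile fever in the Netherlands
    if s.pathogen or s.human_cases:
        return 4   # assumed: imported pathogen or cases, no established vector
    return 5       # neither pathogen nor vector present

# Example: West Nile fever in the Netherlands (vector present, no local circulation)
print(threat_context(DiseaseStatus(human_cases=False, pathogen=False, vector=True)))  # 3
```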
Factors determining the success of a novel or exotic species in a new location differ between the sequential phases, namely the introduction, establishment, and geographic spread. For mosquitoes, arrival in a new area can occur through active migration or passive transport mediated by wind, or by trade and travel movements. In the last thirty years, global trade and travel has increased exponentially, resulting in an increase of the risks of the arrival of novel mosquito species [43], for example, by the trade of used tires or in airplanes [44]. Establishment and subsequent geographic spread of a species depends on whether the introduced species encounters suitable climatic and ecological conditions at the new location. The chances for this to happen, in general, are considered rather small [45], except for a few notorious invasive mosquito species such as Ae. aegypti and Ae. albopictus [65].
While the establishment and spread of a mosquito species after its introduction to a new area are transient processes, the effect of the arrival of a novel pathogen in an area with an established vector population can be very rapid and substantial, as seen with the West Nile virus introduction in the USA [46]. However, since introductions are often only noticed when causing a significant disease burden, no real insight exists on how often pathogens arrive but do not become established or do not cause an outbreak of disease. As with vectors, the chances of successful establishment and spread of pathogens are also considered to be rather small, considering their dependence on enabling hosts, vectors and environmental and climatic conditions. The resulting disease burden can, moreover, differ largely between human populations with different socio-economic statuses [66].
For a single country, basic information on the vector and host populations present and on potentially circulating pathogens is required to assess the contexts of mosquito borne diseases. Subsequently, based on their context, the best surveillance strategy can be developed for each mosquito borne disease, depending on the potential prospects for action and the cost/benefit analysis. In a time of grim governmental budget cuts, focusing on interventions that achieve the largest health gain per euro spent is ever more necessary. For some mosquito borne diseases, taking action (e.g., preventing the establishment of invasive mosquitoes) even when as yet there is no disease, might be more effective than waiting until the disease appears [55]. Once a decision to intervene to decrease the disease burden (or that of a group/category of diseases) or to mitigate a threat has been made, surveillance should be implemented in order to measure the effectiveness of the intervention [8].
The described contextual surveillance for vector borne disease can easily be extended with veterinary and wildlife health alongside public health, making it applicable in a One Health approach. Surveillance programs providing knowledge on the current distributions of the disease, the pathogen and the vector are vital in the development of appropriate One Health policies.
Conclusions
Disease emergence in its own right is inherently complex and uncertain, let alone the impact of climate change on this. While the recognition of the complexity of climate change and disease emergence is important, public health authorities need to focus on developing and maintaining contextual surveillance programs.
Climate change, entailing increasing temperature, changes in patterns of precipitation and other meteorological factors, and rises in the number of extreme events, is expected to affect the emergence, incidence and geographical distribution of vector borne diseases. Predictions on the direction and size of these effects are needed to inform an optimal public health response. Complex transmission pathways, typical for vector borne diseases, as well as regional climate change projections are often insufficiently understood and largely uncertain, hence any combination can produce misleading results [67]. In addition, many factors other than climate have been identified as having a significant effect on whether vector borne diseases emerge or not [12,[27][28][29][30][31]54]: these include the increase in urbanization, trade and travel, socio-economic and environmental changes as well as distinct differences in vulnerabilities between human populations [34,[68][69][70][71][72]. Various lists of a(nta)gonistic drivers for emergence of infectious diseases, including climate change exist. While the majority of recent publications acknowledge the overwhelming complexities, unknowns and uncertainties of the relation between climate change and vector borne disease, the generalized idea that the transmission of vector borne diseases is favoured by climate change remains the most widely held working hypothesis and dominates the public debate. By identifying major pitfalls of this working hypothesis and highlighting specific lessons to be learned, we hope to support public health advisors in the development of local evidence-based public health strategies.
Box: Mosquito-borne diseases in Europe In the decades following the eradication of malaria in the 1960s and 70s, mosquito borne diseases were not considered important problems for public health in Europe. In this period, only incidental cases and infrequent outbreaks of West Nile fever had been observed, except in Italy [73], and the disease burden of the other mosquito borne diseases was also low [74]. Endemic malaria cases only occurred in six countries of the WHO European region (Azerbaijan, Georgia, Kyrgyzstan, Tajikistan, Turkey, and Uzbekistan).
However, in recent decades the situation with mosquito borne diseases seems to have changed. Between 1996 and 1998, serious outbreaks of West Nile virus occurred in Romania, Russia, Italy and Israel. Since then, WNV circulation has been reported from multiple countries inside the European Union (EU), including France, Greece, Italy, Portugal, Romania, Serbia and Spain, and from close neighbours: Turkey, Russia, Morocco and Israel [73]. Further, in 2007 more than 200 people fell ill in the first European outbreak of chikungunya, in Italy [75]. Subsequently, in 2010 the first autochthonous cases of chikungunya and/or dengue were detected in Southern France and Croatia, transmitted by the Asian tiger mosquito, Ae. albopictus. In 2012, Madeira experienced a significant dengue outbreak vectored by Ae. aegypti [9]. Between 2009 and 2012, Greece also experienced several clusters of locally acquired malaria, predominantly caused by the recent steady introduction of non-symptomatic, gametocyte-carrying immigrant workers infecting the local malaria mosquito population [76]. In 2009, the first known human cases of Usutu virus infections were described in Italy [77]. In 2008, considering the mosquito borne viruses occurring in Europe since the 20th century, Hubalek [31,56] listed eight viruses that are proven pathogenic to humans, belonging to three families: Togaviridae (Sindbis, chikungunya), Flaviviridae (West Nile, dengue) and Bunyaviridae (Batai, Tahyna, Snowshoe hare, Inkoo). The recent reports of Usutu (Flaviviridae) infections in humans [77] bring that number to nine (Table 2).
Correlation of these recent events with the increasing recognition of the process of climate change may have fuelled speculations about causality and implications for the future [31]. Convincing evidence, however, exists that non-climatic processes were the main determinants of these outbreaks. Major changes in the global distribution of chikungunya, for example, have been shown in part to be due to a genetic adaptation of the virus. While its principal vector used to be the yellow fever mosquito, a recent mutation has meant that it is effectively transmitted by the Asian tiger mosquito, a more temperate species [78]. This virus quickly reached Italy through a travelling viraemic patient. There it found a highly effective resident vector population and infected many people [75]. The latter also holds for the recent autochthonous cases of dengue and chikungunya in France and Croatia. The current occurrence of multiple autochthonous cases of vivax malaria in Greece is probably caused by a steady introduction of non-symptomatic, gametocyte-carrying immigrant workers infecting the local malaria mosquito population [76].
Such events imply that these vector borne disease outbreaks occurred because of the arrival of a pathogen in a location suitable for transmission. The chance of such introductions has increased due to the recent enormous growth in trade and travel movements [79], which has increased the vulnerability of Western Europe [80] to introductions from abroad. Since climate change does not seem to play a major role in the introduction of these pathogens, the question arises as to whether it has (retrospectively) or will (prospectively) facilitate the establishment or spread of diseases. In Western Europe, temperature constraints on the life history traits of mosquitoes and the pathogens they carry may be relaxed and the transmission season may be extended, which may have increased, and may in future increase, the suitability of a region to support some mosquito borne diseases [80]. In the light of the many changes occurring, new players may also surface in mosquito borne disease epidemiology, as illustrated by the human-induced expanded distribution of An. plumbeus in Belgium [61].
Conflicts of Interest
The authors declare no conflict of interest.
Schrödinger’s Worker: Are They Positive or Negative for SARS-CoV-2?
In these days of 2020, tests for the diagnosis of SARS-CoV-2, and their use in the context of health surveillance of workers, are becoming popular. Nevertheless, their sensitivity and specificity could vary on the basis of the type of test used and on the moment of infection of the subject tested. The aim of this viewpoint paper is to make employers, workers, occupational physicians, and public health specialists think about the limits of diagnostic tests currently available, and the possible implication related to the erroneous and incautious assignment of “immunity passports” or “risk-free certificates” to workers during screening campaigns in workplaces.
Since SARS-CoV-2 made its appearance and began to spread worldwide, causing hundreds of thousands of deaths, many authors envisaged the occupational health and safety implications of the epidemic [1,2]. Many workers run a high risk of becoming infected, as well as of being carriers themselves. This is especially true for some occupational groups, such as healthcare workers. In fact, being on the front line, they have a high risk of SARS-CoV-2 infection, of developing COVID-19, and of being a source of contagion for their patients, their colleagues, and their relatives [2]. Many possible factors contribute to COVID-19 clusters among healthcare workers: insufficient or incorrect use of personal protective equipment; close or direct contact with SARS-CoV-2 positive patients; working in confined indoor spaces; and shared canteen space, staff accommodation, transport, and/or social activities [3].
However, healthcare is not the only sector where workers present a high risk of becoming infected: firefighters, policemen, cleaners, workers employed in care for the elderly, childcare, or education, public transport and taxi drivers, and many others run a significant risk [4]. Factors that increase the risk of infection in the abovementioned and in many other work activities have been described: working where interpersonal contact between workers is unavoidable; lack of compliance with preventive measures; sharing the same office space or canteen space, the same production line, dressing rooms, and accommodation (sometimes overfull and with poor hygiene conditions); meetings in overcrowded rooms; shared transport; working with clients; lack of facilities to wash hands; and language challenges among migrant workers [3].
During the lockdown phase in some countries, only workers engaged in dealing with the COVID-19 emergency and those engaged in essential services were highly exposed; however, after the lockdown phase, millions of workers came back to work.
Employers, and occupational health services, therefore, face three major dilemmas. First: how reliably can we determine whether a worker is currently positive for SARS-CoV-2, and whether they can be a source of contagion for their own colleagues and for the people with whom they come in contact with while on duty? Second: how reliably can we establish whether a worker was previously infected and be assured that they will be indefinitely immune? Third, and most important: how can we reduce the risk of a worker getting the SARS-CoV-2 infection at work?
The aim of this viewpoint is to make the parties involved (employers, workers, occupational physicians, public health specialists) think about the limits of diagnostic tests currently available, and on the possible implication related to the erroneous, and incautious, assignment of "immunity passports" or "risk-free certificates" to workers during screening campaigns in workplaces.
To illustrate quantum mechanics, the Nobel Prize physicist Erwin Schrödinger used a thought experiment, best known as the Schrödinger's cat paradox. He invited his colleagues to imagine a cat inside a box. The box would also enclose a mechanism that would release a poison and kill the feline with a 50% probability. It is impossible to know if the cat is dead or alive before opening the box. The feline is in an indeterminate state, both dead and alive. If we do not make a measurement, multiple realities may exist at the same time.
Although the probability of SARS-CoV-2 infection is considerably lower than 50%, the worker is in an indeterminate situation as well, which probably would depend on the prevalence of infection among the general population, the pandemic's curve at the time of observation, and the geographical area. For instance, two surveys conducted in Iceland in March on the general population revealed that, respectively, 0.8% (95% confidence interval (CI), 0.6-1.0%) and 0.6% (95% CI 0.3-0.9%) tested positive for SARS-CoV-2 infection [5]. In a pilot survey conducted in the United Kingdom, from 11th to 24th May, 0.24% (95% CI 0.11-0.46%) of the community population was estimated to be infected [6].
Therefore, to answer the first question "is it possible to determine with certainty whether a worker is currently positive or negative", all we have to do is open the "box". However, unlike the mentioned paradox, the judgment on whether "Schrödinger's worker" is positive or negative would vary by time and space, and the safety probability threshold one would establish to make this choice.
In fact, the sensitivity and specificity of the currently available tests to measure specific IgM and IgG could vary substantially by type of serological test and week of infection [7], and they need further validation to verify their reliability and accuracy [8]. Moreover, a recent meta-analysis showed a pooled sensitivity of 64.8% (95% CI 54.5-74.0%) in rapid testing for SARS-CoV-2, based on the reverse-transcription polymerase chain reaction (RT-PCR) in respiratory samples [9]. This means that the false negative rate (the proportion of workers who are infected but wrongly test negative on the rapid test) would be, in absolute terms, unacceptably high to allow a safe admission to the workplace on the basis of one single rapid SARS-CoV-2 test. Indeed, incorrectly giving a worker a "certificate of negativity", without further precautions, would expose their colleagues and/or the public to a risk of contagion. Besides a low sensitivity [10], nasopharyngeal swabs often require some time for processing and issuing the results, depending on the number of tests being conducted. New, faster tests and different biological matrices (e.g., saliva) are currently being tested. However, sensitivity and specificity calculations in asymptomatic carriers are not yet available [11].
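To make the false negative argument concrete, a short worked example, sketched in Python: the sensitivity is the pooled value cited above [9], while the specificity and the workplace prevalence are illustrative assumptions chosen only for the sake of the calculation.

```python
# Worked example: screening 10,000 workers with a single rapid test.
# Sensitivity is the pooled value from the cited meta-analysis; specificity and
# prevalence are assumptions for illustration, not values from this paper.
sensitivity = 0.648
specificity = 0.98      # assumed
prevalence = 0.01       # assumed, of the order of the surveys cited earlier
n_workers = 10_000

infected = n_workers * prevalence
healthy = n_workers - infected

false_negatives = infected * (1 - sensitivity)   # infected workers cleared as "negative"
true_negatives = healthy * specificity
negative_predictive_value = true_negatives / (true_negatives + false_negatives)

print(f"Expected false negatives: {false_negatives:.0f} of {infected:.0f} infected workers")
print(f"Negative predictive value: {negative_predictive_value:.4f}")
# Roughly 1 in 3 infected workers would be missed, which is why a single negative
# rapid test cannot support a "certificate of negativity".
```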
To open the "box", through the use of the tests currently available in addition to collecting a detailed medical history (e.g., history of fever or respiratory symptoms in the previous weeks, new loss of taste or smell, cohabitation or contacts with subjects tested positive for SARS-CoV-2), might help to answer the second question: has the worker been infected in the past? However, again, the response we can get comes with a margin of error, due to different specificity by type of test used and little evidence about the time of seroconversion; also, evidence about protection from future infection and time of possible protective immunity is still missing [7, 8,10]. Furthermore, recent studies suggested a rapid decay of anti-SARS-CoV-2 IgG levels in asymptomatic subjects [12] and in those with history of mild Covid-19 [13].
The third question is: how can we deal with the challenge of protecting workers from SARS-CoV-2 infection during their work activity?
Primary prevention in the workplace could be the way forward: information and worker training about sources of exposure to SARS-CoV-2 and the hazards associated with exposure to the virus; use of appropriate workplace protocols to prevent or reduce the likelihood of exposure (social distancing, clean and dirty pathways, hand and workplace hygiene, practice of good respiratory etiquette, and procedures to isolate and identify suspected cases); use of adequate personal protective equipment, especially during procedures implying high risk (e.g., for health care workers); use of COVID-19 self-assessment tools; and identification of workers who may be at increased susceptibility [14][15][16]. In this regard, as the authors of a recent review and meta-analysis on this topic suggest, both FFP2 and surgical or 12-16-layer cotton masks are more protective than single-layer masks, especially in health care settings. They also suggest that physical distancing greater than 1 m and the use of eye protection may reduce the risk of infection. However, these interventions (even when appropriately used) seem not to be associated with complete protection; other measures, such as hand hygiene, are needed in addition to the use of personal protective equipment [17].
These measures, many of which are also valid outside the workplace, could be accompanied by the above-mentioned tests, even if, currently, they do not match the requirements for a good screening test that can "certify negativity". Moreover, there is no evidence to support the assignment of "immunity passports" or "risk-free certificates" to people who previously received a positive test [8,18]. These subjects, incorrectly assuming that they are immune to a second infection, might ignore public health advice and increase the risk of continued transmission in the workplace and at home [8].
Therefore, until a safe and effective vaccine becomes available, we will have "to open the box" and to shield its contents as safely as reasonably possible, if we want to preserve the health and wellbeing of workers and the community.
Conflicts of Interest:
The authors declare no conflict of interest.
Optimization of Isocyanate Content in PF/pMDI Adhesive for the Production of High-Performing Particleboards
Due to the fact that impregnation with fire retardant usually reduces the strength of the produced particleboards, this research was carried out to investigate whether it is possible to use phenol–formaldehyde (PF) resin modified using various amounts (0%, 5%, 10%, 15%, and 20%) of polymeric 4,4′-methylene diphenyl diisocyanate (pMDI) for this purpose. The need to optimize the addition of pMDI is particularly important due to health and environmental aspects and high price. Furthermore, the curing process of hybrid resins is still not fully explained, especially in the case of small loadings. Manufactured particleboards differed in the share of impregnated particles (50% and 100%). The mixture of potassium carbonate and urea was used as the impregnating solution. Based on the outcomes of hybrid resins properties, it was found that the addition of pMDI leads to the increase in solid content, pH, and viscosity of the mixtures, to the improvement in resin reactivity determined using differential scanning calorimetry and to the decrease in thermal stability in the cured state evaluated using thermogravimetric analysis. Moreover, particleboard property results have shown that using impregnated particles (both 50% and 100%) decreased the strength of manufactured boards bonded using neat PF resin. However, the introduction of pMDI allowed us to compensate for the negative impact of fire-retardant-treated wood and it was found that the optimal loading of pMDI for the board containing 50% of impregnated particles is 5% and for board made entirely of treated wood it is 10%.
Introduction
Over the years, adhesives have played and still continue to play a huge role in the effective use of wood resources in various forms [1]. For this purpose, the binding properties of some natural polymers have been known and used for centuries in the production of wooden objects. However, the rapid development of plastics chemistry in the 20th century resulted in the development of more cost-effective synthetic adhesives that behaved better in humid conditions [2,3]. The most commonly used thermosetting, formaldehyde-based, polycondensation adhesives include urea-formaldehyde (UF) resin, melamine-urea-formaldehyde (MUF) resin and phenol-formaldehyde (PF) resin, and their development is the key factor for the production of functional wood-based products [4]. These adhesives gradually took over the market and are currently estimated to account for approx. 95% of all adhesives applied in the wood-based materials industry [5,6]. Therefore, considering their dominant market share and the growing requirements in terms of the performance of wood-based materials, studies on the possibility of their enhancement are still the subject of numerous scientific works conducted worldwide [7][8][9].
Research on formaldehyde-based adhesives modified using pMDI (polymeric 4,4′-methylene diphenyl diisocyanate) has shown some promising results in recent years.
Isocyanates as wood adhesives stand out because of their high adhesion and cohesion strength. Moreover, the created bonds are characterized by significant hardness and ductility in the temperature range from around −140 to 130 °C, as well as increased resistance to chemicals, biological factors, and aging processes [10]. Because of its high reactivity and fast cure, pMDI has been extensively researched for its use as a UF and PF resin modifier, which resulted in the development of hybrid resins such as PF/pMDI and UF/pMDI, which have been used in studies on wood-based materials before [11,12]. The results of a few selected ones are summarized in Table 1.
Table 1. Examples of studies on using pMDI-modified hybrid resins for the production of wood-based materials.
As shown in Table 1, the introduction of pMDI to both UF and PF adhesive significantly improved the properties of commonly used wood-based materials such as particleboard, oriented strand board (OSB), and plywood. Interestingly, pMDI-modified adhesives were also found to be suitable for manufacturing composites from particles other than wood, in the case of which stronger adhesion is needed to meet the standardized strength requirements. Studies have shown that hybrid adhesives worked well as binding agents for the production of boards made of, e.g., rape straw [21], sunflower husks [22], pine bark [23], waste corrugated paper [24], and kraft paper [25]. The outcomes have shown that, due to the application of PF/pMDI resin, the produced composites were characterized by satisfactory mechanical and physical features, which allowed them to fulfill the requirements of the relevant standards.
An example of the potential use of PF/pMDI resin, due to its high adhesion strength, may be the production of particleboards with increased fire resistance. In general, the low fire safety of wood-based materials is usually considered a disadvantage, especially when used in interior or structural applications [26]. The presence of unprotected wooden elements may contribute to the spread of a fire and, therefore, as stated by Harada et al. [27], the following characteristics are expected from wood-based materials when used in constructions: they do not break down or deform in the presence of fire, the temperature of the unexposed side does not exceed the burning temperature of the entire material, and the structure of the material does not crack or become otherwise damaged because of the fire outside the building. To achieve these qualities, it is necessary to implement fire protection, for example, by treating the particles using fire retardants [28]. It protects the material throughout its entire cross-section, not only on the surface as in the case of fire-resistant coating application [29,30]. However, the impregnating solutions can not only influence the curing process of adhesives but also change the condition of the wood surface. According to Ayrilmis et al. [31], using impregnated wood can lead to the deterioration of the resultant particleboard due to the change in the pH of wood, the reduction in the number of hydroxyl groups available for bonding, and the mechanical interference of salt. The outcomes of research regarding the production of boards from particles impregnated with monoammonium phosphate, diammonium phosphate [32,33], Burnblock® [34], borax, boric acid [35], and boric-acid-disodium-octaborate [36] confirm their negative influence. Therefore, it is necessary to develop a method of modifying resins to produce impregnated boards characterized by good strength properties.
Taking into account that PF/pMDI resin has not been used before for gluing impregnated wood, a project aimed at determining its suitability for the production of particleboard with increased fire resistance was started. This is the continuation of preliminary research that confirmed the effectiveness of fire protection by a mixture of potassium carbonate and urea and indicated favorable results in the case of modification of PF using 20% of pMDI [37]. However, considering the high cost of pMDI [38], the adverse impact of isocyanates on human health [39] and environmental aspects [40], there is a great need to optimize the loading of the modifier. Therefore, we decided to conduct research aimed at determining the effect of various amounts of pMDI on the properties of hybrid resins, which still has not been fully explained and described in the literature, especially in the case of small loadings, and to determine the optimal amount of pMDI that should be applied in the production of particleboards characterized by increased resistance to fire.
Materials
Phenol-formaldehyde (PF) resin was provided by Silekol (Kędzierzyn-Koźle, Poland). Properties of the applied unmodified phenolic resin can be found in Table 2. pMDI purchased from Bayer AG (Leverkusen, Germany) was characterized by the following parameters: a solid content of 100%, a hydrolysable chlorine content of 96 mg/kg, an NCO content of 32%, and a viscosity of 220 mPa·s. Reagents needed to prepare the impregnating solution, such as potassium carbonate (99% pure, Altair Chimica, Saline, Italy) and urea (analytical purity, Chempur, Piekary Śląskie, Poland), were used as received without further purification. Pine (Pinus sylvestris L.) wood particles used in industrial conditions to produce the middle layer of three-layer particleboard were supplied by a local manufacturer of wood-based materials.
Determination of Properties of PF and PF/pMDI Adhesives
The adhesive formulations in this research included pure PF resin and PF resin with 5%, 10%, 15%, and 20% pMDI. After introducing the assumed modifier amount, the mixture was stirred manually until proper homogenization was obtained.
The hybrid resins differing in the amount of pMDI were evaluated in terms of parameters commonly used to assess the quality of resins in the wood-based materials industry [41]. The solid content was determined according to EN 827 [42]. The viscosity of the mixtures was determined using a Brookfield DV-II+Pro viscometer (Middleboro, MA, USA). pH measurements were carried out using a Testo 206 pH-meter (Pruszków, Poland). Each test was repeated three times for every variant.
Thermogravimetric analysis (TGA) was used to study the thermal decomposition of the used formulations. The samples of 10 ± 0.2 mg were heated in the temperature range of 30-900 °C at a rate of 10 °C/min using a Netzsch TG209 F1 apparatus (Selb, Germany). The measurements were realized using Al2O3 crucibles in an inert atmosphere (nitrogen). The first derivative of the mass degradation (DTG) curves was calculated with reference to the obtained mass vs. temperature curves.
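As a brief illustration of the DTG calculation described here, a minimal sketch in Python using numerical differentiation; the temperature grid and the mass signal are placeholders rather than measured data.

```python
# Sketch: derivative thermogravimetry (DTG) as the first derivative of mass with
# respect to temperature. The TG signal below is a placeholder, not measured data.
import numpy as np

temperature = np.linspace(30, 900, 871)                        # deg C
mass = 10.0 - 7.0 / (1.0 + np.exp(-(temperature - 400) / 40))  # placeholder TG curve, mg

dtg = np.gradient(mass, temperature)        # d(mass)/dT, mg per deg C
peak_T = temperature[np.argmin(dtg)]        # temperature of the maximum mass-loss rate
print(f"Maximum mass-loss rate at about {peak_T:.0f} deg C")
```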
The thermal behavior during curing of the PF and PF/pMDI resins was analyzed using the differential scanning calorimetry (DSC) method. Samples of 20 ± 0.5 mg were placed in hermetic high-pressure crucibles and heated from −10 °C to 280 °C at a rate of 10 °C/min. A Netzsch DSC 214 Nevio apparatus (Selb, Germany) and an inert nitrogen atmosphere were used.
Impregnation of Wood Particles and Particleboard Manufacturing
To determine the size distribution within the mixture of wood particles, the fractional composition was determined. The particles were passed three times through a set of flat sieves with the following square perforations: 0.315, 1.0, 1.5, 2.0, 2.5, 4.0, 5.0, and 6.3 mm. Based on the outcomes, it was found that the dimensions of the vast majority of particles were in the range between 1.0 and 2.5 mm.
The oven-dried particles, characterized by a moisture content (MC) of 3 ± 2%, were placed in 45 L HDPE containers filled with impregnating solution and soaked for 60 min at atmospheric pressure. A 30% aqueous solution of a mixture of potassium carbonate and urea in a weight ratio of 2:1 was applied for fire protection. Potassium carbonate was selected due to numerous favorable features such as excellent fire retarding efficiency, low iron corrosiveness, fungicidal properties, and no harm to human and animal health [43,44]. Moreover, urea included in the formulation of the fire retardant has a synergistic effect on the kinetics of thermal decomposition and, thus, the effectiveness of fire protection [45]. After the assumed immersion time, particles were left to drain for 15 min and then dried in a laboratory oven to reach an MC of 4 ± 2%.
Particleboard made of unimpregnated particles, bonded with pure PF adhesive, was used as the reference variant for this research. Furthermore, the range of experimental variants assumed that four single-layer particleboards were produced for each adhesive formulation containing 0%, 5%, 10%, 15%, and 20% pMDI. Two of them were manufactured using 50% of impregnated particles and the remaining two were made entirely of impregnated particles, which allowed us to determine the optimal adhesive composition for two different shares of impregnated wood. The following parameters were used to produce the boards: assumed thickness of 10 mm, assumed density of 650 kg/m3, dimensions of 670 × 580 mm, gluing degree of 10%, pressing temperature of 180 °C, unit pressure of 2.5 N/mm2, and pressing time of 25 s/mm of the assumed board thickness. The appearance of the produced particleboards differing in the content of impregnated particles is shown in Figure 1.
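For orientation, a minimal sketch in Python of the furnish calculation implied by these pressing parameters; it assumes a simplified dry-mass balance and that the 10% gluing degree expresses resin solids relative to the oven-dry particle mass, which is an interpretation adopted only for this illustration.

```python
# Sketch: estimating the particle and resin-solids mass per board from the stated
# parameters. The definition of the gluing degree used here is an assumption.
target_density = 650            # kg/m^3
thickness = 0.010               # m
length, width = 0.670, 0.580    # m
gluing_degree = 0.10            # resin solids / oven-dry particle mass (assumed)

board_volume = length * width * thickness          # m^3
board_mass = target_density * board_volume         # kg, simplified dry-mass basis

particle_mass = board_mass / (1 + gluing_degree)   # kg of oven-dry particles
resin_solids = board_mass - particle_mass          # kg of resin solids

pressing_time = 25 * thickness * 1000              # s, at 25 s per mm of board thickness
print(f"board ≈ {board_mass:.2f} kg, particles ≈ {particle_mass:.2f} kg, "
      f"resin solids ≈ {resin_solids:.2f} kg, pressing time = {pressing_time:.0f} s")
```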
Determination of Particleboards Properties
Produced particleboards were conditioned for seven days at a relative humidity (RH) of 65 ± 5% and a temperature of 21 ± 2 °C prior to testing. Determinations of the mechanical and physical properties were performed in accordance with the relevant standards: density according to EN 323 [46], bending strength (MOR) and modulus of elasticity (MOE) according to EN 310 [47], internal bond (IB) according to EN 319 [48], internal bond after the boiling test (V100) according to EN 1087 [49], and thickness swelling (TS) after 24 h of soaking according to EN 317 [50]. The tests were carried out using 12 samples from each variant.
Statistical Analysis
Analysis of variance (ANOVA) was conducted to analyze the results of the particleboard properties. Moreover, to distinguish homogeneous groups and assess the significance of observed changes, an HSD Tukey test at the significance level of α = 0.05 was performed using Statistica 13.3 software.
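A minimal sketch, in Python with statsmodels, of the kind of two-factor analysis described here (factor A: share of impregnated particles, factor B: pMDI loading); the data file and column names are placeholders, and the study itself used Statistica 13.3 rather than this code.

```python
# Sketch of a two-way ANOVA with interaction followed by Tukey's HSD, mirroring the
# analysis described in the text. The CSV file and column names are placeholders.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# One row per test specimen, e.g. columns: impregnated_share (0/50/100),
# pmdi (0/5/10/15/20) and internal_bond (N/mm^2).
df = pd.read_csv("particleboard_results.csv")   # placeholder file name

model = smf.ols("internal_bond ~ C(impregnated_share) * C(pmdi)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))          # SS, F and p-values for A, B and A x B

# Tukey HSD at alpha = 0.05 to identify homogeneous groups across all variants
variant = df["impregnated_share"].astype(str) + "/" + df["pmdi"].astype(str)
print(pairwise_tukeyhsd(df["internal_bond"], variant, alpha=0.05))
```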
Results and Discussion
The results of the resin property investigations are presented in Table 2. It was found that the modification of PF resin using pMDI led to an increase in the solid content, viscosity, and pH of the mixtures, and the obtained values were higher by 12%, 78%, and 8%, respectively, in the case of the variant containing the highest loading of pMDI. The increase in the viscosity of the adhesives, which is a crucial parameter that significantly affects the mechanical performance of manufactured particleboard [51], was probably caused by the increase in solid content, reactions between pMDI and water, and the formation of urethane linkages between the PF resin and pMDI [19,52]. This phenomenon is consistent with the observations of Zheng et al. [53], who stated that, in the case of hybrid PF/pMDI resins, the viscosity increases with the increasing content of isocyanate up to 50%.
The thermograms presented in Figure 2 illustrate the effect of the addition of various amounts of pMDI on the condensation process of PF resin, which is represented by an exothermic region with a single peak. Moreover, Table 3 summarizes the parameters characterizing the curing process of the modified adhesives, i.e., onset temperature (Tonset), peak temperature (Tp), final temperature (Tendset), and total heat (ΔH). It was found that the condensation of hybrid PF/pMDI resin is a complex process, where the content of introduced pMDI plays a significant role. As the amount of pMDI increased, clear changes in the reaction kinetics were observed, confirming the modifier's significant impact on the behavior of PF resin. The results have shown that, as the amount of pMDI increased, the exothermic peak temperature gradually decreased. In the case of the maximum loading of isocyanate, the Tp was reduced by 32 °C compared to pure PF resin. Moreover, an increase in the values of the total heat released during the process was also observed. In the case of pure PF resin, the ΔH value was 178.1 J/g and the addition of pMDI in the amount of up to 10% caused a gradual increase in the enthalpy value up to 278 J/g. However, a further increase in the loading of pMDI reduced the enthalpy to 190.2 J/g, which is still a higher value than for pure PF resin. The decrease in Tp together with the increased heat release during the exothermic transition of the resin indicate the increased reactivity of the modified mixtures and the possibility of lowering the curing temperature of the experimental hybrid adhesives. From a practical point of view, this modification may result in lower energy consumption during the technological process and, as a result, could contribute to reducing production costs and a more efficient use of resources [54]. Moreover, considering that the reaction enthalpy is directly related to the conversion degree of the resin, it can be concluded that adding pMDI to PF resin allowed a higher conversion degree [55,56]. The observed improvement in the reactivity of PF resin, which is shown mainly by the reduction in Tp, results from the fact that, in addition to the typical slow condensation reaction leading to the formation of methylene bonds (-CH2-), a second reaction takes place (almost simultaneously) as well. The reaction of the hydroxyl groups of PF resin with the isocyanate groups of pMDI leads to the formation of durable urethane linkages (-NH-CO-O-) [57][58][59][60]. The reaction of pMDI with the functional groups of PF resin is probably hindered by the considerable amount of water introduced with the PF resin and released during its condensation. As a result, it also disrupts the curing process. In turn, at the higher concentration of pMDI (20%), the formation of urethane linkages intensified, which led to the formation of a larger number of urethane bonds and changed the reaction kinetics. This is also shown by the further decrease in Tp and the increase in both Tonset and Tendset, at which the exothermic transition begins and ends. Mixing pMDI with PF resin produced an oil-water emulsion, where pMDI created a dispersed phase of vitrified urea/urethane/biuret structures which accelerated the curing process of the PF/pMDI mixture [53,58].
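To make the reported curing parameters more tangible, a minimal sketch in Python of how Tp and the total heat of reaction can be obtained from a DSC exotherm by locating the peak and integrating the heat flow over time; the signal below is a placeholder, not a measured curve, and a real analysis would also include baseline subtraction.

```python
# Sketch: extracting the peak temperature and total heat of an exotherm from DSC data.
# The heat-flow trace is a placeholder; real data would require baseline correction.
import numpy as np

heating_rate = 10.0 / 60.0                             # deg C per second (10 deg C/min)
temperature = np.arange(-10.0, 280.0, 0.5)              # deg C
time = (temperature - temperature[0]) / heating_rate    # s
heat_flow = 0.8 * np.exp(-((temperature - 140.0) / 20.0) ** 2)  # W/g, placeholder exotherm

T_p = temperature[np.argmax(heat_flow)]                 # exothermic peak temperature
delta_H = np.trapz(heat_flow, time)                     # J/g, heat flow integrated over time

print(f"Tp ≈ {T_p:.1f} deg C, total heat ≈ {delta_H:.0f} J/g")
```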
Despite the recorded improvement in the reactivity of PF/pMDI hybrid resin, we also noticed that the introduction of pMDI in the amount ranging from 5 to 15% resulted in a gradual decrease in the resin's thermal stability, especially at temperatures exceeding 300 °C (Figure 3, Table 4).
Overall, the thermal degradation process of PF resin includes three main degradation stages: post-curing, thermal reforming, and ring stripping [60,61]. Based on the analysis of the TG/DTG curves, three mass-loss stages can be noted: a first in the temperature range up to 270 °C, a second between 270 and 450 °C, and a third between 450 and 550 °C. In the first stage of pyrolysis, usually occurring below 270 °C, a mass loss mainly related to the release of free formaldehyde, phenol, oligomer, and water introduced with the PF resin and resulting from the progressing cross-linking reaction can be observed. In the second stage, the recorded mass loss resulted from the decomposition of bridged methylene (thermal reforming). Furthermore, when the temperature exceeded 450 °C (third stage), the mass loss was caused by the degradation of methylene bonds, resulting in carbon monoxide and methane volatilization. Additionally, phenol was further degraded to the carbon structure at a temperature of about 500-550 °C [62][63][64], and the aromatic structure was degraded as well. Based on the analysis of the TG/DTG curves and the data presented in Table 4, it can be observed that, in the first stage of pyrolysis, the addition of pMDI to PF resin in the amounts of 5 and 10% caused a slight shift of the maximum mass loss rate towards lower temperatures, while the residual mass (RM) values remained similar. The increase in the loading of pMDI to 15% resulted in a slight increase in mass loss by approx. 9% compared to the reference sample. More significant mass losses observed in the case of these variants were noted only in the next stage of degradation. It seemed that a small content of pMDI (5-15%) caused a gradual loss of thermal stability of the hybrid resin, which is shown by the decrease in the residual mass values. It should also be noted that, as the loading of pMDI increased, the maximum mass loss rate shifted towards higher temperatures. However, the thermal stability of the resin modified using 20% of pMDI seemed different. In this case, the addition of pMDI increased the thermal stability of the resin in the temperature range up to approx. 350 °C, allowed us to obtain a residual mass at a level comparable to or slightly higher than that noted for pure PF resin, and shifted the maximum mass loss rate towards higher temperatures. In the next stages, the mixture containing 20% of pMDI also experienced a significant mass loss; however, the residual mass was slightly higher than that observed for resins enhanced with smaller amounts of pMDI, but still lower than in the case of pure PF resin. It is worth emphasizing that the TG/DTG results are consistent with the outcomes obtained by Liu et al. [60], who showed that the mixture of PF resin with pMDI in a 1:1 weight ratio was characterized by better thermal properties than pure PF resin at temperatures below 500 °C.
Reduced thermal stability of PF/pMDI resins containing small amounts of pMDI (5-15%) was probably related to the fact that, with such small loadings of isocyanate, the dominant reaction was slow condensation of PF resin [52]. The second reaction was between pMDI and the hydroxyl of the methylol group (-CH2OH) of resol PF, with the formation of a relatively small number of urethane linkages. As already mentioned, pMDI introduced into the system does not react with water to the same extent as with the hydroxyl groups of PF resin, which, in turn, may interfere with the formation of the PF/pMDI structure and can lead to a reduction in thermal stability [52,58,65]. Furthermore, the addition of 20% pMDI to PF resin contributed to the formation of a larger number of more durable urethane bonds and, as a result, thermal stability at temperatures of up to 350 °C was improved. When the temperature rose, the thermal degradation process started for both PF resin and pMDI, which led to decreased thermal stability of PF/pMDI. According to Xu et al. [65], who obtained similar results in the case of MUF/pMDI resin, this effect can be attributed to the increased rigidity of the polymer molecules using aromatic pMDI with the bulky double phenyl ring.
Figure 4 presents the results of the manufactured particleboards' density and mechanical properties. The strength of the produced materials is especially important because, even if a board achieves effective protection against fire, the deterioration caused by the treatment can still be a limiting factor in some applications, e.g., for structural purposes [66]. Based on the outcomes of the density measurements, it was found that both wood impregnation and the introduction of pMDI to the adhesive mixtures did not affect the obtained results. The average values ranged from 631.8 to 671.2 kg/m3, relatively close to the assumed one. Therefore, the effect observed by Du and Song [32], consisting of an increase in the density of the board due to the use of fire-retardant-treated particles, did not occur in this study. The comparison of variants bonded using unmodified PF resin showed that impregnation negatively impacted the strength of the particleboards and, moreover, the greater the share of impregnated particles was, the more pronounced the deterioration was. The results of bending strength, modulus of elasticity, and internal bond were decreased by 18%, 11%, and 22% when the share of impregnated particles was 50%, and by 27%, 21%, and 43% in the case of boards made entirely of impregnated wood. Studies have also shown that the introduction of pMDI to PF resin contributed to the improvement in the mechanical characteristics of the particleboards. The improvement in adhesion strength probably resulted from the formation of crosslinking and/or linear structures (urea/biuret/dimer/trimer), which was caused by the reactions between pMDI and water stored in the middle lamellae of wood cell walls [67]. Furthermore, pMDI can contribute to the improvement in adhesion due to the formation of polyurethane structures resulting from the reactions with hydroxyl groups of polysaccharides and phenolic groups of lignin contained in the wood tissue [17]. As stated by Zheng et al. [53], the introduction of isocyanate to PF resin could enhance the morphology of the cured adhesive and, consequently, could toughen the resultant bond lines, which may also have a favorable effect on the strength of the boards. The statistical analysis showed that, to achieve results as good as in the case of the reference variant, the amounts of 5% and 10% should be applied for boards containing 50% and 100% of impregnated particles, respectively. Considering that in a previously conducted study only the addition of 20% was used [37], this research showed that reducing this loading by 75% or 50% is possible, depending on the share of treated particles.
Optimization of the amount of introduced pMDI is crucial due to the health risks, environmental aspects, and high price [10]. Many diisocyanates such as, for example, methylene diphenyl diisocyanate (MDI) and toluene diisocyanate (TDI) are classified as substances suspected of causing cancer (H351). Moreover, Husskonen et al. [68] stated that diisocyanates generally act as eye, respiratory, and skin irritants. The estimated number of incidents of occupational asthma related to their use in the European Union ranges from 2350 to 10,150 cases each year, and the production of adhesives is identified as a particularly exposed branch of industry [69]. On the industrial scale, diisocyanates are usually produced from petroleum resources and, in most cases, they are obtained by reacting a primary amine with highly toxic phosgene [70,71]. The exposure of the natural environment to isocyanates can lead to the pollution of water [72], soil [73], and air [74]. Therefore, the content of pMDI in hybrid resins should be adjusted to be as low as possible to decrease the negative impact and reduce production costs.
Figure 5 presents the results of thickness swelling and internal bond after boiling, indicating the water resistance of the manufactured particleboards. Taking into account the potential structural application of the produced materials, it is essential to investigate their behavior under conditions of constant or periodic exposure to water, which usually results in the creation of internal stresses and deterioration in the strength of adhesive bonds [75,76]. In the case of boards bonded using pure PF resin, the results have shown that adding fire-retardant-treated particles in the amount of 50% did not affect the results of thickness swelling. However, when the share of impregnated particles increased to 100%, the manufactured particleboards demonstrated lower swelling values. According to Jayamani et al. [77], impregnation of wood with potassium carbonate reduces its hydrophilicity due to ongoing reactions between the fire retardant and the hydroxyl groups of wood and changes in the orientation polarization [78]. Furthermore, the reason for such an effect could be a partial degradation of hemicelluloses, which was observed before for wood treated using potassium carbonate as well [77]. On the other hand, in the case of boards bonded using unmodified PF resin, the results of the internal bond after boiling have shown a deterioration of 45% and 67% due to the use of impregnated particles in the amount of 50% and 100%, respectively. Based on the statistical analysis, it was found that, to produce boards with the same properties as the reference variant, the PF resin should be modified using 5% pMDI in the case of 50% impregnated particles and 10% pMDI in the case of 100% impregnated particles. Overall, the introduction of pMDI to the PF resin led to an improvement in the particleboards' resistance to water and, as the content of isocyanate increased, the observed improvement was more noticeable. According to Iswanto et al. [79], the favorable effect of pMDI on TS and V100 could result from its reaction with the hydroxyl groups of wood constituents, which reduces the accessibility of water. Moreover, the access of water could also be decreased by the improvement in the morphology of the bond lines, which prevents water penetration into the joint [53,80,81].
Based on the results of all properties, it was found that, to achieve properties as good as the reference variant, the optimal content of pMDI for 50% of impregnated particles is 5%, and for 100% of impregnated particles it is 10%. The boards produced this way were classified as P4 boards (load-bearing boards for use in dry conditions) according to EN 312 [82]. The outcomes have also shown that to achieve the properties of P5 boards (load-bearing boards for use in humid conditions), the loading of pMDI should be increased to 15% for both shares of impregnated particles.
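The classification decision above amounts to comparing each variant's measured properties with the requirement set of the target class. The sketch below illustrates that comparison in code; the numeric thresholds are placeholders chosen only for illustration and are not the normative EN 312 values, which also depend on board thickness.

```python
# Illustrative sketch of checking a board variant against class thresholds.
# NOTE: the threshold values below are placeholders, not the normative EN 312
# requirements, which vary with board thickness and must be taken from the standard.
from dataclasses import dataclass

@dataclass
class BoardProperties:
    bending_strength: float        # MOR, N/mm^2
    modulus_of_elasticity: float   # MOE, N/mm^2
    internal_bond: float           # IB, N/mm^2
    thickness_swelling_24h: float  # TS, %

REQUIREMENTS = {  # placeholder requirement sets for illustration only
    "P4": {"mor": 15.0, "moe": 2300.0, "ib": 0.35, "ts": 16.0},
    "P5": {"mor": 16.0, "moe": 2400.0, "ib": 0.45, "ts": 10.0},
}

def classify(board: BoardProperties) -> str:
    """Return the highest class whose placeholder thresholds are met."""
    best = "unclassified"
    for cls in ("P4", "P5"):
        req = REQUIREMENTS[cls]
        if (board.bending_strength >= req["mor"]
                and board.modulus_of_elasticity >= req["moe"]
                and board.internal_bond >= req["ib"]
                and board.thickness_swelling_24h <= req["ts"]):
            best = cls
    return best

# Hypothetical board made with 50% impregnated particles and 5% pMDI.
print(classify(BoardProperties(16.2, 2450.0, 0.48, 9.5)))  # 'P5' under these placeholders
```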
The results of ANOVA are presented in Table 5. Based on the statistical parameters, a significant influence of both the share of impregnated particles (factor A) and the amount of pMDI introduced to the PF resin (factor B) on the values of all investigated parameters was confirmed. This is evidenced by the high values of the sum of squares (SS), mean square (MS) and Fisher statistic (F), and the low p-values (p < 0.05). Moreover, there is a close interaction between factors A and B, which means that the effect of one factor depends on the level of the other factor; the SS, MS, and F values obtained for this interaction are also statistically significant.
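The analysis described above is a two-factor ANOVA with an interaction term. A minimal sketch of such an analysis is given below using statsmodels; the measurements are hypothetical stand-ins, since the paper's raw data are not reproduced here.

```python
# Two-factor ANOVA (share of impregnated particles x pMDI content) on
# hypothetical internal-bond measurements, for illustration only.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

data = pd.DataFrame({
    "share": ["0", "0", "50", "50", "100", "100"] * 3,   # factor A
    "pmdi":  ["0", "10", "0", "10", "0", "10"] * 3,      # factor B
    "ib":    [0.62, 0.66, 0.41, 0.58, 0.30, 0.52,
              0.60, 0.68, 0.43, 0.55, 0.33, 0.50,
              0.64, 0.65, 0.40, 0.57, 0.31, 0.54],       # hypothetical IB, N/mm^2
})

model = ols("ib ~ C(share) * C(pmdi)", data=data).fit()
anova_table = sm.stats.anova_lm(model, typ=2)  # sums of squares, F statistics, p-values
print(anova_table)
```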
Conclusions
The obtained results indicate that hybrid PF/pMDI resin is suitable for gluing wood particles impregnated with a mixture of potassium carbonate and urea in particleboard production.
During the course of the research, the following has been demonstrated:
• As the amount of pMDI increases, an increase in viscosity, solid content, and pH of the adhesive mixtures can be observed.
• The modification of PF resin using pMDI positively affects the condensation kinetics and reactivity of the adhesive mixtures, as evidenced by a decrease in peak temperature and an increase in total heat release.
• The addition of pMDI in the range of 5% to 15% leads to a gradual decrease in the thermal stability of hybrid resins at temperatures above 300 °C. Further increasing the pMDI content to 20% contributes to an enhancement in resin thermostability in the temperature range up to 350 °C.
• Impregnation of wood particles does not affect the density of the resultant particleboards, regardless of the share of fire-retardant-treated wood. However, their mechanical properties, such as bending strength, modulus of elasticity and internal bond, deteriorate in the case of boards bonded using neat PF resin.
• The enhancement of PF resin using pMDI results in improved mechanical properties and water resistance of these boards and allows the production of materials with properties as good as those of the untreated board.
• The loading of pMDI was optimized to be 5% in the case of particleboards containing 50% impregnated particles and 10% for boards produced using 100% impregnated wood. These materials can be classified as P4 particleboards; to upgrade the class to P5, the loading of pMDI should be increased to 15%.
Figure 2. DSC curves of PF resin with the addition of different amounts of pMDI.
Table 3. Results of DSC analysis of PF resin with the addition of various amounts of pMDI.
Table 5. The results of the ANOVA.
Gingival squamous cell carcinoma masquerading as periodontal pathology
Cancers of the oral cavity and pharynx are the most common types of head and neck cancer, with an annual global incidence estimated at approximately 480,000 cases. More than 90% of oral cancer cases occur in people older than 45 years. Lesions of the gingiva account for approximately 10% of oral squamous cell carcinomas and may clinically mimic benign or inflammatory lesions, leading to delay in definitive diagnosis and treatment with an unfavourable outcome. The present case is of a 60-year-old male patient who presented with a chronic periodontal lesion of 6 months' duration. However, a thorough history together with clinical and histopathological examination revealed the lesion to be gingival squamous cell carcinoma, and treatment was initiated.
Introduction
The incidence rate of oral cancers is around 4 per 100,000 population worldwide. Despite technological advances in treatment and diagnostic methods, oral cancer is usually associated with high mortality and morbidity rates, especially in developing countries. [1] To increase the duration of survival and the quality of life, concerted effort should be aimed at shortening the diagnostic process and the time elapsed between diagnosis and treatment to limit possible tumor progression. [1] More than 90% of oral cancers occur in people older than 45 years. Lesions of the gingiva account for approximately 10% of oral squamous cell carcinomas and may present clinically as an area of ulceration, an exophytic mass, or red/white speckled patches. Carcinoma of the gingiva constitutes an extremely important group of neoplasms, as the lesion frequently mimics reactive and inflammatory conditions affecting the periodontium, delaying the diagnosis and making the prognosis of the patient poorer. [2] Here we present an unusual case of squamous cell carcinoma in a 60-year-old man that clinically mimicked a periodontal lesion.
Case report
A 60-year-old male patient presented to the Department of Oral Pathology, SCB Dental College and Hospital with a chief complaint of pain in the gums in the right lower part of the mouth for the past 6 months. There was a history of localized pain and bleeding from the gingiva during brushing that was not relieved after taking medication. Past dental history also revealed that the patient had visited various dental and medical practitioners who had diagnosed the lesion as chronic periodontitis, and palliative care was provided for the same. As there was no improvement in his oral health, he visited our hospital. Detailed history taking revealed that the patient also had a habit of chewing tobacco (10-12 gutkha per day) and smoking bidi (10-15 bidi per day) for the last 15 years. Intraoral examination revealed a pinkish, rough, granular, tender growth measuring about 1 × 1 cm over the marginal and attached gingiva in relation to the right lower canine to the right lower first premolar (fig-1). Borders were raised and firm on palpation, with a firm, fixed, and broad base. There was no bleeding or exudation of pus on palpation, nor any associated tooth mobility. There was gingival recession in relation to 43. There was also an ulcerated mass in relation to the periapical region of 41 in the attached gingiva. The patient had poor oral hygiene, tobacco-stained teeth, halitosis, stains and calculus deposits. There was no regional lymphadenopathy. Routine hematological investigations were inconclusive. A lateral oblique radiograph of the patient revealed no significant underlying bone loss (fig-2). The chronicity of the lesion, the tobacco history and the atypical clinical findings raised a suspicion of malignancy. At this stage, a provisional diagnosis of GSCC (gingival squamous cell carcinoma) was made and, after obtaining the patient's consent, scrape cytology was immediately performed and the slides were stained with Leishman's stain. Interestingly, we found moderately dysplastic epithelial cells on the cytosmear (fig-3). The patient was advised to undergo incisional biopsy, following which a histopathological diagnosis of well-differentiated squamous cell carcinoma of the gingiva was made (fig-4). The patient was referred to Acharya Harihar Regional Cancer Centre (AHRCC) for further management, where he subsequently underwent surgery. The patient was free of disease at a one-year clinical follow-up.
Discussion
Squamous cell carcinoma (SCC) is defined as a malignant epithelial neoplasm exhibiting squamous differentiation characterized by the formation of keratin. [3] Studies have shown that the most common etiologic factors associated with SCC are smoking, which carries a 2-3 times greater risk than that in the general population (only 0.03% to begin with), smokeless tobacco use, which increases the general risk fourfold, and chewing pan (a combination which includes calcium hydroxide, areca nut, and betel leaf), which increases the general risk by a factor of 8. [4][9] SCC is often asymptomatic, and the initial symptoms are usually an intraoral mass or swelling, ulceration, pain, ill-fitting dentures, mobility of teeth, or unhealed extraction wounds. Gingival SCC frequently resembles inflammatory lesions affecting the periodontium, such as pyogenic granuloma, gingivitis and periodontitis. In early stages, the lesion may closely simulate advanced periodontitis, be associated with minimal pain, and lead to a diagnostic delay. [4] The present case clinically mimicked a periodontal lesion, and this could be ruled out only after histopathological examination.
The vast majority of gingival SCCs (75%) are diagnosed within 1.5 months, while in a minority of cases the total diagnostic time exceeds 1.5 months due to misdiagnosis. No significant differences in time to diagnosis were found when gingival cancers were compared with other oral tumors. However, by the time of diagnosis, gingival cancers had invaded adjacent structures more frequently than other oral cancers. [5] In our case, the patient was diagnosed after 6 months, but surprisingly without any significant bony involvement.
The fundamental factors for this diagnostic delay were negligence on the part of the patients themselves (primary delay) or the time taken by the primary physician to diagnose the condition (secondary delay). The majority of the lesions were observed by the patients themselves, indicating that oral self-examination has a role in the early detection of disease. However, a lack of awareness among patients as well as medical and dental practitioners resulted in undue delay in many cases. [6] Taking into account the fact that early diagnosis is a foremost step in reducing cancer mortality, efforts should be prioritized towards screening programmes designed to detect the disease during its asymptomatic phases. Educational interventions targeting the population, particularly focused on risk groups (self-examination) and on the professionals (the clinician's index of suspicion), should include a sound knowledge of the disease presentation, specifically at sites such as the floor of the mouth, the gingiva and the retromolar trigone. [7] The overall survival rate for GSCC is about 54%. [8]
Conclusion
Many times we are too quick to dismiss persistent lesions without further investigation, and doing so could result in failure to diagnose a potentially life-threatening disease such as squamous cell carcinoma, for which the worldwide survival rate is rather disappointing. The gingival location of oral squamous cell carcinoma (OSCC), coupled with early invasion of contiguous bone tissue, leads to an advanced stage at the time of diagnosis. This indicates that earlier referral for diagnosis and biopsy is absolutely necessary in persistent periodontal lesions.
Evaluation of Iron Deficiency Anemia Awareness in a Rural Area: Results from a Survey in a Mediterranean Region Rural Area of Turkey
INTRODUCTION: There are many causative agents of anemia. This study aimed to evaluate iron deficiency awareness among people in a rural area of Turkey's Mediterranean Region. MATERIAL and METHODS: 132 people participated in the survey. None of the participants were health workers. The survey was conducted in a rural area of Aksu, a district of Antalya Province, which lies in Turkey's Mediterranean Region. Seven questions were asked in the survey. RESULTS: 81 (≈61%) participants thought that the body's iron requirement could be obtained mostly from red meat, 42 (≈32%) thought it could be obtained mostly from vegetables, and 9 (≈7%) thought it could be obtained mostly from fruits. In addition, 94 (≈71%) participants knew that iron deficiency could cause anemia. DISCUSSION AND CONCLUSION: Iron deficiency anemia (IDA) is a public health problem. Knowing the level of social awareness about IDA may help in designing social programs to reduce iron deficiency anemia. Health education programmes should be conducted to increase the awareness of people in rural areas about consuming iron-rich foods.
INTRODUCTION
Anemia is one of the common findings in outpatient units. There are many causative agents of anemia. Iron deficiency, vitamin B12 deficiency, folic acid deficiency, gastrointestinal bleeding, malignancy, urinary bleeding, impaired gastrointestinal absorption, thalassemia and erythrocyte enzyme defects are some of the known causes of anemia.
The worldwide prevalence of anaemia has been estimated at 24.8%. A WHO (World Health Organization) scientific group reported a hemoglobin threshold for anemia of 12 g/dL in adult non-pregnant females in 1968 (2). It also reported a hemoglobin threshold for anemia of 13 g/dL in adult males (2). Lastly, the same WHO scientific group reported a hemoglobin threshold for anemia of 11 g/dL in pregnant adult females (2). This study aimed to evaluate iron deficiency awareness among people in a rural area of Turkey's Mediterranean Region.
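For reference, the 1968 WHO thresholds quoted above can be written as a small decision rule. The sketch below assumes adult patients and values expressed in g/dL; it is illustrative only and not a diagnostic tool.

```python
# Minimal sketch of the WHO haemoglobin thresholds for anemia in adults (g/dL).
def is_anemic(hemoglobin_g_dl: float, sex: str, pregnant: bool = False) -> bool:
    """Return True if the haemoglobin value falls below the WHO adult threshold."""
    if sex == "female":
        threshold = 11.0 if pregnant else 12.0
    elif sex == "male":
        threshold = 13.0
    else:
        raise ValueError("sex must be 'female' or 'male'")
    return hemoglobin_g_dl < threshold

print(is_anemic(11.4, "female"))        # True  (non-pregnant adult woman)
print(is_anemic(11.4, "female", True))  # False (pregnant woman)
print(is_anemic(12.5, "male"))          # True
```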
MATERIAL and METHODS
132 people participated in the survey. All of the participants were 18 years of age or older. None of the participants were health workers. The survey was conducted in a rural area of Aksu. Aksu is a district of Antalya Province, a province in southern Turkey that lies in Turkey's Mediterranean Region. Seven questions were asked in the survey. The survey questions and their answer choices are listed below. Each participant chose only one answer choice for each survey question.
RESULTS
94 (≈71%) participants knew that iron deficiency could cause anemia (Figure 1). 42 (≈32%) participants thought that the body's iron requirement could be obtained mostly from vegetables, 81 (≈61%) thought it could be obtained mostly from red meat, and 9 (≈7%) thought it could be obtained mostly from fruits (Figure 2). 126 (≈95%) participants thought that iron deficiency is found more frequently in women, while 6 (≈5%) thought it is found more frequently in men (Figure 3). 85 (≈65%) of all participants thought that all of the choices in the fourth question could be caused by anemia; the distribution of the participants' answers to question 4 is shown in Figure 4. 68 (≈52%) of all participants answered "Yes" to the fifth question and 64 (≈48%) answered "No" (Figure 5). 52 (≈39%) of all participants did not know about drugs containing iron (Figure 6). 45 (≈34%) participants thought that all of the choices in the seventh question could result in a lack of iron in the body; the distribution of the participants' answers to question 7 is shown in Figure 7.
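The approximate percentages reported above follow directly from the raw counts out of 132 respondents; the short sketch below reproduces the figures for the question on dietary iron sources.

```python
# Recompute the reported percentages for the dietary iron source question
# from the stated counts (132 respondents in total).
counts = {"red meat": 81, "vegetables": 42, "fruits": 9}
total = 132
for source, n in counts.items():
    print(f"{source}: {n}/{total} = {100 * n / total:.0f}%")
# red meat: 81/132 = 61%, vegetables: 42/132 = 32%, fruits: 9/132 = 7%
```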
DISCUSSION
Iron is an element involved in hemoglobin synthesis. Hemoglobin carries oxygen from the lungs to the tissues. Gastrointestinal bleeding, menstrual bleeding and nutritional deficiency may cause iron deficiency.

Malabsorption-associated diseases such as coeliac disease and Crohn's disease could also result in iron deficiency. On the other hand, low iron stores and iron deficiency anemia are not the same thing. Low iron stores can result in anemia, but low iron stores may also be seen when hemoglobin levels are within normal ranges.

In addition, pica may also be seen as an eating disorder in patients with IDA (3,4,5). Hussain T et al. reported that 77.9% of the women participants in their study were aware of the term iron deficiency anemia (6). In comparison, 94 (≈71%) participants in this study knew that iron deficiency could cause anemia (Figure 1); however, the participants in the study of Hussain T et al. were all women (6). In a study conducted to determine nutritional knowledge among adolescent girls, it was found that 30% of the girls knew about the sources of iron (7). By contrast, 81 (≈61%) participants in this study thought that the body's iron requirement could be obtained mostly from red meat (Figure 2), and ≈39% of all participants thought that it could be obtained mostly from vegetables or fruits (Figure 2). Most of the participants in this study were farmers who mostly grow vegetables in greenhouses and who eat vegetables more than meat, so their work and eating habits might have influenced their answers.
CONCLUSION
Iron deficiency anemia is a public health problem. Knowing the level of social awareness about IDA may help in designing social programs to reduce iron deficiency anemia. Health education programmes should be conducted to increase the awareness of people in rural areas about consuming iron-rich foods.
Antimicrobial Stewardship and Urinary Tract Infections
Urinary tract infections are the most common bacterial infections encountered in ambulatory and long-term care settings in the United States. Urine samples are the largest single category of specimens received by most microbiology laboratories and many such cultures are collected from patients who have no or questionable urinary symptoms. Unfortunately, antimicrobials are often prescribed inappropriately in such patients. Antimicrobial use, whether appropriate or inappropriate, is associated with the selection for antimicrobial-resistant organisms colonizing or infecting the urinary tract. Infections caused by antimicrobial-resistant organisms are associated with higher rates of treatment failures, prolonged hospitalizations, increased costs and mortality. Antimicrobial stewardship consists of avoidance of antimicrobials when appropriate and, when antimicrobials are indicated, use of strategies to optimize the selection, dosing, route of administration, duration and timing of antimicrobial therapy to maximize clinical cure while limiting the unintended consequences of antimicrobial use, including toxicity and selection of resistant microorganisms. This article reviews successful antimicrobial stewardship strategies in the diagnosis and treatment of urinary tract infections.
Introduction
Urinary tract infections (UTIs) are the most common bacterial infection encountered in ambulatory care settings in the United States, accounting for 8.6 million visits in 2007 [1,2]. Likewise, catheter-associated UTIs are the most common type of healthcare-associated infection reported to the National Healthcare Safety Network (NHSN) [3] and the most commonly treated infections in residents of long-term care facilities (LTCF) each year [4]. In a recent study by Sammon et al. [5], 10.8 million patients in the United States visited an Emergency Department (ED) for the treatment of a UTI between 2006 and 2009. The economic burden of utilizing the ED for the treatment of UTIs is estimated to be $2 billion US dollars annually, with mean charges being 10 times higher for patients who were treated and released from EDs ($2000 per visit) compared with treatment in an outpatient clinic ($200) [5].
Starting or reassessing antimicrobial prescriptions based on clinical context, symptomatology and susceptibility data are of paramount importance in all clinical situations and particularly when dealing with UTIs [6]. Urine samples are the largest single category of specimens received by most microbiology laboratories, and the majority of urine cultures do not yield clinically significant results [7]. The diagnosis of UTI is primarily based on signs and symptoms rather than isolated laboratory findings; importantly, bacteriuria is not a disease [8]. Thus, the collection and interpretation of urine cultures should be based on the clinical scenario. Cultures are not recommended for most women with acute uncomplicated cystitis because the microbiology and therapeutic approach for these women is consistent, and short course therapy is effective. However, for individuals with acute pyelonephritis or complicated UTI it is important to obtain a urine culture prior to empiric therapy in order to appropriately tailor the antimicrobial regimen if necessary. In patients with indwelling urinary catheters and residents of long term care facilities, populations with high prevalences of bacteriuria, decisions as to whether to obtain a urine culture and treat should be made carefully in order to avoid inappropriate antimicrobial treatment of bacteriuria that is not associated with symptoms.
Clinicians are frequently faced with the risk-assessment decision to balance the short-and long-term risks and benefits of prescribing an antimicrobial. The short-term risks for the individual prescriber and patient include failure to treat a blossoming symptomatic infection with potential clinical worsening of the patient. The long-term benefits of not prescribing antimicrobials in asymptomatic patients, such as avoiding the emergence of antimicrobial-resistant organisms and adverse events, including Clostridium difficile infection, are less tangible to the prescriber focused on the individual rather than the ecological effects with impact at the population level [9].
Antimicrobial resistance is a major public health problem worldwide, caused in part by the overuse of antimicrobials in clinical situations where they are not necessary or in prolonged courses of therapy when shorter durations are as effective [10][11][12]. Antimicrobial prescribing should be prudent, thoughtful and rational. The choice of antimicrobial agents should be individualized based on the patient's allergy history, local practice patterns, prevalence of resistance, availability, cost and compliance [13]. Unfortunately, in many parts of the world fluoroquinolones are the most commonly prescribed antimicrobials for uncomplicated UTIs even though narrower spectrum cost-effective alternatives are available; their use should be minimized considering their adverse ecologic effects [13]. Several studies in adults and children have demonstrated that short-term antimicrobial courses are as effective as longer ones for the treatment of uncomplicated UTIs and many complicated UTIs [14][15][16], although there still remain many questions as to the optimal duration of treatment for many types of complicated UTIs. It is the responsibility of all healthcare providers to practice antimicrobial stewardship and to avoid the unnecessary use of antimicrobials [17,18].
Definitions
For the purpose of this review, uncomplicated UTIs include episodes of acute cystitis and pyelonephritis occurring in healthy, non-pregnant, non-immunocompromised women with no history suggestive of an abnormal anatomical or functional urinary tract and no signs of systemic infection. All other UTIs are considered complicated [2]. The classification of UTIs according to the individual host and the severity of location has therapeutic implications in antimicrobial stewardship. International treatment guidelines recommend short-course regimens (single-dose to 3-day regimens) for acute uncomplicated cystitis [13]. Moreover, a Cochrane review [19] of 15 studies, including 1644 elderly women, concluded that short courses (3-6 days) are as effective as long courses (7-14 days) for treating uncomplicated cystitis in elderly women. Short-course regimens are also effective for treatment of cystitis in pregnant women, UTIs that are generally considered to be "complicated" [20]. Moreover, short-course regimens have been found to be effective for more complicated UTIs; thus, a 5-day course of levofloxacin 750 mg daily was found to be as effective as longer courses of therapy for the treatment of acute pyelonephritis and complicated UTI [15].
Asymptomatic bacteriuria (ASB) is defined as the presence of bacteriuria in urine revealed by quantitative culture in a sample taken from a patient without symptoms suggestive of lower or upper UTI. In women, the traditional quantitative definition for ASB is 10^5 cfu/mL in 2 consecutive voided urine specimens, and for ASB in men a voided urine specimen with 1 bacterial species isolated in a quantitative count of 10^5 cfu/mL [8]. In general, treatment of ASB is not indicated and may be associated with adverse outcomes, including subsequent antimicrobial resistance, C. difficile infection, adverse drug effects, and increased cost. However, ASB is associated with complications in some populations, and should therefore be screened for and treated if present in pregnancy and during interventions that compromise the urinary tract mucosa [21]. Despite the fact that UTI in patients with diabetes mellitus is associated with more severe and uncommon complications, screening for and treatment of ASB in diabetics are not recommended [22].
Microbiology
E. coli causes 75% to 95% of episodes of ASB, cystitis and pyelonephritis in young healthy women, with a minority of cases caused by other Enterobacteriaceae, other Gram-negative rods, Enterococcus faecalis, Staphylococcus species and Group B streptococcus. In men and women with "complicating factors", the causative uropathogens are more variable.
Does the Patient Have a UTI? Are Antimicrobials Necessary?
Antimicrobial stewardship opportunities are summarized in Table 1. In general, symptomatic UTIs should be treated with antimicrobials to alleviate symptoms and, in the case of more serious infection, to prevent complications, whereas ASB generally does not warrant treatment. Thus, the first question a clinician should ask when considering antimicrobial therapy is whether the patient is symptomatic and if such signs and symptoms are likely caused by bacteriuria. To assist clinicians in differentiating symptomatic UTIs from ASB, several reviews and consensus guidelines have been published that provide criteria for diagnosis and management of suspected uncomplicated UTIs and those occurring in acute and long term care facilities [2,8,[23][24][25][26]. Cystitis is usually manifested as dysuria with or without frequency, urgency, suprapubic pain or hematuria. Clinical signs of pyelonephritis include fever (temperature >38 °C), flank pain, chills, costo-vertebral angle tenderness, and nausea and vomiting [2]. In women, absence of vaginal symptoms in the setting of UTI symptoms increases the likelihood that a UTI is present [23]. It may be very difficult to determine whether symptoms are associated with bacteriuria in patients with altered sensation, such as those with spinal cord injury and neurogenic bladder [25]. Laboratory parameters aid in the diagnosis of UTI but are not helpful in isolation. Furthermore, results of voided midstream urine cultures should be interpreted with caution. In a recent study Hooton et al. [27] analyzed microbial species and colony counts in urine samples from 226 healthy women (aged 18-49 years) with symptoms of cystitis. They found that the detection of E. coli in voided midstream urine at colony counts as low as 10-10^2 cfu/mL was highly predictive of its presence in the bladder (positive predictive values of 93% for growth of ≥10^2 cfu/mL and 99% for ≥10^4 cfu/mL). On the other hand, growth of enterococcus species and Group B streptococci in voided urine was not predictive of their growth in bladder urine, suggesting that these organisms are likely to be urethral contaminants instead. The usefulness of voided urine cultures in other populations has not been studied.
Antimicrobial Selection
Antimicrobial resistance varies over time and by patient population in different geographic locations. If antimicrobial therapy is indicated for UTI, it is important to determine the correct drug, dose and duration of therapy. Sometimes, as with acute uncomplicated cystitis, the clinical presentation is suggestive of a predominant organism (E. coli) with predictable antimicrobial susceptibility, and narrow spectrum agents are appropriate for empiric treatment. However, in other situations, as with complicated UTI, antimicrobial susceptibility is not as predictable or there may be multiple causative uropathogens, and broad spectrum agents are more appropriate. The individual risk factors, local patterns of antimicrobial resistance, presence of urinary catheters or other "complicating factors", recent or prolonged hospitalization and previous exposure to antimicrobials must be taken into consideration when one is considering the optimal empiric agent for treatment of UTI.
Acute uncomplicated cystitis is a benign condition, with early resolution of symptoms in 25% to 42% of women and rare progression to pyelonephritis [2]. Nonetheless, it has considerable morbidity and antimicrobials are routinely prescribed aiming for rapid symptom resolution. The Infectious Diseases Society of America (IDSA) guidelines [13] emphasize the importance of considering collateral damage (adverse effects of a drug, such as selection for resistance) when prescribing antimicrobials. They recommend four agents (nitrofurantoin, trimethoprim-sulfamethoxazole, fosfomycin, and pivmecillinam) that result in relatively little collateral damage compared with other agents. Pivmecillinam may not be available in all countries, which limits its use. Because culture results in these patients are fairly predictable, urine cultures are usually not recommended. However, cultures are recommended if there is a concern about possible antimicrobial resistance, since uropathogen resistance data reflected in hospital or community antibiograms are often unreliable, due to the nature of passive surveillance, in guiding the selection of antimicrobial therapy. The IDSA treatment guidelines for uncomplicated cystitis do, however, suggest thresholds for the prevalence of resistance in the community (if reliable antibiogram data are available) above which a drug is not recommended for empiric treatment: 10% for fluoroquinolones and 20% for trimethoprim-sulfamethoxazole [2,13].
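The prevalence ceilings mentioned above (10% for fluoroquinolones, 20% for TMP-SMX) can be expressed as a simple check against local antibiogram data. The sketch below is illustrative only; the drug names and structure are simplified and it is not a prescribing algorithm.

```python
# Illustrative check of the IDSA community resistance ceilings for empiric
# therapy of uncomplicated cystitis, assuming reliable local antibiogram data.
EMPIRIC_RESISTANCE_CEILING = {
    "trimethoprim-sulfamethoxazole": 0.20,  # not recommended above 20% local resistance
    "fluoroquinolone": 0.10,                # not recommended above 10% local resistance
}

def acceptable_for_empiric_use(drug: str, local_resistance: float) -> bool:
    """Return True if the local resistance prevalence is at or below the ceiling."""
    ceiling = EMPIRIC_RESISTANCE_CEILING.get(drug)
    if ceiling is None:
        raise ValueError(f"no threshold defined for {drug}")
    return local_resistance <= ceiling

print(acceptable_for_empiric_use("trimethoprim-sulfamethoxazole", 0.15))  # True
print(acceptable_for_empiric_use("fluoroquinolone", 0.15))                # False
```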
In addition, pharmacokinetic properties of the antimicrobial are important depending on the site of infection. For the treatment of a complicated UTI, the drug should achieve high concentrations in urine, kidney tissue and prostate. Therefore, nitrofurantoin and fosfomycin are not recommended for upper tract infection or any complicated UTI. Fluoroquinolones have a broad spectrum of activity and penetrate tissue well and are thus the drugs of choice for empiric treatment of uncomplicated pyelonephritis and complicated UTIs. Drug resistance has made this class of antimicrobials less useful than in the past, and for patients with severe infections it is recommended that parenteral agents with more reliable activity against uropathogens be used until susceptibility data are available. Trimethoprim-sulfamethoxazole also penetrates tissue well and is an excellent agent for the treatment of uncomplicated pyelonephritis and complicated UTIs if the organism is known to be susceptible, but it should not be used empirically in such patients due to the high prevalence of resistance among uropathogens worldwide.
Streamlining Empirical Therapy
De-escalation or streamlining a broad-spectrum antimicrobial to a narrower spectrum agent active against the causative uropathogen once susceptibility data are available is an important antimicrobial stewardship strategy in the management of complicated infections occurring in the hospital or LTCF. In addition, selective reporting of antimicrobial susceptibilities for uropathogens is a strategy used in many microbiology laboratories to avoid the use of broad-spectrum agents and guide clinicians in the selection of antimicrobials. The impact of selective reporting on the appropriate use of antimicrobials for UTIs has been evaluated in a randomized study [6] and several prospective surveys [28,29]. Coupat et al. [6] randomly assigned residents at 3 French universities to an intervention group that received susceptibility reporting for only 2 to 4 antimicrobials for case-vignettes, or to a control group that received full-length reporting for 25 antimicrobials. Selective reporting improved the appropriateness of antimicrobial choices by 7% to 41%, depending on the vignette. In addition, most residents in the intervention group reported that selective reporting facilitated their choice of antimicrobials. Selective susceptibility reporting has been associated with a direct effect on antimicrobial prescribing by community clinicians in the United Kingdom [29]. Tailoring antimicrobial therapy based on local guidelines and culture results and selectively reporting susceptibility for uropathogens are important stewardship practices to improve the appropriate use of antimicrobials.
Selecting the Correct Dose and Route
Pharmacokinetic and pharmacodynamic properties should be considered when treating a UTI in order to achieve optimal tissue levels and effectively eradicate the infection. Antimicrobials that are characterized by concentration-dependent killing (e.g., aminoglycosides and fluoroquinolones) are most effective when administered once daily achieving high serum or tissue peaks relative to the minimum inhibitory concentration (MIC) of the organism. Antimicrobials that are characterized by time-dependent killing (e.g., penicillins and cephalosporins) are most effective when the serum or tissue concentration of the drug is maintained above the MIC for an extended period of time, rather than by achieving high serum concentrations. This is achieved by either continuous infusion or prolonged infusion rates of the antimicrobial. Both types of agents are effective in the treatment of UTI, but it is important that the dose and dosing interval be determined correctly for the agent chosen to treat the infection.
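For time-dependent agents, the relevant pharmacodynamic target is the fraction of the dosing interval during which the drug concentration remains above the MIC. The sketch below works this out for a simple one-compartment model with first-order elimination; all parameter values are hypothetical and chosen only to illustrate the calculation.

```python
# Fraction of the dosing interval with C(t) = peak * exp(-k t) above the MIC,
# for a one-compartment model with first-order elimination (illustrative only).
import math

def fraction_time_above_mic(peak: float, half_life_h: float,
                            mic: float, interval_h: float) -> float:
    if peak <= mic:
        return 0.0
    k = math.log(2) / half_life_h        # first-order elimination rate constant
    t_above = math.log(peak / mic) / k   # time until the concentration falls to the MIC
    return min(t_above, interval_h) / interval_h

# Hypothetical time-dependent agent: peak 40 mg/L, half-life 1 h, MIC 2 mg/L, dosed every 8 h.
print(f"{fraction_time_above_mic(40, 1.0, 2.0, 8.0):.0%}")  # about 54% of the interval
```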
The preferred route of antimicrobial administration depends on the site of infection, antimicrobial susceptibilities, the individual patient's gastrointestinal absorption and the bioavailability of the drug. Oral agents should achieve high serum and tissue concentrations for the treatment of complicated UTIs. The parenteral route should be used for empiric therapy in severely ill patients or those with poor absorption or oral bioavailability [30,31]. The selection of antimicrobial therapy should also take into account the potential toxicity and necessary dosing adjustments based on the glomerular filtration rate of the individual patient.
Cystitis
Recommended antimicrobial regimens and duration of therapy for acute uncomplicated cystitis are summarized in Table 2. For the treatment of uncomplicated cystitis, short-course regimens (single dose to 5 days) are recommended as first-line therapy and are as effective as longer antimicrobial regimens in achieving symptomatic cure, with fewer adverse effects [2]. Recommended empirical first-line treatment regimens for uncomplicated cystitis, based on the IDSA guidelines [13], include: nitrofurantoin, trimethoprim-sulfamethoxazole (TMP-SMX), fosfomycin trometamol and pivmecillinam. Nitrofurantoin monohydrate macrocrystals 100 mg twice daily with meals for 5 days has shown good efficacy and is well tolerated with low propensity for adverse ecologic effects; it should only be used for the treatment of cystitis and avoided if pyelonephritis is suspected. TMP-SMX 160 mg/800 mg twice daily for 3 days remains very effective with high cure rates [32,33] but is not recommended in areas with resistance prevalence >20% [34,35]; it is inexpensive and well tolerated with fewer ecologic adverse effects than fluoroquinolones. Fosfomycin trometamol 3 g sachet in a single dose or pivmecillinam 400 mg twice daily for 3 to 7 days are also recommended as first line agents due to their low propensity for ecologic adverse effects even though in some studies they appear to be clinically inferior to TMP-SMX or fluoroquinolones [13]. Recommended second line agents for acute uncomplicated cystitis include fluoroquinolones (levofloxacin 250 mg or 500 mg once daily for 3 days or ciprofloxacin 250 mg twice daily for 3 days). Due to the rising prevalence of fluoroquinolone resistance in some regions of the world and due to their importance in treatment of a wide variety of infections, the use of fluoroquinolones should be reserved when possible for other uses than cystitis [8,13,35]. Beta-lactams (amoxicillin-clavulanate, cefdinir, cefaclor and cefpodoxime) for 7 days or more are also recommended as second line agents, with some studies reporting lower efficacy compared to TMP-SMX and fluoroquinolones [2].
Reducing the duration of treatment and selecting recommended agents other than fluoroquinolones for the treatment of uncomplicated cystitis are important stewardship strategies. Given the ubiquity of cystitis, such stewardship strategies may ultimately have significant beneficial effects on antimicrobial resistance and other adverse consequences of antimicrobial therapy.
Antimicrobial-sparing strategies for the management of acute uncomplicated cystitis that warrant further study include delayed treatment [36] and the use of anti-inflammatory drugs [37].
Treatment duration for complicated cystitis has been less thoroughly studied, but in general such infections should be treated for at least 7 days, especially in men, where underlying prostatic infection may exist. Although UTIs in older or pregnant women are often considered "complicated", short-course treatment has been shown to be effective in such women, as mentioned earlier [20].
Pyelonephritis
The treatment of pyelonephritis following initial empiric therapy should be guided by urine culture and susceptibility results. Most episodes of acute uncomplicated pyelonephritis are treated in the outpatient setting, but patients should be hospitalized if the episode is severe, if there is hemodynamic instability, if oral medications are not tolerated, if adherence to therapy is likely to be poor, or if there are complicating factors such as diabetes, renal stones or pregnancy [2]. Empiric therapy for pyelonephritis should have a broad spectrum of activity and be started without delay to avoid complications. For acute uncomplicated pyelonephritis, a fluoroquinolone is recommended as the empiric regimen of choice when feasible [13], because it is a serious infection that may be life threatening. Short-course regimens of oral levofloxacin 750 mg once daily for 5 days appear to be effective for uncomplicated pyelonephritis and complicated UTI [13,15]. Recommended outpatient oral empiric regimens are summarized in Table 3 and include: fluoroquinolones (e.g., levofloxacin 750 mg once daily for 5 days or ciprofloxacin 500 mg twice daily or 1 g extended release daily for 7 days), TMP-SMX 160 mg/800 mg twice daily for 7-14 days or beta-lactams for 10-14 days. A parenteral broad-spectrum agent such as ceftriaxone can be used along with these regimens if drug resistance is a concern, particularly in patients with severe infection [2,13].
Patients with pyelonephritis and "complicating" factors are at greater risk for severe complications. The optimal treatment duration is not known, and treatment should be tailored to the severity of illness, the rapidity of the response to treatment, and the results of imaging studies if performed. Such patients should generally be treated for 10 days or longer with antimicrobials targeted to the causative uropathogen.
In many studies the optimal duration of treatment for UTIs is defined by the absence of recurrent UTI after an arbitrary number of days (e.g., 7, 10, 14 days). Often, the minimum duration of treatment required for clinical cure is not known. To further reduce volume of consumption, selection pressure and adverse ecological effects, more studies on shorter treatments in different populations are needed [30].
Catheter-Associated UTIs
International guidelines for the management of catheter-associated UTIs (CA-UTIs) [25] recommend 7 days of antimicrobials in patients with prompt resolution of symptoms or 5 days of levofloxacin in patients who are not severely ill (assuming the organism is susceptible). Ten to 14 days of treatment are recommended for patients with a delayed response. A 3-day course of antimicrobial therapy could be used in women ≤65 years without upper urinary tract symptoms after an indwelling catheter has been removed.
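The durations quoted above can be read as a small decision tree. The sketch below encodes them for illustration only; the input flags are simplifications of the guideline criteria, and real treatment decisions depend on the full clinical picture and the cited guidance [25].

```python
# Illustrative encoding of the CA-UTI treatment durations quoted above.
# The flags are simplifications; this is not clinical decision support.
def suggested_duration(prompt_resolution: bool, severely_ill: bool,
                       levofloxacin_susceptible: bool,
                       woman_le_65_lower_tract_post_removal: bool) -> str:
    if woman_le_65_lower_tract_post_removal:
        return "3 days may be considered"
    if prompt_resolution:
        if not severely_ill and levofloxacin_susceptible:
            return "5 days (levofloxacin)"
        return "7 days"
    return "10-14 days"

print(suggested_duration(True, False, True, False))   # 5 days (levofloxacin)
print(suggested_duration(False, True, False, False))  # 10-14 days
```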
Recurrent Acute Uncomplicated Cystitis
Several non-antimicrobial related strategies to prevent recurrent acute uncomplicated UTIs have been published [2]. Behavioral interventions include abstinence or reduction in frequency of sexual intercourse which is often not very feasible. Contraceptive methods such as spermicides and spermicide-coated condoms alter the vaginal flora and favor the colonization of uropathogens and should be avoided. Urination soon after intercourse, drinking fluids, not routinely delaying urination and wiping front to back have not been shown to be associated with a reduced risk of uncomplicated cystitis in case-control studies, but might be effective in some patients and are not unreasonable strategies to suggest for patients with recurrent cystitis. Cranberry juice, capsule or tablets are widely used by women to prevent UTI recurrences, but they have not been convincingly demonstrated to be effective in preventing such recurrences [38]. There are some small studies, however, that suggest cranberry is effective and, given that this strategy appears to be benign, it is reasonable that women continue to use cranberry if they think that it has been effective.
Adhesion blockers such as D-mannose are increasingly being used by women to prevent cystitis, but supportive data are sparse. In a recently published randomized study [39] of 308 women with recurrent UTIs, investigators allocated patients into three groups: 2 grams of D-mannose powder in 200 mL of water daily, 50 mg of daily nitrofurantoin or no treatment for 6 months. Patients in the D-mannose group and nitrofurantoin group had a significantly lower risk of recurrent UTIs during the study compared to patients receiving no prophylaxis (RR 0.239 and 0.335, p < 0.0001). Of concern, the authors did not present data for the D-mannose group and the nitrofurantoin group separately, although they mentioned that the difference between the two groups was not significant. Interestingly, the authors noted that the time from starting prophylaxis to onset of symptoms did not differ significantly between the groups (presumably including the no-treatment group). Patients in the D-mannose group had a significantly lower risk of side effects compared to patients in the nitrofurantoin group (RR 0.276, p < 0.0001). Porru et al. [40], in a recent randomized cross-over pilot trial, evaluated the efficacy of D-mannose in the treatment and prophylaxis of recurrent UTIs in 60 patients (mean age 42 years). Patients were randomly assigned to treatment and prophylaxis with TMP-SMX or to a regimen of oral D-mannose 1 g every 8 h for 2 weeks followed by 1 g twice a day for 22 weeks. Patients were crossed over to the other intervention in the second phase of the study, with no further antimicrobial prophylaxis. Mean time to UTI recurrence was 52.7 days with antimicrobial treatment, and 200 days with D-mannose (p < 0.0001). Of note, however, the investigators used an unusual and unproven prophylactic regimen of TMP-SMX in the study (one week per month), observed a highly unusual rate of UTI recurrence in the 24-week period on TMP-SMX (91.7% of women had ≥1 recurrence compared with 20% of the D-mannose women), and the authors do not describe how the data were analyzed for the crossover aspect of the trial. While neither of these studies provide convincing evidence that D-mannose is effective in preventing cystitis, further studies of D-mannose are clearly warranted to determine its pharmacokinetic properties and clinical efficacy.
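The risk ratios quoted in these trials are computed from 2 × 2 tables of recurrence counts. The sketch below shows the calculation with hypothetical counts, since the per-group counts are not reproduced here.

```python
# Relative risk from a 2 x 2 table; the counts below are hypothetical,
# not the data of the studies discussed above.
def relative_risk(events_exposed: int, n_exposed: int,
                  events_control: int, n_control: int) -> float:
    risk_exposed = events_exposed / n_exposed
    risk_control = events_control / n_control
    return risk_exposed / risk_control

# Hypothetical example: 15 recurrences among 100 women on prophylaxis
# versus 45 recurrences among 100 women with no treatment.
print(round(relative_risk(15, 100, 45, 100), 3))  # 0.333
```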
Other non-antimicrobial strategies to reduce the risk of recurrent uncomplicated cystitis include replacement topical estrogen therapy in postmenopausal women, probiotics, oral immunostimulants and vaccination. Replacement topical estrogen normalizes the vaginal flora in postmenopausal women and has been shown to greatly reduce the risk of recurrent UTI in this population [41]. Probiotics are widely used to prevent recurrent UTI but the published data to date remain unconvincing. Probiotics are touted to protect the vagina from colonization by uropathogens by steric hindrance or blocking potential sites of attachment, production of hydrogen peroxide which is microbicidal to E. coli and other uropathogens, maintenance of a low pH, and induction of anti-inflammatory cytokine responses in epithelial cells. However, in a review of four randomized controlled trials of lactobacillus probiotics for bacterial genitourinary infections in women, only one demonstrated a significant reduction in rates of UTI recurrence [42]. Moreover, most of these studies did not determine whether the probiotic led to vaginal colonization with the probiotic strain. While the probiotic approach has a credible scientific basis, additional adequately designed clinical trials need to be performed before its routine use can be recommended. Oral immunostimulants may have a role in UTI prevention. In a systematic review and meta-analysis of four trials that together included 891 participants, OM-89, an extract of 18 different serotypes of heat-killed uropathogenic E. coli given orally to stimulate innate immunity, decreased the rate of UTI recurrence (RR 0.61, 95% CI 0.48-0.78) [43]. The agent is commercially available in some European countries but not in the United States. Although there is great interest in developing a safe and effective UTI vaccine, there is no currently available product on the market.
Antimicrobial prevention strategies are highly effective for prevention of recurrent uncomplicated cystitis, but should be considered only as a last resort after non-antimicrobial strategies have been tried or considered and the potential risks of long term antimicrobials have been thoroughly discussed with the patient.
Catheter-Associated UTIs
Screening and treatment of patients with catheter-associated asymptomatic bacteriuria (CA-ASB) are not recommended to reduce subsequent CA-bacteriuria or CA-UTI [24]. Likewise, systemic antimicrobial treatment of ASB is not recommended to reduce the risk of symptomatic UTI in catheterized patients. The most effective way to reduce the incidence of asymptomatic or symptomatic bacteriuria is to reduce urinary catheterization by restricting its use to patients who clearly need it and by removing the catheter as soon as no longer indicated [25]. Nurse-or physician-based electronic reminders and automatic stop orders to remove unnecessary urinary catheters have been successfully implemented in clinical practice and are recommended by the IDSA guidelines. Systemic antimicrobial prophylaxis to prevent symptomatic infection should be avoided in patients with urinary catheterization in order to reduce the selection pressure for multiple-drug-resistant pathogens.
Bacteriuria in Pregnancy
Symptomatic and asymptomatic bacteriuria are common during pregnancy and E. coli is the most common etiologic agent. The incidence of ASB in pregnancy varies among different countries and ranges between 2% and 18% [44][45][46]. Studies of ASB have often been of poor quality with small sample sizes, different gestational ages, unclear definitions, differences in diagnostic techniques, timing of urine collection and different cutoff points for significant bacteriuria [47]. In both symptomatic and asymptomatic infection, quantitative culture is the gold standard for diagnosis. Current guidelines recommend screening pregnant women at least once in early pregnancy with a urine culture [22]. Treatment for ASB during pregnancy has become a standard of obstetrical care and has been shown to reduce the rate of pyelonephritis and decrease the incidence of low birth weight. However, studies of ASB in pregnancy were mostly done in the early antimicrobial era and the methodological quality of the studies limits the strength of the conclusions that can be drawn [46]. Duration of therapy for ASB should be 3-7 days [22].
There are still unknown consequences of exposing neonates to antimicrobial therapy. A long-term Danish study of 447,629 single pregnancies followed for 9.9 years found a small increased risk of epilepsy in children whose mothers received antimicrobials (mainly for UTI), including nitrofurantoin, during pregnancy [48]. Additionally, there is no clear consensus in the literature on the optimal antimicrobial choice or duration of therapy for UTI during pregnancy. In light of the possible adverse effects of antimicrobials, higher quality research is needed to better understand the direct and indirect consequences of antimicrobial exposure early in life and prudent antimicrobial use is extremely important during pregnancy and early childhood [47]. Studies exploring cost-effective diagnostic tools at the point of care and non-antimicrobial options to prevent or treat ASB and UTIs are needed to limit unnecessary treatment of bacteriuria in pregnancy.
Long Term Care Facilities
One of the most important problems in antimicrobial stewardship in LTCFs is the inappropriate use of antimicrobials to treat UTIs in asymptomatic residents [4,24]. Despite extensive research demonstrating lack of benefit and potential harm from antimicrobial use in ASB [49,50], up to 50% of asymptomatic nursing home residents are prescribed broad-spectrum antimicrobials (e.g., fluoroquinolones) for a suspected UTI [9]. In a study by Phillips et al., up to 80% of the antimicrobials prescribed to individuals with an indwelling urinary catheter were written in the absence of signs or symptoms of UTI but in the presence of urinalysis results [9]. The diagnosis of UTIs in elderly LTCF residents is challenging, as there is a wide range of events that can prompt urine testing, such as changes in mental status, behaviors, or the color or smell of the urine, with or without dysuria, or falls [51]. Increased antimicrobial stewardship efforts are indicated to reduce unnecessary urinary catheterization, unnecessary diagnostic testing and inappropriate prescribing of antimicrobials for ASB in LTCFs and other institutional settings.
Some useful strategies to improve the use of antimicrobials in LTCFs have been reported [24]. Pettersson et al. [52] described an educational intervention to improve antimicrobial use in a cluster randomized trial in Swedish LTCFs, including educational small group sessions with facility nurses and physicians, guidelines adapted for the local context, written materials, and feedback on prescribing. At the end of the 2 year intervention period, there was no difference between the intervention and control facilities in fluoroquinolone use for UTI. There were, however, significant differences favoring the intervention facilities in secondary outcomes, including a decrease in any antimicrobials given for all infections and an increase in a "wait and see" approach of observation with delayed empiric antimicrobials. Loeb et al. [53], in a cluster-randomized trial including 12 nursing homes, evaluated the impact of implementation of consensus guidelines with treatment algorithms prior to institution of empiric antimicrobial therapy for treatment of UTIs. The intervention program included nursing education in small group interactive sessions, video tapes and written material, outreach visits and one-on-one physician detailing. Over the study period there was a significant decrease in the number of antimicrobial days given for suspected UTI in the intervention compared with control homes, but no difference between the two groups in total antimicrobial days for all indications. The difference between intervention and control groups appeared to wane over time. Zabarsky et al. [54] focused on education of healthcare providers about appropriate collection of urine specimens and not treating ASB. Direct individual feedback regarding specific cases was given. In the six months following implementation there were significant decreases in the proportion of inappropriate urine specimens sent for culture, episodes of treatment of ASB, and total antimicrobial days. These reductions were maintained during the following 7 to 30 months. Another multicenter study in LTCFs in Finland [55] developed a program in which teams comprising an infectious disease consultant, an infection control nurse, and a geriatrician visited 39 LTCFs during 2004-2008. The site visits consisted of a structured interview concerning patients, ongoing antimicrobials, and diagnostic practices for UTI. Following the visits, regional guidelines for prudent use of antimicrobials in LTCFs were published, and the use of antimicrobials was followed up by an annual questionnaire. The investigators found that most of the antimicrobials were used for UTI (range by year, 66.6%-81.1%). At baseline, 14.5% (177/1221) of LTCF residents received antimicrobials for UTI prophylaxis, and this significantly decreased to 7.8% (90/1158) (p < 0.001) after the implementation of the multidisciplinary intervention, without an increase in the number of patients treated for acute UTI in the LTCFs.
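The baseline versus post-intervention proportions reported for the Finnish program (177/1221 vs. 90/1158) can be checked with a standard test of two proportions; the sketch below is a re-analysis for illustration, not part of the original study.

```python
# Chi-square test comparing the proportion of residents on UTI prophylaxis
# before (177/1221) and after (90/1158) the Finnish multidisciplinary intervention.
from scipy.stats import chi2_contingency

table = [[177, 1221 - 177],   # baseline: on prophylaxis / not
         [90, 1158 - 90]]     # after intervention: on prophylaxis / not
chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p_value:.2g}")  # p well below 0.001, as reported
```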
Surgical Prophylaxis
Antimicrobials are often used to prevent specific post-operative infections. Clinical practice guidelines for antimicrobial prophylaxis in surgery [56] from the American Society of Health-System Pharmacists (ASHP), the Infectious Diseases Society of America (IDSA), the Surgical Infection Society (SIS) and the Society for Healthcare Epidemiology of America (SHEA) provide procedure-specific recommendations to avoid post-operative bacteriuria or urosepsis. The selection of prophylactic agents should be based on the individual patient's prior antimicrobial use, history of UTI and risk factors for UTI. Routine screening and treatment for ASB are discouraged in most surgical procedures as they lead to unnecessary treatment, further diagnostic testing, delays in the procedure, development of antimicrobial resistance and adverse events such as C. difficile. Drekonja et al. [57], retrospectively evaluated the use of antimicrobial treatment of ASB in 1688 patients undergoing non-urologic procedures at a single center; 25% of the patients were screened by urine culture for ASB. The authors found no difference in surgical site infection rates (20% vs. 16%; p = 0.56) but more frequent episodes of post-operative UTI (9% vs. 2%; p = 0.01) among patients treated for bacteriuria vs. those not treated. These findings suggest no benefit from empiric peri-operative antimicrobial therapy for ASB.
In urologic procedures such as transrectal biopsy or resection of the prostate, antimicrobial prophylaxis and treatment of bacteriuria is recommended and proven to reduce post-procedural urosepsis from 4.4% to 0.7% [58]. Herr [17] investigated 2010 consecutive patients with bladder tumors who underwent cystoscopy without antimicrobial prophylaxis at a single center by the same surgeon; 24% of the patients had documented ASB prior to the procedure. The incidence of symptomatic post-procedure UTIs within 30 days was 4.5% in colonized patients with ASB and 1.1% in uninfected patients (p = 0.02), all UTIs resolved within 24 h with oral antimicrobials. These findings suggest that ASB is common in bladder cancer patients undergoing cystoscopy, but antimicrobial prophylaxis is unnecessary because subsequent UTIs are uncommon and easily treated.
Barriers to Guideline Implementation
Clinicians bear profound individual accountability, and yet adherence to guidelines at the bedside often remains low, causing omission of beneficial therapies and thereby contributing to preventable harm, suboptimal outcomes, and waste of resources [59]. The reasons for poor compliance with guidelines are multifactorial. Several authors have proposed steps to overcome the barriers in guideline implementation, including more transparency in the level of recommendations, prioritizing which therapies have the greatest benefit to patients at the lowest risks and costs, and implementation of order sets at the point of care incorporating the recommendations from national guidelines [60,61]. Henig et al. [62] systematically evaluated the methodological quality of eight national and international guidelines for the treatment of UTIs in adults published in the last 10 years (2004-2013); the authors identified variable recommendations depending on local epidemiology and different methodological rigor in guideline development. Limitations of the UTI guidelines include poor descriptions of applicability (such as likely barriers and facilitators to implementation, strategies to improve uptake, and resource implications), a lack of patient involvement in the development of recommendations, and the fact that none of the published guidelines used the GRADE methodology to interpret the evidence and grade the recommendations [62]. Existing guidelines for the treatment of UTIs rarely address the implementation of recommendations within antimicrobial stewardship programs.
Areas of Uncertainty
More research is needed to optimize the diagnosis, treatment and prevention strategies of UTIs. Targeted rapid diagnostic tests that could distinguish between inflammation and infection, identify the pathogen and its mechanisms of antimicrobial resistance are very much needed. The development of newer antimicrobial oral agents with novel mechanisms of action against Gram-negative organisms to treat uropathogens is also awaited. Faster diagnostics and better antimicrobials will not improve antimicrobial prescribing practices unless global efforts continue to reinforce the importance of prudent, thoughtful and rational use of antimicrobials.
Conclusions and Recommendations
The diagnosis of UTI is primarily based on signs and symptoms rather than isolated laboratory findings. Urine cultures are often not useful for acute uncomplicated cystitis, are recommended for patients with uncomplicated pyelonephritis and complicated UTI, and with few exceptions, should not be collected in asymptomatic patients. Antimicrobial therapy should be tailored to each patient taking into consideration the severity of disease, individual and local patterns of antimicrobial resistance and the potential for collateral damage associated with antimicrobial use. Selecting the correct drug, dose, and shortest clinically effective duration of therapy when possible, is key to optimal antimicrobial stewardship. Strategies to prevent recurrent UTIs and catheter-associated bacteriuria could greatly reduce the use of antimicrobials and are therefore key stewardship modalities. It is the responsibility of all healthcare providers to practice antimicrobial stewardship and prescribe antimicrobials prudently, thoughtfully and rationally.
Author Contributions
Each author has contributed to the literature search, drafting and review of the manuscript.
Conflicts of Interest
The authors declare no conflict of interest.
Thermal and Electronic Transport Properties of the Half-Heusler Phase ScNiSb
Thermoelectric properties of the half-Heusler phase ScNiSb (space group F-43m) were studied on a polycrystalline single-phase sample obtained by arc-melting and spark-plasma-sintering techniques. Measurements of the thermopower, electrical resistivity, and thermal conductivity were performed in the wide temperature range 2–950 K. The material appeared as a p-type conductor, with a fairly large, positive Seebeck coefficient of about 240 μV K−1 near 450 K. Nevertheless, the measured electrical resistivity values were relatively high (83 μΩm at 350 K), resulting in a rather small magnitude of the power factor (less than 1 × 10−3 W m−1 K−2) in the temperature range examined. Furthermore, the thermal conductivity was high, with a local minimum of about 6 W m−1 K−1 occurring near 600 K. As a result, the dimensionless thermoelectric figure of merit showed a maximum of 0.1 at 810 K. This work suggests that ScNiSb could be a promising base compound for obtaining thermoelectric materials for energy conversion at high temperatures.
ScNiSb is a member of the large family of the HH phases, which crystallize with the cubic MgAgAs-type crystal structure (space group F43m, no. 216). The compound was discovered by A. E. Dwight [27]. The first structure refinement from X-ray powder diffraction data confirmed the equiatomic composition of the substance [27,28]. Later, a structure refinement on X-ray single-crystal
Materials and Methods
A polycrystalline sample was synthesized by arc-melting elemental scandium (lumps, 99.9%), nickel (rod, 99.99%), and antimony (lumps, 99.999%) in a Ti-gettered argon gas atmosphere. To compensate for the intensive evaporation of antimony during melting, a 6% excess of Sb over the nominal mass was added beforehand. The obtained ingots were hand-ground into fine powder. In order to obtain dense bulk samples suitable for thermoelectric property measurements, spark plasma sintering (SPS) was applied (SPS-515 ET, Dr Sinter setup, SDC Fuji, Japan). Consolidation was performed by heating the charge to 950 K at 50 K min−1 under a uniaxial pressure of 100 MPa and dwelling at this temperature for 10 min. The density of the sintered pellets, determined by the Archimedes method, was over 98% of the theoretical value.
The prepared material was characterized at room temperature (RT) by X-ray powder diffraction (X'Pert Pro PANalytical, CuKα radiation, Almelo, Netherlands). High-resolution powder diffraction data were additionally collected using an upgraded Huber G670-type Guinier camera with an imaging-plate detector. The large focal circle of 360 mm diameter provides excellent resolution, in particular with hard X-rays. Because of the rather small unit cell of half-Heusler-type compounds, we took advantage of doubling the number of observable Bragg reflections by using the MoKα doublet of the incident beam. As monochromator, focusing 1D multilayer optics (AXO Dresden, Dresden, Germany) was used, which provides high usable intensity along with excellent suppression of the Kβ component in the direct beam.
The reflection positions obtained by profile deconvolution were corrected for sample displacement. The structure refinement was done by employing the programs FULLPROF (version 6.30) [43] and WinCSD (version 4.19) [44]. Sample composition was checked by energy-dispersive X-ray (EDX) analysis on a FEI scanning electron microscope (FEI, Hillsboro, OR, USA) equipped with an EDAX Genesis XM4 spectrometer.
The Seebeck coefficient and the electrical resistivity of the sintered samples were measured simultaneously under helium atmosphere in the temperature range 350-950 K using the temperature differential and the four-probe methods, respectively, implemented in commercial equipment Linseis LSR-3 (Linseis Messgeraete GmbH, Selb, Germany) and Ulvac ZEM-3 (ULVAC, Methuen, MA, USA). In these measurements, the temperature difference between the ends of each sample was kept equal to 50 K for the LSR-3 device and 20 K, 30 K, and 40 K for the ZEM-3. The thermal diffusivity was measured in the temperature range from 300 K to 923 K using the laser flash method (NETZSCH LFA-457).
Low-temperature (2-300 K) measurements of electrical resistivity, specific heat, Seebeck coefficient, and thermal conductivity were carried out on a Physical Property Measurement System (PPMS-9, Quantum Design, San Diego, CA, USA). The electrical resistivity was measured by standard four-point DC technique, where electrical contacts were made from silver wires attached to the sample by silver paste. The heat capacity measurements were carried out using the relaxation method with the two-τ model. For Seebeck and thermal conductivity measurements, gold-plated copper electrodes were attached to the specimen using silver-epoxy paste.
Results and Discussion
First, crystal structure determination was performed with the X-ray powder diffraction pattern obtained using CuKα radiation (Bragg-Brentano geometry, 2θmax = 90°, 11 reflections available in the measured range). All the Bragg peaks were well indexed with the cubic system (space group F-43m), except for traces of the impurity phase Sc2O3, spotted around 31.3°. The lattice parameter obtained (a = 6.0749(2) Å) is slightly larger than the experimental values reported before in the literature (between 6.0498 Å and 6.0620 Å) [27-29,31], yet smaller than the calculated ones [32,34,35]. The differences between experimental values may have resulted from a different level of structural disorder caused, for example, by slightly different stoichiometry (cf. below). The structure refinement was performed first considering that Sc, Ni, and Sb atoms occupy the 4b (1/2 1/2 1/2), 4c (1/4 1/4 1/4), and 4a (0 0 0) sites, respectively, and the occupancy factors were assumed to be equal to unity. Despite the low residuals obtained (RI = 0.031, RP = 0.043), the atomic displacement parameters show a non-systematic change with the atomic masses: B(Sc) = 1.0(1) Å², B(Ni) = 1.3(2) Å², B(Sb) = 0.75(7) Å². An attempt to refine the occupation of the Ni site (a vacancy on this position was suggested in [29]) was not successful: RI = 0.031, RP = 0.04; B(Sc) = 0.93(1) Å², B(Ni) = 1.0(2) Å², B(Sb) = 0.78(7) Å²; Occ(Ni) = 0.98(1). Another reason for the enhanced B(Ni) may have been an off-center location of the atoms at this position. Indeed, Ni could be refined at the 16e position (x x x) with x = 0.262(2). This did not change the residuals (RI = 0.031, RP = 0.043) but allowed a more logical distribution of the atomic displacement parameters to be obtained (B(Sc) = 0.98(14) Å², B(Ni) = 0.85(15) Å², B(Sb) = 0.75(7) Å²). Despite the low residuals, the powder diffraction data used did not allow a final decision about the structural details in ScNiSb. To shed more light, high-resolution X-ray powder diffraction data were measured, employing the Huber G670-type Guinier camera with double radius and using MoKα radiation (2θmax = 100°, 84 reflections available in the measured range). In this experiment, application of the ideal atomic distribution on the crystallographic sites confirmed that the atomic displacement parameters do not follow the atomic masses (B(Sc) = 0.64(3) Å², B(Ni) = 0.75(3) Å², B(Sb) = 0.61(2) Å²; RI = 0.027, RP = 0.093). An attempt to refine the occupancy of the nickel position did not reveal any vacancies. The stoichiometric composition of the material is in agreement with the lattice parameter, which is clearly larger (a = 6.0761(4) Å) than that for the Ni-deficient compositions ScNi0.87Sb (a = 6.0521(6) Å) and ScNi0.85Sb (a = 6.0498(6) Å) [29]. The off-center model with Ni at the 16e position (x = 0.256(2)) yielded similar residuals (RI = 0.024, RP = 0.097), yet the sequence of the atomic displacement parameters (B(Sc) = 0.66(3) Å², B(Ni) = 0.61(4) Å², B(Sb) = 0.59(2) Å²) is in much better agreement with the atomic masses of the elements. The final results of the crystal structure refinement of ScNiSb from powder X-ray diffraction (MoKα radiation) data are presented in Figure 1. Further details of the real crystal structure may be revealed using high-resolution X-ray single-crystal data at the equiatomic composition.
Nonetheless, in the most probable scenario, with off-center Ni atoms, the crystal structure reveals a clear deviation from translational symmetry, which should reduce the lattice thermal conductivity, as was recently shown for the intermetallic clathrates [45].

The experimental sample density obtained by the Archimedes method is only 1.5% smaller than the theoretical value (Table 1). The prepared sample of ScNiSb was hard and brittle, as predicted from theoretical calculations [34]. The distribution of the elements on the polished surface of the specimen is presented in Figure 2. Consistent with the PXRD results, the sample appears fairly homogeneous, except for tiny amounts of a scandium-rich phase, probably an oxide. The chemical composition, derived as an average over three different points examined on the sample surface, is in very good agreement with the nominal one (see Table 1). This supports the off-center position of Ni in the crystal structure.

The temperature dependencies of the electrical resistivity (ρ) and the Seebeck coefficient (S) of ScNiSb, determined in a wide temperature interval, are shown in Figure 3. At elevated temperatures, the experiments carried out on heating and cooling the specimen yielded very similar results, and hence only the data obtained on cooling are shown. Moreover, it should be noted that near 300 K the measurements performed employing different techniques/equipment (LSR-3, ZEM-3, PPMS) converged to almost the same values. Therefore, in the following discussion, the data collected using the LSR-3 will be evaluated.
As can be inferred from Figure 3, ScNiSb exhibits semiconducting-like behavior, typical for doped semiconductors, with an ionization (or freeze-out) region from 2 K up to about 150 K, an extrinsic (or saturation) region up to about 500 K, and an intrinsic region at higher temperatures. A broad shoulder observed around 50 K has an unclear origin. As shown in the inset to Figure 3a, in the intrinsic region the resistivity can be well described by a standard Arrhenius model, 1/ρ = σ0 + σ exp(−Eg/2kBT), where σ0 stands for the residual conductivity and Eg is the activation energy. The so-derived value of Eg amounts to 0.47(1) eV, which is much larger than that reported in the literature [31,35,46].
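As an illustration of how the activation energy is extracted in practice, the following sketch fits the same Arrhenius expression to conductivity data. The data are synthetic, generated from the model itself with Eg = 0.47 eV, and merely stand in for the measured intrinsic-region ρ(T) of ScNiSb.

```python
# Sketch of an Arrhenius fit to the intrinsic-region conductivity,
# 1/rho = sigma0 + sigma * exp(-Eg / (2 kB T)).
import numpy as np
from scipy.optimize import curve_fit

kB = 8.617333e-5  # Boltzmann constant, eV/K

def conductivity(T, sigma0, sigma, Eg):
    # 1/rho = sigma0 + sigma * exp(-Eg / (2 kB T))
    return sigma0 + sigma * np.exp(-Eg / (2.0 * kB * T))

# Placeholder "measurements": synthetic data generated from the model
# with Eg = 0.47 eV, standing in for the measured high-temperature data.
T = np.linspace(600.0, 950.0, 15)                       # K
rng = np.random.default_rng(0)
sigma_true = conductivity(T, 5e3, 2e6, 0.47)            # S/m
sigma_meas = sigma_true * (1 + 0.02 * rng.standard_normal(T.size))

popt, _ = curve_fit(conductivity, T, sigma_meas,
                    p0=(1e3, 1e6, 0.4), maxfev=10000)
print(f"fitted Eg = {popt[2]:.2f} eV")                  # ~0.47 eV by construction
```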
The thermoelectric power of ScNiSb is positive in the entire temperature range studied, because the number of holes in the valence band far exceeds the number of electrons in the conduction band; therefore, ScNiSb is a p-type material. The S(T) dependence shows a shoulder-like feature at 120 K and a broad maximum (Smax = 240 µV K−1) near T = 450 K. This maximum is related to the compensation effect, when the electron concentration starts to overcome the hole concentration. Using the relationship Smax = Eg/(2eTmax) [47] (e stands for the elementary charge), one finds Eg = 0.22 eV, in good agreement with the theoretical data [35,46], yet less than half of the value determined from the ρ data. It should be noted that the so-obtained value of Eg may differ from the actual one because of the breakdown of the Maxwell-Boltzmann law in a material with a narrow energy gap or with a strong deviation in carrier mobility [48].
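The gap estimate from the Seebeck maximum is a one-line calculation; the sketch below simply evaluates the relation quoted above with the stated values (expressed in eV, the elementary charge cancels).

```python
# Band-gap estimate from the Seebeck maximum:
# Eg = 2 * e * S_max * T_max  ->  in eV: Eg = 2 * S_max[V/K] * T_max[K]
S_max = 240e-6   # V/K, maximum Seebeck coefficient
T_max = 450.0    # K, temperature of the maximum
Eg = 2.0 * S_max * T_max
print(f"Eg = {Eg:.2f} eV")   # ~0.22 eV, as quoted in the text
```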
The temperature dependence of the power factor (PF = S²/ρ) calculated from the measured data of ScNiSb is presented in Figure 3c. On increasing temperature, PF starts growing above about 50 K and reaches a maximum of 0.90(4) × 10−3 W m−1 K−2 at 810 K. This value is similar to those determined for other RE-based HH phases [3,5,38-42] and other thermoelectric materials [49].
In order to inspect the conduction mechanism in ScNiSb, a Jonker plot was constructed (Figure 4) [50]. The observed linear relationship between the thermopower and the logarithm of the electrical conductivity is a characteristic feature of a semiconductor in its intrinsic region, with charge carriers scattered mainly on acoustic phonons [51]. At low temperatures, the slope of the straight line is positive, while at high temperatures it is negative; in both temperature regions, however, the slope has a constant value of ±86.15 µV K−1. The switch in the sign of the Jonker-type correlation occurring near 450 K suggests that the temperature variations of the Seebeck coefficient and the electrical conductivity in ScNiSb are governed mainly by changes in the carrier concentration.

The low-temperature (T < 300 K) specific heat (Cp) of ScNiSb is featureless, except for a little hump near 3 K (see Figure 5). Possibly, the latter anomaly appears because of the impurity phase detected in the PXRD and EDX studies. Generally, the Cp(T) of ScNiSb has a shape typical for nonmagnetic compounds and can be analyzed by the Debye formula

Cp(T) = γT + 9nR(T/ΘD)^3 ∫0^(ΘD/T) x^4 e^x/(e^x − 1)^2 dx,   (1)

where n is the number of atoms per formula unit, R is the gas constant, ΘD is the Debye temperature, and x = hν/kBT. The first term of Equation (1) corresponds to the electronic part, while the second one corresponds to the phonon contribution to Cp. The electronic specific heat was described using a simple Sommerfeld term Cel = γT; the fit in the range 4.5-7 K yields γ = 0.5(1) mJ mol−1 K−2 (inset of Figure 5). By fitting the experimental data over the whole temperature range, we derived ΘD = 354(1) K. Close to room temperature, Cp approaches the Dulong-Petit limit of 74.8 J mol−1 K−1.
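A fit of Equation (1) can be sketched as follows. The specific-heat values used here are synthetic, generated from the model itself with γ = 0.5 mJ mol−1 K−2 and ΘD = 354 K, and merely stand in for the measured Cp(T).

```python
# Sketch of a Sommerfeld + Debye fit to the specific heat,
# Cp(T) = gamma*T + 9 n R (T/Theta_D)^3 * Int_0^{Theta_D/T} x^4 e^x/(e^x-1)^2 dx.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import curve_fit

R = 8.314462   # J mol^-1 K^-1
n = 3          # atoms per formula unit (Sc, Ni, Sb)

def debye_cp(T, gamma, theta_D):
    theta_D = abs(theta_D)                 # keep the Debye temperature positive
    T = np.atleast_1d(T).astype(float)
    out = np.empty_like(T)
    for i, t in enumerate(T):
        # lower limit slightly above zero to avoid 0/0 in the integrand
        integral, _ = quad(lambda x: x**4 * np.exp(x) / np.expm1(x)**2,
                           1e-12, theta_D / t)
        out[i] = gamma * t + 9.0 * n * R * (t / theta_D) ** 3 * integral
    return out

# Placeholder data generated with gamma = 0.5 mJ mol^-1 K^-2, Theta_D = 354 K
T = np.linspace(2.0, 300.0, 60)
Cp = debye_cp(T, 0.5e-3, 354.0)

popt, _ = curve_fit(debye_cp, T, Cp, p0=(1e-3, 300.0))
print(f"gamma = {popt[0]*1e3:.2f} mJ mol^-1 K^-2, Theta_D = {popt[1]:.0f} K")
```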
The temperature dependence of the thermal conductivity (κ) in ScNiSb above 300 K was calculated from the measured thermal diffusivity (D) using the relationship κ = DCpd, where Cp = 3nR represents the specific heat (n is the number of atoms in the formula unit and R is the gas constant), while d denotes the density of the material. The overall magnitude of κ is greater than in the literature results [31]. A small increase of κ above about 600 K can be related to heat losses during the measurement or/and some contribution due to bipolar thermal conductivity [52]. At lower temperatures we observed a well-exposed peak at ~50 K, which is related to the interplay between different types of phonon-scattering processes and suggests the high quality of our sample.
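For orientation, the κ = DCpd conversion can be written out explicitly. The diffusivity and density values below are illustrative (the density roughly corresponds to ~98% of the theoretical value), not the tabulated experimental data.

```python
# Sketch of the kappa = D * Cp * d conversion used above (T > 300 K),
# with the Dulong-Petit specific heat Cp = 3nR per mole converted to a
# per-mass value.  The diffusivity and density numbers are illustrative.
R = 8.314462                    # J mol^-1 K^-1
n = 3                           # atoms per formula unit
M = 44.956 + 58.693 + 121.760   # g/mol, molar mass of ScNiSb

cp_molar = 3 * n * R            # ~74.8 J mol^-1 K^-1 (Dulong-Petit limit)
cp_mass = cp_molar / (M * 1e-3) # J kg^-1 K^-1

D = 3.0e-6                      # m^2/s, thermal diffusivity (illustrative value)
d = 6.6e3                       # kg/m^3, density (~98% of theoretical, assumed)

kappa = D * cp_mass * d
print(f"Cp = {cp_molar:.1f} J mol^-1 K^-1, kappa = {kappa:.1f} W m^-1 K^-1")
```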
Assuming the validity of the Wiedemann-Franz law, κel = LσT, where L is the Lorenz number, one can calculate the electronic contribution (κel) to the total κ. Shown in Figure 6 is the estimate of κel in ScNiSb, derived with L = 1.5 + exp(−|S|/116), as given in Ref. [53]. The so-obtained κel is fairly small and increases slightly with increasing temperature. This result implies that the thermal conductivity in ScNiSb is dominated in the whole temperature range studied by the lattice contribution (κlat). Remarkably, the magnitude of κlat is much larger than the minimum thermal conductivity calculated using the Cahill model [54]. This finding opens a prospect of significantly reducing κlat by proper alloying and by forming composite materials based on ScNiSb. The values between 5 and 10 W m−1 K−1 above RT are typical for the HH phases [55,56]. The deviations from translational symmetry found during the crystal structure determination do not markedly reduce the thermal conductivity, as was found recently in intermetallic clathrates [45], raising once more the question of the real atomic structure of the HH phases, as was already discussed, for example, for TiGePt [57,58]. On the other hand, the good thermal conductivity may be understood from the point of view of chemical bonding. The latter is characterized by the presence of three-center Sc-Ni-Sb and two-center Sc-Ni interactions. Due to the predominant role of the first type, the bonding may be considered pseudo-homogeneous, i.e., all interactions are the same or similar. With such a regular distribution of similar bonds in the crystal structure, the chemical bonding may be described as isotropic, and this characteristic of the bonding should not reduce the thermal conductivity [59].
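The decomposition into electronic and lattice contributions can be sketched as below. The Seebeck, resistivity, and thermal-conductivity inputs are rough, illustrative numbers chosen to be consistent with the high-temperature data discussed in this work, not the measured data set; the Lorenz-number expression is evaluated with L in units of 10−8 W Ω K−2 and S in µV K−1.

```python
# Sketch of the electronic / lattice decomposition of the thermal
# conductivity using L = 1.5 + exp(-|S|/116)
# (L in 1e-8 W Ohm K^-2, S in uV/K).  Input values are illustrative.
import math

S = 220.0          # uV/K, Seebeck coefficient near 810 K (illustrative)
rho = 54e-6        # Ohm*m, electrical resistivity (illustrative)
kappa = 7.0        # W m^-1 K^-1, total thermal conductivity (illustrative)
T = 810.0          # K

L = (1.5 + math.exp(-abs(S) / 116.0)) * 1e-8   # W Ohm K^-2
kappa_el = L * T / rho                          # = L * sigma * T
kappa_lat = kappa - kappa_el
print(f"L = {L:.2e} W Ohm K^-2")
print(f"kappa_el = {kappa_el:.2f}, kappa_lat = {kappa_lat:.2f} W m^-1 K^-1")
```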
The experimental data collected for ScNiSb allowed us to calculate the thermoelectric figure of merit (ZT = S²T/ρκ), and the result is shown in Figure 7. With increasing temperature, ZT increases, reaching a maximum of ZT = 0.10 at 810 K. This value is smaller than the ZT reported for well-established p-type thermoelectrics [60]; however, it is similar to those found for other RE-based HH phases [1,3,51]. At room temperature ZT = 0.01, which is almost two times smaller than the value reported before for an arc-melted sample [31], yet four times larger than the ZT of our sample prepared by high-pressure high-temperature (HPHT) sintering [42]. The main reason for the reduced ZT values is the very low electrical conductivity, opening a way for enhancing the thermoelectric figure of merit by appropriate substitutions.
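Combining the same rough values gives the power factor and the figure of merit directly; again, these inputs are illustrative stand-ins for the experimental data near 810 K.

```python
# Sketch of the figure-of-merit calculation ZT = S^2 T / (rho * kappa),
# using the same rough ScNiSb values near 810 K as above (illustrative).
S = 220e-6      # V/K
rho = 54e-6     # Ohm*m
kappa = 7.0     # W m^-1 K^-1
T = 810.0       # K

PF = S**2 / rho                  # power factor, W m^-1 K^-2
ZT = PF * T / kappa
print(f"PF = {PF*1e3:.2f} x 10^-3 W m^-1 K^-2, ZT = {ZT:.2f}")   # ~0.90e-3, ~0.10
```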
Conclusions
As an extension of the literature data for T < 400 K, the thermoelectric properties of the HH antimonide ScNiSb were determined from 2 K up to 950 K. Although this material has a high positive value of the Seebeck coefficient (up to 240 μV K−1 at 450 K), its thermoelectric properties are moderate. Because of a high electrical resistivity (~100 μΩm around RT) and a relatively high value of thermal conductivity (>6 W m−1 K−1), maximum PF and ZT values of 0.91(4) × 10−3 W m−1 K−2 and 0.1 at 810 K were established, respectively.
The results obtained for ScNiSb are similar to the data reported for many other RE-bearing HH phases and for pure RE-free HH phases. It appears plausible that proper modification of this material (nanostructurization, substitution, composite formation, etc.) may lead to significant improvement of its thermoelectric performance.
Changes in Compression Pressure of Elastic Stockings for the Lower Limbs During Cesarean Section: A Prospective Observational Study
Background Postpartum peripheral nerve injuries can impact recovery. Elastic stockings are recommended for thromboembolism prevention, although concerns about entrapment neuropathy exist. In this prospective observational study, we investigated the differential compressions caused by wearing elastic stockings before and after anesthesia, as well as changes in the diameters of the lower leg and ankle in parturient women undergoing spinal anesthesia for elective cesarean section (CS). Methods Eighteen pregnant women, classified by the American Society of Anesthesiologists as having physical status 2, underwent lower leg measurements taken before a CS. Elastic stockings were applied, and compression pressure was measured at pre-anesthesia, post-surgery, and six hours post-return to a hospital room. Fluid, blood loss, urine output, and neuropathy presence were recorded. For all parameters, changes at the three time points were compared for the primary analysis. For secondary analysis, participants were categorized as having intraoperative blood loss greater than (group P) or less than 1,000 g (group N), and factors were compared with pre-anesthesia and six hours post-return to a room. Data were analyzed and presented using a one-way analysis of variance with Bonferroni correction for multiple comparisons or unpaired two-tailed t-tests for pairwise comparison. Results None of the women had postoperative entrapment neuropathy. Six patients had >1,000 g of blood loss. Compression significantly increased from pre-anesthesia (left 13.6 ± 2.4, 95% CI: 12.18 to 14.52; right 13.4 ± 2.4, 95% CI: 12.41 to 14.69) to post-surgery (left, 17.4 ± 2.6, 95% CI: 15.68 to 18.12; right, 16.9 ± 2.6, 95% CI: 16.20 to 18.70) (p < 0.01). Compression pressure at post-surgery differed significantly between group P (left, 15.3 ± 1.3; right, 14.7 ± 1.8; 95% CI: -4.98 to -0.32) and group N (left, 18.1 ± 2.9; right, 17.8 ± 2.4; 95% CI: -5.38 to -0.26) (p < 0.05). The results are expressed as mean ± standard deviation, with P-values <0.05 indicating statistical significance. Conclusions In this study, no neuropathy occurred; however, over-compression risk with elastic stockings, especially when exceeding recommended pressure levels, was highlighted. Balancing thromboembolism prevention and over-compression risks is crucial for patients undergoing CSs with spinal anesthesia.
Introduction
Postpartum peripheral nerve injuries occur in approximately 0.3-2% of all deliveries [1], which outnumbers neurological insults directly associated with neuraxial anesthesia [2]. Intrinsic obstetric palsies from either compression or nerve stretch during delivery are a major cause of these nerve injuries; peripheral nerve injury can interfere not only with patient postoperative recovery but also with child rearing, which may contribute to patient anxiety, leading to postpartum depression [1]. Nevertheless, prophylactic measures, such as wearing elastic stockings, intermittent pneumatic compression, and anticoagulant therapies, are needed to reduce the risks of perioperative pulmonary thromboembolism; these measures are recommended in the Enhanced Recovery After Cesarean guidelines aimed at enhancing recovery after cesarean section (CS) surgery [3]. However, overzealous compression of the lower limbs by elastic stockings may cause entrapment neuropathy [4]. Moreover, spinal or epidural anesthesia for a CS inevitably induces postoperative sensory deficits in the lower extremities, thereby masking symptoms of nerve occlusion. In addition, local anesthetic sensory blockade is a risk factor for peripheral neuropathy, as it delays the sensory perception of neuropathy [5]. Despite this potential risk of postoperative peripheral neuropathy, no study has probed the impact of compression pressure on the lower limbs with elastic stockings during a CS under spinal anesthesia. Herein, we evaluated the differential compressions induced by the wear of elastic stockings between pre- and post-anesthesia, as well as alterations in the diameters of the lower leg and ankle, in parturient women who received spinal anesthesia during CS.
Materials And Methods
In adherence with the Strengthening the Reporting of Observational Studies in Epidemiology guidelines, we conducted a prospective observational study at Kitasato University Hospital, Sagamihara, Japan, ratified by our institutional review board. All participants were fully informed and provided written informed consent. Ethical approval for this study was provided by the Kitasato University Medical Ethics Organization on January 4, 2019 (approval number B18-183). The clinical trial number and registration URL for this study are as follows: umin000035319 (https://center6.umin.ac.jp/cgi-bin/ctr/ctr_view_reg.cgi?recptno=R000040233).
From January 2019 to December 2021, we included 18 pregnant women (American Society of Anesthesiologists physical status 2) who underwent elective CS with spinal anesthesia. Participants with contraindications for spinal anesthesia, chronic medication usage, smoking habits, hypertension, diabetes, thyroid disease, or neuropathic complications were excluded. All participants wore medical elastic stockings that were commercially available in Japan (Toray Industries, Inc., Tokyo, Japan). One day before surgery, as recommended by the manufacturer, the diameter of the lower leg was measured at the thickest part of the calf and that of the ankle was measured at the thinnest part. Subsequently, appropriately sized elastic stockings were applied based on these measurements. For the primary analysis, compression pressure with the stockings was measured on each of the left and right legs at three distinct time points (pre-anesthesia (PRE), post-surgery (POST), and six hours post-return to the hospital room (6HR)) using the Kikuhime™ pressure monitor (TT MediTrade, Sorø, Denmark) at the point where the lower leg diameter was measured.
Ankle and lower leg diameters were also measured at the same points. PRE was defined as immediately before spinal anesthesia after entering the operating room, POST as immediately after the end of surgery, and 6HR as six hours after spinal anesthesia; PRE was used as the control for comparison.
Concurrently, the final level of anesthesia, intraoperative infusion and blood loss volumes, urine output, and the presence or absence of neuropathy were also recorded. For the secondary analysis, the patients were divided into two groups, those with intraoperative blood loss greater than 1,000 g (group P) and those with intraoperative blood loss less than 1,000 g (group N), and ankle and lower leg diameters and compression pressures at the end of the surgery and six hours post-surgery were compared between the two groups. Patients with any missing data were to be excluded from the analysis, but there were no missing data for any of the 18 patients. Spinal anesthesia for CS was performed with a 27-G pencil-point needle at the L3/4 intervertebral space, using 12 mg of hyperbaric bupivacaine, 10 μg of fentanyl, and 100 μg of morphine hydrochloride. Extracellular fluid was started at 1,000 mL/h as soon as the patient entered the operating room, and continuous intravenous phenylephrine at 1 mg/h was started as soon as induction of anesthesia was performed. When the systolic blood pressure was less than 80% of the blood pressure at rest in the ward, phenylephrine 0.05 mg was administered intravenously if the heart rate was greater than 70 beats per minute, or ephedrine 5 mg was administered if the heart rate was less than 70 beats per minute. Surgery was initiated after confirming that the cold-numbing zone had expanded to Th4-S5. After delivery, continuous intravenous oxytocin 2.5 IU/h was started, and oxytocin 1 IU or methylergometrine 0.2 mg was administered intravenously as needed according to the degree of uterine contractions. No sedatives of any kind were used during surgery.
Data were analyzed and presented using a one-way analysis of variance with Bonferroni correction for multiple comparisons or unpaired two-tailed t-tests for pairwise comparisons, using GraphPad Prism (GraphPad Software, Boston, United States). The results are expressed as mean ± standard deviation, with P-values <0.05 indicating statistical significance. The minimum sample size, calculated a priori, was 18, with an effect size of 1.03, a type-I error of 0.05, and a power of 0.95, using G*Power software (Version 3.1.9.6; Heinrich-Heine-Universität Düsseldorf, Düsseldorf, Germany).
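The a-priori sample-size estimate and the planned comparisons can be reproduced approximately with open-source tools. The sketch below assumes a two-sided, one-sample/paired t-test family, which may not match the exact G*Power configuration used (so the resulting n can differ slightly from the reported 18), and the pressure readings shown are placeholders rather than the study data.

```python
# Approximate reproduction of the a-priori sample-size calculation and of
# the one-way ANOVA with a Bonferroni-corrected threshold.  The assumed
# t-test family and the measurement arrays are illustrative only.
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestPower

n_required = TTestPower().solve_power(effect_size=1.03, alpha=0.05,
                                      power=0.95, alternative='two-sided')
print(f"required n = {np.ceil(n_required):.0f}")   # compare with the reported n = 18

# Placeholder compression-pressure readings (mmHg) at the three time points
pre = np.array([13.2, 14.1, 12.8, 13.9, 13.5])
post = np.array([17.0, 18.2, 16.5, 17.8, 17.3])
six_hr = np.array([14.0, 15.1, 13.6, 14.8, 14.3])

f_stat, p_anova = stats.f_oneway(pre, post, six_hr)
alpha_bonf = 0.05 / 3          # Bonferroni-corrected threshold for 3 comparisons
print(f"ANOVA: F = {f_stat:.1f}, p = {p_anova:.3g} (alpha_bonf = {alpha_bonf:.3f})")
```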
Results
No postoperative atonic hemorrhage or neuropathy was observed. Table 1 presents the patient demographics, final anesthesia level, intraoperative infusion volume, blood loss volume, and urine output. Intraoperative blood loss was greater than 1,000 g in six patients. Despite insignificant variances in the lower leg and ankle diameters in the PRE, POST, and 6HR measurements, compression significantly increased between the PRE (left, 13.6 ± 2.4 mmHg; right, 13.4 ± 2.4 mmHg) and POST (left, 17.4 ± 2.6 mmHg; right, 16.9 ± 2.6 mmHg; P < 0.01) measurements (Figure 1). In a secondary analysis comparing intraoperative blood loss of more or less than 1,000 g, there were no significant differences in ankle and lower leg diameters or in compression pressure at 6HR between the two groups. However, there was a significant difference (P < 0.05) in compression pressure at POST between group P (left, 15.3 ± 1.3 mmHg; right, 14.7 ± 1.8 mmHg) and group N (left, 18.1 ± 2.9 mmHg; right, 17.8 ± 2.4 mmHg) (Figure 2).
Discussion
Our investigation pioneers the assessment of temporal variations in compression exerted by elastic stockings during CS under spinal anesthesia. Remarkably, despite unchanged ankle joint and lower leg circumferences, we observed a significant surge (P < 0.01) in compression pressure immediately after the operation. This suggests that the internal pressure in the lower leg beneath the elastic stockings may potentially increase while sensory deprivation due to spinal anesthesia is ongoing. This phenomenon may be attributable to complex factors, including altered fluid distribution subsequent to spinal anesthesia [6], transfusion to counteract hypotension, and autologous blood transfusion due to uterine restoration. According to the practice guidelines for obstetric anesthesia [7], in CSs under spinal anesthesia, large infusions of colloidal or crystalloid fluid are administered to maintain uteroplacental blood flow as a way to counteract the relative reduction in circulating blood volume that occurs as a result of anesthesia-induced peripheral vasodilation. Such massive infusions prior to anesthesia are acceptable because massive hemorrhage may occur after the delivery of a child due to uterine relaxation. However, if uterine contractions are good and bleeding is minimal, a large amount of blood stored in the pregnant uterus is returned to the maternal circulatory system [8]. Uterine blood flow increases markedly during pregnancy, and in the case of a singleton pregnancy, the average uterine blood flow is 500-600 mL/min at 34-40 weeks [9]. In the secondary analysis of the present study, there was a difference in the immediate postoperative compression pressure between patients with intraoperative blood loss of more than 1,000 g and those with intraoperative blood loss of less than 1,000 g. In other words, we speculate that when blood loss is relatively small, the return of blood stored in the gestational uterus, in addition to the massive infusion of fluid before anesthesia, maintains circulating blood volume, which in turn increases the compression pressure on the lower extremity. Given the small sample size in the secondary analysis and the absence of previous research on the relationship between the distribution of vascular beds and systemic blood volume throughout the body during spinal anesthesia, this phenomenon warrants additional descriptive studies. However, clinicians should remain vigilant for potential occlusive neuropathy while sensory nerve blockade in the lower extremities persists during the immediate postoperative period.
One distinguishing factor of a CS compared to other surgical procedures is that the mother must assume caregiving responsibilities for her child shortly after the operation. A previous study examining postpartum recovery after CS indicated that recovery is notably slower for several days following the surgery [10]. Moreover, pain, anxiety, and other physical abnormalities experienced after a CS can significantly impact the mother's recovery and are linked to the development of postpartum depression [1]. It has been observed that if postpartum depression is left untreated, it can substantially affect not only the mother's mental health but also the physical and mental development of the child [11]. Therefore, ensuring a healthy recovery with minimal complications after a CS is essential for positively influencing perinatal care. Although infrequent, the potential increase in intra-leg pressure from elastic stockings should be considered, as the development of strangulation neuropathy can interfere with both the physical and psychological recovery of pregnant women.
A limitation of this study is that none of the participants had postoperative peripheral neuropathy. Therefore, the clinical significance of the increased lower extremity compression pressure at POST observed in this study remains uncertain. However, the prevalence of peripheral nerve injury after delivery is small, approximately 0.3-2% of all deliveries [1], making studies powered to detect differences in nerve injury unfeasible. Notably, four of the participants had pressure levels higher than those recommended for the elastic stockings used in this study; clearly, this increase may have contributed to an increased risk of potential entrapment neuropathy. Therefore, measuring compression pressure immediately post-CS would be prudent, and alternative thromboprophylaxis should be considered when an increase in compression pressure is detected. Furthermore, this is a small observational study, and the findings may not be generalizable to wider populations. Further studies in a broader population are warranted.
Conclusions
The application of elastic stockings could augment compression in the lower extremities immediately post-CS under spinal anesthesia, posing a risk of entrapment neuropathy. This study underscores the importance of balancing thromboembolic risk reduction with the potential risks of over-compression in patients who undergo CS with spinal anesthesia. Further research is needed to fully understand the clinical implications.
FIGURE 1: Circumferences of the ankles and lower legs and elastic stocking pressure at each time point: (i) circumferences of the right ankle joints; (ii) circumferences of the right lower leg; (iii) elastic stocking pressure (right); (iv) circumferences of the left ankle joints; (v) circumferences of the left lower leg; and (vi) elastic stocking pressure (left). **P < 0.01. 6HR, six hours after anesthesia; ns, not significant; PRE, before anesthesia induction; POST, at the end of the surgery.
FIGURE 2: Intraoperative blood loss of more than 1,000 g (group P) compared with less than 1,000 g (group N): (i) comparison of compression pressure in the right lower leg between the two groups and (ii) comparison of compression pressure in the left lower leg between the two groups. *P < 0.05. 6HR, six hours after anesthesia; ns, not significant; POST, at the end of the surgery.
Biological Impact of Shorter Wavelength Ultraviolet Radiation-C
Life on earth has constantly coped with the impact of solar radiation, especially solar ultraviolet radiation (solar UV). Various biological mechanisms protect us from solar UV. New devices have emerged that emit shorter wavelength UV-C, i.e. wavelengths below the 254 nm emitted by conventional UV germicidal lamps. These shorter wavelength UV-C emitting devices are useful for various purposes, including microorganism inactivation. However, as solar UV-C does not reach the earth surface, the biological impacts of UV-C have been studied using 254 nm germicidal lamps, and those of shorter wavelength UV-C are rarely known. To balance the utility and risk of UV-C, the biological effects of these new UV-C emitting devices must be investigated. In addition, our knowledge of the biological impacts of the entire UV range (100-400 nm) and its wavelength dependence must be enhanced. In this review, we briefly summarize the biological impacts of shorter wavelength UV-C. Mechanisms of UV-C-induced cellular damage and factors affecting the microorganism inactivation efficiency of UV-C are discussed. In addition, we theoretically estimate the probable photocarcinogenic action spectrum of shorter wavelength UV-C. We propose that increasing the knowledge on UV-C will facilitate the adoption of new shorter wavelength UV-C emitting devices in an optimal and appropriate manner.
INTRODUCTION
Ultraviolet radiation C (UV-C) is defined as UV falling within the wavelength range of 100-280 nm. As it is absorbed by the ozone layer, solar UV-C cannot reach the earth surface. Conventional germicidal lamps that primarily emit 254 nm UV are routinely utilized for surface sterilization because this wavelength is effective in killing microorganisms; however, they are not safe for human skin due to their genotoxic effects. Recently developed much shorter (<254 nm) wavelength UV-C emitting devices are less hazardous to mammalian cells, in addition to being efficient in microorganism disinfection. Recently, shorter wavelength UV-C ranging from 200 to 230 nm is sometimes called "far UV-C". Issues concerning the use of UV-C emitting devices are becoming more relevant in the "COVID-19 era" to medical professionals and people working in public spaces. This has led to a burst in investigations related to the usefulness and safety of various shorter wavelength UV-C sources, especially far UV-C, which are efficient in killing various microorganisms. Solar radiation is beneficial: it gives us warmth and brightness, activates vitamin D3, drives photosynthesis, and acts as a natural disinfectant. However, it is also one of the most dangerous environmental hazards, because acute exposure to sunlight causes cutaneous inflammation, sunburn and immune suppression, while chronic exposure causes skin cancer. The various bioeffects of solar radiation are largely attributed to solar UV. UV is a well-known mutagen and carcinogen. It induces carcinogenic somatic mutations by causing DNA photolesions at dipyrimidine sites. In addition, the inflammation and immune suppression induced by UV also play a crucial role in the development of skin cancers (1). Both the beneficial and harmful aspects originate from the same UV characteristic of inducing DNA lesions: both the disinfection and carcinogenic potencies depend on the efficiency of causing DNA lesions. Therefore, knowing the subcellular location of the DNA becomes pertinent to understanding whether a given UV wavelength can be absorbed by it. Although rapidly increasing, the existing literature on this aspect of short-wavelength UV-C is lacking. In this review, we summarize the state-of-the-art knowledge regarding UV-C, including far UV-C, to emphasize existing knowledge and future directions. We discuss the bio-impact of UV, with a major focus on shorter wavelength UV-C emitted by artificial devices. In addition, we discuss the disinfection efficacy and the acute and chronic bio-effects of the safer far UV-C wavelengths within the UV-C spectrum.
human health is crucial. Therefore, previous studies on DNA photolesions have focused on UV-B and 254 nm UV-C.
Absorption of UV-C (100-280 nm) or UV-B (280-315 nm) by DNA excites the nucleobases, resulting in the formation of covalently linked dimeric photoproducts, namely cyclobutane pyrimidine dimers (CPD) and pyrimidine (6-4) pyrimidone photoproducts ((6-4)PP), at adjacent dipyrimidine sites (together called "dipyrimidine photoproducts"). Whereas UV-A (315-400 nm) and visible light tend to participate in the formation of reactive oxygen species (ROS) in the presence of photosensitizers and thereby indirectly produce oxidative DNA lesions, UV-B produces dipyrimidine photoproducts through direct excitation and oxidative pathways. Radiation biologists have shown that the DNA absorption spectrum is concordant with the action spectra for killing Escherichia coli or for the induction of mutations in fungal spores. Thus, DNA is the target molecule for these UV-mediated effects. Further, CPD is the major class of UV-induced DNA lesions involved in cytotoxicity and mutagenesis (2). Accumulation of DNA lesions in response to repeated and prolonged exposures to sunlight results in skin carcinogenesis.
As mentioned above, among UV-induced DNA photolesions, CPD is the major molecule involved in cytotoxicity and mutagenesis. ROS cause various biological effects through redox-signaling pathways and produce oxidative DNA lesions that also play a role in carcinogenesis (3). The guanine base in genomic DNA is highly susceptible to oxidative stress, as it possesses the lowest oxidation potential of all nucleobases; 8-oxoguanosine (8-oxoG) is a sensitive marker of oxidative DNA damage.
On the other hand, in human skin cells in culture and in vivo, exposure to UV-A produces CPDs at a higher yield than 8-oxoG (4), thereby implying that UV-A may interact with photosensitizers in the body to produce CPD. Hydrochlorothiazide (HCT, a commonly used diuretic medicine) significantly enhances UV-A-mediated thymine dimer (T<>T) production in an oxygen-independent manner (5), indicating that excited HCT molecules function as UV-A absorbing chromophores that transfer energy to adjacent pyrimidines, resulting in the formation of T<>Ts. As such, photosensitizers similar to HCT might facilitate UV-A-mediated production of T<>Ts in human skin. These data suggest that studies on DNA photolesions should consider the location, the surrounding molecules, and the mechanisms that could directly or indirectly mediate the effect of UV on target DNA; moreover, one should be aware of photosensitizers in the DNA environment.
DNA LESIONS CAUSED BY SHORTER WAVELENGTH UV-C AND ITS BIOLOGICAL IMPACT
In the late 1990s, a new type of UV irradiation equipment, namely the dielectric barrier discharge excilamp, with emission wavelengths in the range 170-350 nm, was developed using transitions of excited dimers/complexes of rare gas halides (6). Recently, many devices emitting UV-C other than 254 nm, including far UV-C, have been invented and introduced into medical and industrial use. Despite the recent increase in research on shorter wavelength UV-C, knowledge on the use of excilamps in medical (e.g., disinfection of surgical sites, water sterilization) or other occupational settings is still lacking. More studies detailing the characteristics of shorter wavelength UV-C are desirable.
DNA has two absorption peaks in the UV-C region: 260 nm and 200 nm. Absorption of UV by DNA substantially decreases at longer UV wavelengths (UV-B and UV-A). The lowest absorption is observed between the two peaks, at 240 nm. Below 200 nm, UV absorption by DNA first decreases slightly toward 180 nm and then increases again toward even shorter wavelengths (Fig. 1, dotted line). Matsunaga et al. (7) measured the action spectra for the induction of thymine dimers and (6-4)PP in DNA using 150-300 nm UV and monoclonal antibodies against thymine dimers. Both types of DNA photolesions are efficiently produced by 180-280 nm UV-C; 260 nm UV, which is known to be the most efficient at inducing dipyrimidine photoproducts, falls within this range. In addition, 200 nm UV is also efficient at inducing dipyrimidine photoproducts. The action spectra for the formation of thymine dimers and (6-4)PP were similar for 180-300 nm UV. However, wavelengths <160 nm produced 9-fold more thymine dimers than (6-4)PP (Fig. 1).
The extent of UV-induced cytotoxicity and mutagenicity (in mammalian cells or microorganisms) mainly depends on the amount of DNA photolesions produced. The next crucial determinant is the penetrative ability of the UV, i.e. whether the UV can reach the target DNA in the hosts, bacteria or viruses. As expected, this depends on the size of the host cell and on the location and environment of the microorganisms.
DNA PHOTOLESIONS CAUSED BY 193 NM UV AND ITS BIOLOGICAL IMPACTS
Kochevar et al. investigated the amount of cellular DNA lesions produced and the biological impact of irradiating cells with 193 nm UV using several methods, including an endonuclease-sensitive sites (ESS) assay, measurement of unscheduled DNA synthesis (UDS), and a colony formation assay. They quantified pyrimidine dimer formation, UDS ability and the number of surviving colonies. They found that a higher fluence of 193 nm UV is required to produce outcomes similar to 254 nm UV, i.e. 193 nm UV is strongly absorbed by proteins in the cellular constituents before reaching nuclear DNA (8). Although mostly dependent on the constituent amino acids, the absorption spectra of most proteins peak around 190 nm (9). A dose of 1 J m−2 of 254 nm UV produces 17 ESS/Mb in normal human fibroblasts (NHFs), whereas the same dose of 193 nm UV produces 0.2 ESS/Mb. They also studied the effect of cell shape on UV-induced ESS numbers using NHFs and Chinese hamster ovary (CHO) cells. The ESS numbers in response to 254 nm UV exposure were similar between the two cell types, whereas CHO cells had less than half the number of ESS found in NHFs in response to 193 nm exposure. This discrepancy in ESS numbers between the two cell types can be explained by differences in the distance of the nucleus from the cell membrane. In the round CHO cells the cell membrane to nucleus distance is approximately 1.09 µm (at the center), while in NHFs it is 0.52 µm; thus, 254 nm UV penetrates deeper into the cell compared with 193 nm UV. CHO cells have lower ESS numbers with 193 nm UV than with 254 nm UV for three reasons: (1) the thickness of their cytoplasm, (2) the lower penetrance of 193 nm UV for physical reasons (shorter wavelength UV penetrates less deeply) and (3) the higher absorption of 193 nm UV by proteins in the cytoplasm between the cell membrane and nucleus. On the other hand, 254 nm UV is capable of reaching the nucleus; its absorption by protein was noted to be low. Although 193 nm UV causes fewer photolesions, these results and earlier reports show that DNA absorbs 193 nm UV and that 193 nm UV can penetrate the cellular membrane, in vitro, to reach the DNA (Fig. 1).
Results from colony formation assays suggest that the formation of dipyrimidine photoproducts plays only a partial role in 193 nm UV-induced cytotoxicity. Cells from patients with xeroderma pigmentosum (XP, a nucleotide excision repair disorder) exposed to 193 nm UV had D₀ values (the dose giving a 37% survival rate) approximately 49-fold lower than those of normal cells. If the lethality could be attributed solely to the formation of dipyrimidine photoproducts, the ratio of D₀ values should be similar to the aforementioned ESS ratio (0.2 vs 17 ESS/Mb), but this was not the case. Therefore, although unrepaired dipyrimidine photoproducts contribute to 193 nm UV-induced cytotoxicity, they are not solely responsible for it.
DNA PHOTOLESIONS CAUSED BY 207 NM UV AND ITS BIOLOGICAL IMPACT
Recently, lamps emitting 207 nm and 222 nm UV have been developed and shown to be harmless to murine skin. Buonanno et al. compared the epidermal damage caused by equivalent doses (1.57 kJ m⁻²) of 207 nm and 254 nm UV in SKH hairless mice; four relevant cellular and molecular damage endpoints were evaluated 48 h after UV irradiation. They used excimer lamps emitting monoenergetic 207 nm UV (based on a Kr-Br gas mixture) with a custom bandpass filter to remove essentially all but the dominant 207 nm wavelength (10). They found that CPD-positive and (6-4)PP-positive epidermal cells were scarce after irradiation with 207 nm UV. However, after irradiation with 254 nm UV, CPD- and (6-4)PP-positive cells accounted for 50% and 30% of epidermal cells, respectively. In addition, while epidermal thickness and positive staining for Ki-67 were similar between 207 nm UV and sham irradiation, both were significantly increased by 254 nm UV irradiation.
DNA PHOTOLESIONS CAUSED BY 222 NM UV-C AND ITS BIOLOGICAL IMPACT
Further, to narrow down the wavelength range that is harmful to microorganisms but not to host tissue, they extended the study to 222 nm UV-C. They used a krypton-chlorine (KrCl) excimer lamp with a custom bandpass filter to remove essentially all but the dominant 222 nm wavelength, so the lamp emitted principally 222 nm UV. They found that a 222 nm UV fluence sufficient for 4-5 log inactivation of microorganisms (around 0.6-1 kJ m⁻²) did not produce CPDs in the epidermis, whereas 254 nm UV did (11). Moreover, Narita et al. (12) confirmed that 10 daily exposures to 222 nm UV at a dose of 4.5 kJ m⁻² per exposure did not produce CPDs in the epidermis of hairless mice, suggesting that repetitive irradiation has no carcinogenic consequences. Exposure to 222 nm UV also did not induce acute corneal damage in rats (13).
EFFECT OF LONG-TERM EXPOSURE TO FAR UV-C
The most critical effect of 254 nm UV in humans and animals is skin carcinogenicity caused by genotoxicity (14,15). One of the main causes of UV-induced skin tumors is the formation of the highly mutagenic DNA lesions called dipyrimidine photoproducts, which are carcinogenic when left unrepaired (16,17). The photocarcinogenic effects of 222 nm UV were tested by repetitive, long-term irradiation of hairless mice with 222 nm UV, using an irradiation protocol known to yield a 100% skin-tumor incidence rate in wild-type mice (18,19). The UV source was a KrCl excimer lamp with a filter to remove nearly all wavelengths except the 222 nm wavelength. Relative to the 200-230 nm band (taken as 100%), the emitted energy in the 235-280 nm and 280-320 nm ranges was 0.13% and 0.04%, respectively.
To evaluate the safety of these UV sources, we sought direct evidence beyond the CPD formation measured in earlier studies (10-12); an XP mouse model with a DNA repair disorder was therefore employed to evaluate the risk of 222 nm UV precisely and sensitively. Patients with XP are characterized by multiple, early-onset malignant skin tumors in sun-exposed areas (20,21). They have a >10 000-fold increased risk of non-melanoma skin cancer and a >2000-fold increased risk of melanoma before the age of 20 years (22). Likewise, the XP model mice used in the study were extremely hypersensitive to UV and highly susceptible to UV-induced skin carcinogenesis (21,23). CPD formation was recognized only in the uppermost layer of the epidermis of XP model mice, even at doses as high as 10 kJ m⁻². Tumors were absent in both XP model and wild-type mice given repetitive irradiation with 222 nm UV-C over a course of 15 weeks followed by 10 weeks of follow-up observation. Further, this irradiation protocol did not significantly affect the stratum corneum of the mice. Furthermore, inflammatory reactions, such as erythema and ear swelling, were absent in both wild-type and XP model mice following 222 nm UV-C exposure (24). In the same study, 222 nm UV-irradiated mice were examined for various chronic ophthalmic effects. Irrespective of Xpa genotype, these mice did not show significant changes in corneal or retinal tissues throughout the period of examination. Welch et al. (25) also reported that skin tumors were absent in wild-type hairless albino mice, even after 66 weeks of chronic irradiation with 222 nm UV.
EFFECT OF EXPOSURE TO 222 NM UV ON PLANTS
Ohtake et al. (26) reported that 222 nm UV caused greater damage to the guard cells and epidermal cells of the Arabidopsis plant than 254 nm UV. They used a 222 nm KrCl excimer lamp, which also emits 235-265 nm UV at 9.0% of the integrated intensity. Nevertheless, they deduced that the lower growth rate of Arabidopsis after exposure to the 222 nm KrCl excimer lamp, compared with exposure to a 254 nm germicidal lamp, was due to the 222 nm component rather than the 235-265 nm portion of the lamp spectrum. This inference presumably rests on the observation that E. coli is more susceptible to 254 nm UV than to 222 nm UV, whereas P1 bacteriophages are more sensitive to 222 nm UV than to 254 nm UV, consistent with data reported elsewhere (27). They suggested that severe mitochondrial damage caused by 222 nm UV could lead to such effects. These results imply that the susceptibility of living organisms to UV depends on the cell organelle critical for survival/growth, the chromophore, and other nearby molecules (proteins, amino acids, lipids, or nucleic acids) capable of absorbing the given UV radiation.
POTENCY OF SHORTER WAVELENGTH UV-C ON INACTIVATION OF MICROORGANISMS
The genotoxicity through which UV inactivates microorganisms is attributed to UV absorption by nucleic acids and is mainly related to its potency in producing dipyrimidine photoproducts. The UV action spectrum for lethality in E. coli is concordant with the action spectrum of CPD formation. Taylor et al. (28), using repair-deficient bacterial mutants, showed that the efficiency of 222 nm UV in inactivating bacteria and their spores can be attributed to its ability to induce DNA photolesions (especially pyrimidine dimers).
As described previously, 222 nm UV produces CPDs only in the uppermost stratum corneum layer (to which microorganisms usually adhere) of the host (mice). Consequently, shorter-wavelength UV can be used to inactivate microorganisms on the stratum corneum surface while being less harmful to the host: it penetrates to bacterial/viral nucleic acids (located ≲1 µm from the cell membrane) but cannot reach the epidermal cell nuclei of the host (located ≳10 µm from the cell membrane). Buonanno et al. (11) showed that 222 nm UV emitted by a filtered KrCl excimer lamp can kill methicillin-resistant Staphylococcus aureus (MRSA) as efficiently as 254 nm UV emitted by a germicidal UV lamp. However, they had previously shown that 254 nm UV also efficiently kills human cells (10). Narita et al. showed that the efficiency of UV in inactivating microorganisms varies between species. Some microorganisms, including Bacillus cereus, Clostridium sporogenes, and Clostridioides difficile, are more susceptible to 222 nm UV, while others, such as Aspergillus niger spores and Trichophyton rubrum spores, are more susceptible to 254 nm UV. The 222 nm and 254 nm UV were comparable in their ability to inactivate viruses: although neither was particularly effective in disinfecting feline calicivirus (FCV), both could inactivate influenza A viruses (29). Further, Buonanno et al. (30) showed that 222 nm UV safely inactivates airborne human coronaviruses.

Although pyrimidine dimers are the primary photoproducts of UV-C-exposed DNA, Setlow et al. found that Bacillus subtilis spores have a unique photochemistry: UV-induced photolysis of DNA in the spores does not produce detectable thymine dimers but instead produces 5-thyminyl-5,6-dihydrothymine, a unique spore photoproduct (SP) (31). Fukui et al. (32) reported that irradiating healthy volunteers (n = 20) with 222 nm UV at a sterilizing dose of 0.5-5 kJ m⁻² is safe and effective in disinfecting the skin surface, suggesting that 222 nm UV could be used in the future as a disinfectant in surgical fields. Wang et al. reported a 2 log reduction in Bacillus subtilis spore count on irradiation with 172 nm, 222 nm, and 254 nm UV at fluences of 8.7, 0.22, and 0.4 kJ m⁻², respectively; thus 222 nm UV was much more efficient, and 172 nm UV less efficient, than 254 nm UV in killing B. subtilis (33). The disinfection efficacy of vacuum UV (VUV, 100-200 nm) in an aqueous environment is attributed to the various reactive oxygen species (ROS) produced by UV-induced photolysis of water, primarily hydroxyl radicals along with hydroperoxyl radicals, hydrogen peroxide, and superoxide radicals (33); involvement of ozone in the inactivation might also be considered. However, the involvement of ozone seems negligible in disinfection mediated by UV near 220 nm, because UV absorption by oxygen peaks at about 150 nm, with almost no absorption near 220 nm. Far UV-C-mediated inactivation of microorganisms is summarized elsewhere (34). Recently, a report indicated that 222 nm UV is less efficient than 254 nm UV in inactivating severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) in human saliva (35). Further, the dose required to inactivate SARS-CoV-2 in saliva is 30 times higher than that required to inactivate it in saline solution (PBS). Further investigation will be required to identify the optimal ways to inactivate microorganisms; the optimal wavelength and method will depend on the type of microorganism and its local environment.
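Inactivation results such as the 4-5 log and 2 log reductions quoted above are often summarized with simple first-order (single-hit) kinetics, in which the surviving fraction decays exponentially with fluence. The sketch below illustrates that relationship; the rate constant is an assumed placeholder chosen only so that roughly 0.8 kJ m⁻² gives a 4-5 log reduction, and it is not a measured value for any particular organism or wavelength.

```python
import math

def log_reduction(fluence_j_m2: float, k_m2_per_j: float) -> float:
    """Log10 reduction under first-order (single-hit) kinetics:
    surviving fraction = exp(-k * fluence)."""
    return k_m2_per_j * fluence_j_m2 / math.log(10)

def fluence_for_log_reduction(target_log: float, k_m2_per_j: float) -> float:
    """Fluence needed to reach a target log10 reduction."""
    return target_log * math.log(10) / k_m2_per_j

k_assumed = 0.013  # m^2 per J (hypothetical rate constant)
print(f"{log_reduction(800, k_assumed):.1f} log reduction at 0.8 kJ/m^2")
print(f"{fluence_for_log_reduction(2, k_assumed):.0f} J/m^2 needed for a 2 log reduction")
```

Real dose-response curves often show shoulders or tailing, so such single-hit fits are only a first approximation when comparing wavelengths or organisms.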
EVALUATION OF MINIMUM RESPONSE BY SHORTER WAVELENGTH UVC IN MICE SKIN
Invention and development of excimer lamps emitting shorter-wavelength UV have drawn the attention of environmental and medical researchers, and the literature on their safety and usefulness is growing (10-12, 24, 25, 32, 33). Previously, to establish a safety threshold for shorter-wavelength UV-C in occupational environments, biological responses and DNA lesions caused by irradiation with shorter-wavelength UV-C were examined in vivo by evaluating cutaneous inflammatory responses to UV at 207 nm, 222 nm, 235 nm, and 254 nm (36). First, a qualitatively and quantitatively appropriate strategy for measuring the biological reactions was sought, and the minimal perceptible response dose (MPRD) was adopted for this purpose. The MPRD is determined by visual inspection for any subtle cutaneous response, such as erythema, edema, or scale, that can be observed or perceived. Erythema was scarcely observed following irradiation with 207 nm and 222 nm UV, but it was visible after 254 nm UV irradiation; edema and scale formation, however, were evident after shorter-wavelength UV-C irradiation. The MPRD for 207 nm, 222 nm, and 235 nm was determined to be >15, 15, and 2.0 kJ m⁻², respectively (36) (Table 1). We proposed that the MED/MPRD action spectrum follows the reciprocal of the product of the CPD-formation action spectrum and the transmittance through the stratum corneum. Tentative MED/MPRD action spectra were accordingly simulated as the reciprocal of the reported CPD-formation action spectrum multiplied by the stratum corneum transmittance (in humans and mice). MED estimated in this manner fit well with reported MED values at each wavelength in the UV-B range (especially at 300 nm and longer wavelengths). In the shorter-wavelength UV-C range, erythema is not elicited; only edema is elicited, followed by scale formation. The curve of the estimated MED/MPRD action spectrum was concordant with the MPRD values mentioned above (>15, 15, and 2.0 kJ m⁻² at 207 nm, 222 nm, and 235 nm, respectively). Although irradiation with shorter-wavelength UV-C did not form CPDs in mouse epidermal cells other than the stratum corneum, edema was still observed, indicating that the biological response to shorter-wavelength UV-C differs from that to conventional UV-C. Although the underlying mechanisms are unknown, the MPRD could be used as an indicator for assessing safety in the shorter-wavelength UV-C range (Fig. 3).
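Stated compactly, the proposed relationship can be written as follows; this is simply a restatement of the argument above, with the wavelength-dependent quantities left symbolic and the proportionality constant unspecified.

```latex
\mathrm{MED/MPRD}(\lambda) \;\propto\; \frac{1}{A_{\mathrm{CPD}}(\lambda)\, T_{\mathrm{SC}}(\lambda)}
```

where A_CPD(λ) is the action spectrum of CPD formation and T_SC(λ) is the transmittance of the stratum corneum at wavelength λ.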
PROTECTION SYSTEM AGAINST UV IN LIVING ANIMALS
Having evolved under the sun, humans cannot avoid sun exposure. Therefore, various physiological protections exist to shield them from solar UV insults, including the following. (1) DNA repair systems: dipyrimidine photoproducts are repaired by the nucleotide excision repair system (37). If dipyrimidine photoproducts persist into the replication cycle, translesion synthesis inserts A-A opposite the lesion, which, in the case of T-T dimers, restores the correct sequence. If photolesions are too extensive to be repaired, apoptosis is elicited to prevent the induction of mutations; this step manifests as "sunburn cells" (apoptotic keratinocytes). In a sense, genotoxicity is prevented by this apoptotic response, and "sunburn" can be regarded as a warning sign against genotoxicity. In the UV-B range, where UV-induced erythema is prominent, the action spectrum of erythema is closely correlated with the action spectrum of photocarcinogenesis. (2) Presence of the stratum corneum, whose major constituent is keratin, rich in cystine (absorbance maxima around 210 nm): keratin is produced by keratinocytes in the epidermis and forms the stratum corneum, which functions as a skin barrier against the environment. Low penetrance of shorter-wavelength UV is a physical characteristic dependent on the wavelength (the longer the wavelength, the deeper the UV penetrates). In addition, one reason for the aforementioned negligible epidermal injury in mice after irradiation with 207 nm and 222 nm UV could be the absorption of UV by keratin (Fig. 2) (38). Figure 2 depicts the absorbance of keratin in different eluents. In the literature, keratin absorption spectra are available for wavelengths above 220 nm. Keratin has two absorbance maxima, at approximately 280 nm and at 240 nm and below (with substantially higher absorption below 250 nm); its absorbance is lowest at approximately 250 nm.
ESTIMATING ACTION SPECTRA OF UV-INDUCED PHOTOCARCINOGENESIS
Two key factors determine the occurrence of UV-induced skin cancer: (1) the efficacy of the given UV wavelength in inducing pyrimidine dimer formation, and (2) the transmittance of the UV to the basal cells in the epidermis, where cancer stem cells reside. Figure 1 depicts the action spectrum of CPD formation (7). Previously, we showed that the diffuse transmission of UV (235-360 nm) through commercially available human stratum corneum (approximately 20 µm thick) is wavelength-dependent (36); for UV with wavelengths between 200 nm and 235 nm, transmittance was linearly correlated with wavelength. The transmittance of UV through human stratum corneum is shown by the green line (Fig. 4a). At wavelengths <240 nm, UV transmittance decreases by approximately two to three orders of magnitude. This substantial decrease in transmittance mirrors the substantial increase in UV absorbance by keratin at wavelengths <240 nm (Fig. 2). The target cells for skin cancer development by photocarcinogenesis are the basal cells at the bottom of the epidermis, where cancer stem cells originate through mutations. Keratinocytes, which are rich in keratin fibers, constitute 90% of the epidermis. Therefore, we postulate that the transmittance of UV through the epidermis can be approximated by the transmittance through 80-90 µm of stratum corneum (light blue line, Fig. 4a). From these results, it is reasonable to state that wavelength-dependent photocarcinogenesis at shorter UV-C wavelengths is a function of the transmittance to the epidermal basal cells multiplied by the action spectrum of CPD formation (purple line; Fig. 4b). The calculated action spectra for human and mouse are shown by the blue and red lines in Fig. 4b; these are consistent with reports that the maximum of the UV-induced carcinogenesis action spectrum in animal experiments falls in the UV-B range (293 nm) (39). The estimated photocarcinogenic action spectrum at shorter wavelengths indicates that the photocarcinogenesis risk due to chronic exposure to 222 nm UV is about 7 orders of magnitude (7 log) smaller than that due to exposure to 300 nm or 254 nm UV.
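The estimate just described can be mimicked numerically. In the sketch below, the transmittance measured through ~20 µm of stratum corneum is extrapolated to the depth of the basal layer using Beer-Lambert scaling and then multiplied by the CPD action spectrum. All input values are hypothetical placeholders chosen only to reproduce the qualitative conclusion (a several-log lower weight at 222 nm); they are not the measured data behind Fig. 4.

```python
def transmittance_to_depth(t_20um: float, depth_um: float) -> float:
    """Scale a transmittance measured through ~20 um of stratum corneum to a deeper
    target assuming Beer-Lambert behaviour: T(d) = T(20 um) ** (d / 20 um)."""
    return t_20um ** (depth_um / 20.0)

def carcinogenic_weight(a_cpd: float, t_20um: float, basal_depth_um: float = 85.0) -> float:
    """Wavelength-dependent photocarcinogenic weight, modelled as the CPD-formation
    action spectrum multiplied by the transmittance down to the basal cell layer."""
    return a_cpd * transmittance_to_depth(t_20um, basal_depth_um)

# Hypothetical inputs (not measured values): relative CPD efficiency and 20-um
# stratum corneum transmittance at two wavelengths.
w_254 = carcinogenic_weight(a_cpd=1.0, t_20um=0.30)
w_222 = carcinogenic_weight(a_cpd=0.8, t_20um=0.005)
print(f"estimated risk ratio 222 nm / 254 nm ~ {w_222 / w_254:.1e}")
```

With these placeholder numbers the ratio comes out around 10⁻⁸, i.e. many orders of magnitude below the 254 nm weight, which is the qualitative point of the estimate.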
FUTURE DIRECTION: BALANCING RISK AND USEFULNESS
The emergence of new devices emitting far UV-C is useful for various purposes. UV is one of the best ways to disinfect safely because it inactivates microorganisms immediately without leaving chemical residues. However, UV can cause acute and chronic injuries to the eyes and skin, and the most critical outcome of chronic exposure is the development of skin cancers. The mechanism underlying both the sterilizing potency and the skin carcinogenic potency of UV irradiation is the formation of DNA photolesions. The penetration depth of 222 nm UV is too shallow to reach the cellular DNA of the host, but it can reach the bacterial/viral/fungal DNA present on the surface of the host skin. Shorter-wavelength UV-C is therefore less genotoxic to mammalian skin and eyes than 254 nm UV, while remaining useful for inactivating microorganisms on the skin surface. However, we must take into consideration that the inactivation efficacy of UV depends substantially on the structure and size of the microorganisms, the localization of their DNA, and the molecules in the environment being irradiated.
The available literature data on shorter-wavelength UV-C are summarized in Table 1. The absence of epidermal CPD staining after far UV-C exposure supports its safety. Studies on UV-induced photoreactions are, however, mostly limited to CPD formation. Moreover, 193 nm UV was lethal to cells, albeit at a much higher dose than 254 nm UV, based on the colony formation assay; that is, 193 nm irradiation is cytotoxic (8). Cellular UV cytotoxicity assessed by colony formation has not yet been studied for 222 nm UV. Investigating the details of the various mechanisms induced by shorter-wavelength UV-C, such as DNA lesion formation (by different pathways), photolysis, interaction of UV with proteins, and effects on mitochondrial DNA, is required to understand the full scope of the biological impact of shorter-wavelength UV-C.
Further, the inactivation efficiency and its mechanisms are not equal across all microorganisms. Therefore, UV treatment must be standardized individually for different microorganisms, taking into account their size, structure, and local environment as well as the physical characteristics of the given UV wavelength. The filter characteristics and the resulting spectrum used in such experiments must be described in detail. Studies on the biological effects of the 200-235 nm components of the UV spectrum should be encouraged to deepen our understanding and help balance their usefulness and risk. Improved knowledge of these new UV-C-emitting devices will inform their appropriate and optimal use.
Improving Thermal Conductivity and Tribological Performance of Polyimide by Filling Cu, CNT, and Graphene
The thermal conductivity, mechanical, and tribological properties of polyimide (PI) composites filled with copper (Cu), carbon nanotubes (CNT), graphene nanosheets (GNS), or their combination were investigated by molecular dynamics (MD) simulation. The simulation results suggest that Cu improves thermal stability and thermal conductivity but reduces the mechanical and tribological properties. CNT and GNS significantly improved the thermal and tribological properties at low content but degraded them at high content. In this study, the modification, friction, and wear mechanisms of the different fillers in polyimide were revealed by observing the evolution of the frictional interface at the atomic scale and by extracting microscopic information such as the relative atomic concentrations and the temperature and velocity distributions at the friction interface.
Introduction
Polyimide (PI) composites with good mechanical properties, heat resistance, and tribological properties are ideal friction materials for ultrasonic motors [1]. However, the thermal conductivity of polyimide is only 0.30 W/m·K, so heat readily accumulates at the friction interface in a high-vacuum environment, which seriously limits its application in space.
Modification is one of the most effective ways to improve the thermal conductivity of PI. Compared with polymer materials, metals such as copper (Cu), silver (Ag), gold (Au), and other metals, as well as carbon-based materials such as carbon nanotubes (CNT) and graphene (GN), have much higher thermal conductivity. Yang et al. [2] used carbon particles and SiO2 to improve the thermal conductivity of PI and found that both the silicon- and carbon-based fillers were effective. Wu et al. [3] filled polyimide with a hybrid of boron nitride nanosheets (BNNSs) and silver nanoparticles (AgNPs); the thermal conductivity of the composite increased with the hybrid filler content and was significantly higher than that of the BNNSs/PI composite. Gong et al. [4] prepared graphene fabric with a mesh structure by chemical vapor deposition to improve the thermal conductivity of PI; after adding 10 layers of graphene woven fabric (about 12 wt.%), the thermal conductivity of the composite increased by 1418% compared with pure PI. He et al. [5] studied the thermal conductivity of copper-filled polyoxymethylene (POM) composites with a hot-disk analyzer and their tribological properties with an M-2000 friction and wear test machine. They found that 3 wt.% copper particles had little effect on the thermal conductivity of the POM composites, but the thermal conductivity increased with further copper content, and the friction coefficient and wear rate increased accordingly. Carbon nanotubes are also often used to improve the thermal conductivity of materials owing to their excellent intrinsic thermal conductivity [6-8]. Smith et al. [9] studied the effects of nanofillers, mainly steel nanospheres, carbon nanotubes, and graphene sheets, on the thermal conductivity of fluoropolymers and showed that carbon nanotubes and graphene could effectively improve the thermal conductivity of polymers. In addition, many researchers have modified GNS to achieve better performance in applications [10-12]. Yang et al. [13] studied the effect of differently functionalized carbon nanotubes on the friction of composites through molecular dynamics simulation; appropriate functionalization of the carbon nanotubes enhanced the interfacial bonding between the nanotubes and the matrix, thereby improving the mechanical properties of the composite and reducing the wear rate. Cai et al. [14] also found that adding a small amount of graphene nanosheets could synergistically enhance the thermal conductivity of low-dielectric-constant boron nitride/polytetrafluoroethylene composites. Although the principle of adding high-thermal-conductivity fillers to low-thermal-conductivity PI [15] is widely recognized, no study has compared the effects of copper, carbon nanotubes, and graphene on the thermal conductivity of polyimide while also examining the tribological properties.
In this paper, copper was selected as a filler because the ultrasonic motor stator is made of copper and because copper has high thermal conductivity and low cost. In addition to copper, carbon nanotubes and graphene were added to improve the thermal conductivity, mechanical properties, and wear resistance of PI. Because of the large number of additive types, the filling ratio of each type was optimized by molecular dynamics simulation to reduce the time and cost of experimental research. With advances in computing power, molecular dynamics simulation has become a mature and reliable method that is widely used in scientific research; it can reveal the intrinsic mechanisms of mixtures at the atomic level and predict their properties.
Establishment of Molecular Dynamics Model of Composite Materials
The original structure of PI is shown in Figure 1a. Cubic unit cells with periodic boundary conditions and a size of 4.0 × 4.0 × 4.0 nm³ were constructed and then filled with PI molecular chains with a degree of polymerization of 2 according to the Monte Carlo rule [16]. After molecular dynamics optimization, the packing density is close to the actual density of 1.6 g/cm³. Finally, the PI molecules were packed into the periodic cell, as shown in Figure 2a. The three modifiers shown in Figure 1 were then added to the pure PI model to form the Cu/PI, CNT/PI, and GNS/PI composite models shown in Figure 2; the graphene and carbon nanotubes are colored green for easy identification. To compare the effects of the modifiers more clearly, a composite model with all three modifiers added to PI was also designed, as shown in Figure 2e.
Model Optimization
The Condensed-phase Optimized Molecular Potentials for Atomistic Simulation Studies (COMPASS) [17] force field, which is suitable for simulating polymer systems, was used in all optimization processes. The Ewald and atom-based methods were used to treat the van der Waals and Coulomb interactions between PI, Cu, CNT, and GNS [18-20]. The detailed parameters of the optimization process are listed in Table 1. First, the Smart algorithm was used for geometry optimization to obtain the global minimum-energy configuration. To further relax the molecular chains of the model, an annealing procedure of 15 cycles was performed, and the lowest-energy model from these 15 annealing cycles was selected for dynamic optimization to obtain a sufficiently stable and reasonable model. Dynamic optimization was carried out in the isothermal ensemble at atmospheric pressure (10⁻⁴ GPa) with a time step of 1 fs.
Calculation of Thermal and Tribological Properties
Thermal conductivity measures a material's ability to transfer heat by conduction: it is the rate of heat conducted through a unit thickness of the material, per unit area and per unit temperature difference, and is usually denoted λ. The thermal conductivity is calculated from the heat flux and the temperature gradient, as illustrated in Figure 3 [21]. To better simulate the actual operation of the ultrasonic motor, the friction model shown in Figure 4 was established by pairing the composite with a copper counterpart. Pressure is applied to the surface of the PI nanocomposite by pressing a 10 Å × 10 Å × 7.2 Å copper nano-pin, and the bottom layer is a 40 Å × 40 Å × 7.2 Å copper atomic layer. Before friction, the temperature was set to 298 K, and the NVT ensemble was used for all friction simulations. The tribological properties were analyzed by sliding for 300 ps at 0.1 Å/ps under a load of 0.01 GPa and recording the time-dependent trajectory.
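Because the equation referenced above is not reproduced in the extracted text, the following is a minimal sketch of a Fourier-law estimate of λ of the kind commonly used to post-process non-equilibrium MD runs; the heat flux and temperature profile below are assumed illustrative inputs, not data from this study.

```python
import numpy as np

def thermal_conductivity(heat_flux_w_m2: float, temperature_k: np.ndarray, z_m: np.ndarray) -> float:
    """Fourier-law estimate lambda = J / |dT/dz|: J is the imposed heat flux (W/m^2)
    and dT/dz is the slope of a linear fit to the steady-state temperature profile."""
    dT_dz = np.polyfit(z_m, temperature_k, 1)[0]
    return heat_flux_w_m2 / abs(dT_dz)

# Hypothetical steady-state profile across a 4 nm cell (illustration only):
z = np.linspace(0.0, 4.0e-9, 20)          # position along the heat-flow direction, m
T = 310.0 - 5.0e9 * z                     # linear profile with dT/dz = -5e9 K/m
print(thermal_conductivity(2.0e9, T, z))  # -> 0.4 W/(m*K), the order of magnitude of PI
```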
Thermal Properties
Three filler contents were designed for each PI composite. In Cu/PI, the Cu powder content is 3 wt.%, 6 wt.%, or 9 wt.%. The carbon nanotube and graphene contents in CNT/PI and GNS/PI were set at 0.5 wt.%, 1 wt.%, and 1.5 wt.%, taking into account that carbon nanomaterials aggregate very easily, which degrades the mechanical properties [22-24].
After adding Cu to PI, the thermal conductivity of the composite improved and increased with the copper content, as shown in Figure 5a. However, the increase is small at low Cu content: the thermal conductivity of 3 wt.% Cu/PI is only 5.3% higher than that of pure PI. When the copper content reaches 9 wt.%, the thermal conductivity rises sharply to 0.521 W/m·K, an increase of 34.5% relative to pure PI. This is because, at high Cu content, heat-conduction paths form readily, so the thermal conductivity is greatly improved [25-27]. As shown in Figure 5b, the mechanical properties of the composites decreased after Cu was added. When the Cu content is 9 wt.%, Young's modulus and the shear modulus of PI are 2.62 GPa and 1.21 GPa, decreases of 20.6% and 24.4%, respectively. This is because increasing the copper content weakens the interactions between the PI chains, so the copper particles readily cause stress concentrations and promote crack formation.
After the addition of CNT, the thermal conductivity of the PI composite first increased and then decreased, as shown in Figure 6a. The thermal conductivity reached a peak of 0.88 W/m·K at a CNT content of 1.0 wt.%, 131.6% higher than that of pure PI. This is attributable to the composite effect [28]: at low mass fractions, the thermal conductivity of the composite increases with the CNT content and the high intrinsic conductivity of the CNTs dominates the overall heat conduction. With an excessive increase in CNT concentration, however, the stable arrangement of the CNTs is disrupted, their distribution becomes disordered, and the normal heat-conduction pathways are broken, so the direction and path of heat conduction become disordered and the thermal conductivity of the composite decreases. Young's modulus and the shear modulus were also evaluated, as shown in Figure 6b. When 0.5 wt.% CNT is added, Young's modulus of the PI composite reaches a maximum of 3.71 GPa (a 12.4% increase) and the shear modulus reaches 1.92 GPa (a 20% increase). This is because, when stress is applied along the axial direction of the CNT, the matrix can transfer the stress to the CNT axis through intermolecular forces, which reinforces the structure and increases the hardness of the composite. In the simulation, the CNT content was adjusted by changing the CNT length; longer CNTs disrupt the continuity of the PI matrix [29], so the mechanical properties decrease as the CNT content increases further. Figure 7a shows the thermal conductivity of the GNS/PI composites. When the GNS content was 1 wt.%, the in-plane thermal conductivity reached 0.44 W/m·K, an increase of 15.8%. The enhancement of PI thermal conductivity by GNS is governed by its specific surface area: the graphene nanosheet is a two-dimensional planar structure, both sides of which can contact PI molecules when it is embedded flat in the matrix, which increases the probability of phonon transport [30]. Both the shear modulus and Young's modulus of the GNS/PI composite (Figure 7b) decrease with increasing GNS content. Although GNS itself has very high mechanical properties, its large planar area makes the polymer matrix discontinuous, so the mechanical properties decline as the GNS content increases.
In selecting materials, we should consider not only the thermal properties but also the mechanical properties. Therefore, a combination of near-optimal modifier contents was also examined, namely a PI composite containing 3 wt.% Cu, 0.5 wt.% CNT, and 0.5 wt.% GNS. The thermal conductivities of all the materials are compared in Figure 8. Compared with pure PI and Cu/PI, the carbon-filled materials had better thermal conductivity, and 0.5 wt.% CNT/PI had the highest thermal conductivity, 1.02 W/m·K. Unfortunately, the combined composite did not show high thermal performance: judging from its structure (Figure 2e), heat-conduction channels did not form between Cu, CNT, and GNS because of steric hindrance, so its thermal conductivity is not as high as that of CNT/PI and GNS/PI. In future studies, the filler proportions and their spatial distribution will be designed to further improve the thermal and mechanical performance.
Tribological Properties
The friction coefficient (COF) and wear rate at the optimal modifier contents were obtained from the molecular dynamics simulations, as shown in Figure 9. The friction coefficient after adding copper is 16.7% higher than that of pure PI, because copper is harder than the polymer matrix and does not easily change its shape or position under shear or extrusion [31]. Compared with pure PI, the friction coefficient with CNT added decreased by 23.3%, and the reduction was the same with the GNS modifier. This is due to the strong adsorption between the CNT and the PI matrix, which clusters the PI molecules and reduces adhesion to the copper counterface, thus lowering the friction coefficient; the principle is similar for GNS. When all three modifiers were added to the PI, the friction coefficient decreased even further. To further observe the dynamical evolution of Cu, CNT, and GNS in the worn PI matrix, snapshots of the models at the end of the friction run were extracted, as shown in Figure 10. Comparing the degree of deformation of the models in Figure 10, pure PI deforms the most, while the composites containing Cu, CNT, and GNS deform less owing to their better mechanical properties; the deformation is smallest when Cu, CNT, and GNS are added to the PI simultaneously. The changes in temperature and energy during the friction process were then analyzed to explore the mechanism of friction reduction. First, the relative atomic concentration distribution along the z direction of the rubbed PI nanocomposites was extracted, as shown in Figure 11. The relative atomic concentration of the four models at the bottom contact surface is high, indicating a strong interaction between the polymer molecules and the copper layer. Compared with pure PI, the peak relative concentration after the addition of modifiers is smaller: it is reduced by 58.6%, 41.3%, and 27.5% for Cu, CNT, and GNS, respectively, and by 51.7% when the three modifiers are combined. This is because the modifiers adsorb PI, so the relative atomic concentration at the interface is lower than in pure PI; in addition, after 300 ps of friction some molecules adhered to the copper pin, further decreasing the concentration. The distribution of atomic temperature along the matrix-thickness direction during the friction simulation is shown in Figure 12. A peak temperature of 333 K appears at 54 Å along the matrix thickness, i.e., at the friction interface between the PI matrix and the copper pin. In contrast, the peak temperature at the friction interface of the modified matrices is about 300 K: the interface temperatures with Cu, CNT, and GNS are reduced by 9.6%, 10.2%, and 9.9%, respectively, and with all three modifiers the interface temperature decreases to 295 K, 11.4% lower than for pure PI. This is consistent with the atomic concentration distributions in Figure 11. A higher interface temperature not only degrades the viscous properties of the polymer but also readily produces adhesive wear, leading to a reduced service life.
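As a concrete illustration of how a friction coefficient can be extracted from such a simulation, the sketch below averages the lateral (friction) and normal forces on the pin over the steady-sliding portion of a run. The force traces are synthetic placeholders; the authors' exact post-processing is not described, so this is an assumed, generic approach rather than their actual workflow.

```python
import numpy as np

def friction_coefficient(lateral_force: np.ndarray, normal_force: np.ndarray, discard: int = 100) -> float:
    """COF estimated as the ratio of the time-averaged lateral (friction) force on
    the pin to the time-averaged normal force, discarding the initial frames before
    steady sliding is reached."""
    f_lat = np.abs(lateral_force[discard:]).mean()
    f_norm = np.abs(normal_force[discard:]).mean()
    return f_lat / f_norm

# Synthetic force traces standing in for values read from the MD trajectory output:
rng = np.random.default_rng(0)
normal = 10.0 + rng.normal(0.0, 0.5, 3000)   # arbitrary consistent force units
lateral = 2.5 + rng.normal(0.0, 0.3, 3000)
print(round(friction_coefficient(lateral, normal), 3))  # ~0.25
```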
As shown in Figure 13, the pure PI matrix also has an obvious peak velocity of 0.9 Å/ps at the friction interface, whereas the modified matrices show no such pronounced peak. Along the thickness direction, the atoms of the Cu/PI and CNT/PI composites move relatively slowly, with only a small peak at 11 Å, the friction interface between the friction pair and the composite. The velocity fluctuations of the GNS-filled composite are relatively large, and those of the composite with all three modifiers are therefore also noticeable, but neither exceeds the peak value of pure PI. This is consistent with the atomic concentration distributions and the friction-interface temperatures: fewer molecular chains interact with the copper pin, which lowers the temperature at the friction interface; the lower temperature in turn limits atomic movement and reduces the potential wear, thereby enhancing the tribological properties of the polymer matrix.
Conclusions
The effects of copper powder, carbon nanotubes, and graphene on the thermal conductivity of PI were studied by molecular dynamics simulation. The conclusions are as follows:
a. Copper can improve the thermal conductivity of PI, but because of its uneven distribution it also reduces the mechanical properties of PI; in addition, the high hardness of copper directly increases the friction coefficient of the composites.
b. Carbon nanotubes (CNT) and graphene can improve the performance of PI very effectively. A 0.5 wt.% mass fraction of carbon nanotubes can increase the axial thermal conductivity by 115.8%, and graphene can increase the maximum thermal conductivity in the two-dimensional in-plane direction by 168.4%. Both can effectively reduce the friction and wear of the composites and give them excellent tribological properties. However, the strength of the composite decreases as the content of carbon nanotubes and graphene continues to increase.
c. Filling with all three fillers simultaneously brings no obvious additional increase in thermal conductivity, but it can greatly reduce the friction coefficient and the wear rate of the composites.
Acknowledgments: The authors thank Guoqing Wang for helpful discussions.
Conflicts of Interest:
The authors declare no conflict of interest.
An exploratory examination of executive functioning as an outcome, moderator, and predictor in outpatient treatment for adults with anorexia nervosa
Objective People with anorexia nervosa often exhibit inefficiencies in executive functioning (central coherence and set shifting) that may negatively impact on treatment outcomes. It is unclear from previous research whether these inefficiencies can change over treatment. We aimed to (1) investigate whether executive functioning can improve over treatment, (2) determine whether baseline executive functioning moderates treatment outcome, and (3) examine whether baseline executive functioning predicts early change (i.e., increase in body mass index over the first 13 weeks of treatment) or remission. Method We conducted linear mixed model and logistic regression analyses on data from the Strong Without Anorexia Nervosa trial (Byrne et al. in Psychol Med 47:2823–2833, 2017). This study was a randomised controlled trial of three outpatient treatments for people with anorexia nervosa: Enhanced Cognitive Behavioural Therapy, Maudsley Model Anorexia Nervosa Treatment for Adults, and Specialist Supportive Clinical Management. Results While set shifting clearly improved from baseline to end of treatment, the results for central coherence were less clear cut. People with low baseline central coherence had more rapid reductions in eating disorder psychopathology and clinical impairment than those with high baseline central coherence. Baseline executive functioning did not predict early change or remission. Discussion The detail-focused thinking style commonly observed among people with anorexia nervosa may aid treatment outcomes. Future research that is more adequately powered should replicate this study and examine whether the same pattern of results is observed among people with non-underweight eating disorders. Supplementary Information The online version contains supplementary material available at 10.1186/s40337-022-00602-0.
Background
Anorexia nervosa is a serious mental health condition that can significantly impair physical health [1], disrupt psychosocial functioning [2], and has a mortality rate six times higher than that of the general population [3]. It is also notoriously difficult to treat, with 40% of adult clients not completing stand-alone outpatient treatments, and only 28% reaching remission at 12-month follow-up [4]. Some of this poor treatment outcome may be due to the significant inefficiencies in executive functioning that are often observed among people with anorexia nervosa [5][6][7].
These inefficiencies in executive functioning exist in two key areas: central coherence and set shifting [5]. Inefficiencies in central coherence are characterized by excessively detail-focused thinking at the expense of the bigger picture [8], and inefficiencies in set shifting by rigid, inflexible thinking with difficulty changing responses when rules in the environment change [9]. These inefficiencies could reduce the impact of therapy for people with anorexia nervosa. For example, they could result in rigid thought patterns and behaviours related to detail about food and weight that perpetuate anorexia nervosa and could interfere with adopting a big-picture perspective about life and the need for change.
It is unclear whether these inefficiencies can change over the course of treatment. Some research has shown that central coherence and set shifting changed inconsistently or not at all over treatment [10-12]. However, other research has found that central coherence and set shifting can improve over treatment, for example, when targeted with cognitive remediation therapy (for a systematic review, see [13]). The present study utilised data from the Strong Without Anorexia Nervosa trial (SWAN; [4]). We conducted exploratory, secondary data analyses to (1) make sense of the inconsistent findings by investigating whether executive functioning can change over treatment, (2) determine whether baseline executive functioning moderates treatment outcome, and (3) examine whether baseline executive functioning predicts early change in body mass index or remission. Based on Tchanturia and colleagues [13], we predicted that executive functioning would improve over treatment. In the absence of prior evidence, we also predicted that people with higher executive functioning would have better outcomes, show early change, and achieve remission.
The SWAN trial
The SWAN trial was a multi-site randomised controlled trial across three Australian states. Participants were randomly allocated to receive Specialist Supportive Clinical Management (SSCM; n = 39), Maudsley Model Anorexia Nervosa Treatment for Adults (MANTRA; n = 41), or Enhanced Cognitive Behavioural Therapy (CBT-E; n = 40). In SSCM, the first half of sessions combines clinical management and supportive psychotherapy, whereas the second half focuses on content dictated by the client [14-16]. MANTRA targets factors maintaining anorexia nervosa, specifically thinking style, socio-emotional impairments, close others' unhelpful responses to the illness, and positive beliefs about anorexia nervosa [17,18]. CBT-E for underweight clients involves motivational work, increasing dietary intake and weight, tackling eating disorder psychopathology, maintaining changes, and developing strategies to overcome setbacks [19]. Independent ratings demonstrated that all three treatments were highly distinguishable [20]. In all three treatments, participants were allocated 25 to 40 sessions based on their pre-treatment BMI (<16 = 40 sessions; 16 to <17.5 = 30 sessions; 17.5 to 18.5 = 25 sessions) to allow the time required to restore weight. The trial found no significant differences in outcomes between treatments, with significant improvements across all three [4].
Participants
The overall SWAN sample comprised 120 participants (95.8% female) who have been described previously and more fully in Byrne et al. [4].
Plain English Summary
People with anorexia nervosa often have difficulty thinking flexibly and in terms of the big picture. We investigated whether these thinking styles (1) change over treatment, (2) influence response to treatment, or (3) predict whether people gain weight or overcome the eating disorder. We found that people were able to think more flexibly after treatment. We also found that people who had more difficulty seeing the big picture prior to treatment had a more rapid decrease in eating disorder symptoms and clinical impairment in treatment. Thinking styles did not predict whether people gained weight early in treatment or overcame the eating disorder. Our findings suggest that the detail-focused thinking style commonly observed among people with anorexia nervosa can be both a vulnerability and a strength.
They had a mean age of 26.19 years (SD = 9.47) and a mean baseline body mass index (BMI) of 16.70 kg/m² (SD = 1.22). To be eligible for participation, they had to meet the Diagnostic and Statistical Manual of Mental Disorders (DSM-5; APA [21]) criteria for anorexia nervosa. Due to missing executive functioning data, the present study used a subset of participants from the SWAN sample (for demographic information, see Supplementary Table 1).
Measures
Participants completed the Wisconsin Card Sorting Test (WCST; [22]) and the Rey Complex Figure Test (RCFT; [23,24]) at baseline and end of treatment. Treatment outcome measures were completed at baseline, mid-treatment, end of treatment, six-month follow-up, and 12-month follow-up.
Executive functioning
Central coherence The RCFT was used to assess central coherence. In this test, the participant is instructed to copy a complex figure comprising 18 numbered elements as accurately as possible. Originally, participants were instructed to produce the complex figure twice (once copying and once from memory) and both drawings were used to measure central coherence. Now, the copy trial is considered to provide the most direct measure of central coherence [25]. A central coherence index (CCI) is obtained from the order that the participant copies the first six elements of the figure (i.e., whether they copy global or detailed elements) and the style that the participant uses to copy the figure (i.e., whether they copy key global elements in a continuous or fragmented way). The CCI can range from 0 to 2, with higher scores indicating better central coherence.
Set Shifting Set shifting was measured using the WCST. This test requires the participant to match a number of stimulus cards to one of four key cards. Cards can be matched by the colour, form, or number of symbols on the cards. The participant is not told how to match the cards. Instead, after each trial, they are told whether they were right or wrong and they must infer the sorting rule from this feedback. Each time that the participant correctly matches 10 cards in a row, the sorting rule changes. The participant is not told that the sorting rule has changed and must infer the new sorting rule from the experimenter's feedback. The measure of set shifting is the number of "perseverative errors" or times that the participant matches cards according to a previously correct sorting rule after the rule has changed [26]. A higher number of perseverative errors indicates poorer set shifting performance [27].
Treatment outcomes
BMI Participants' height was measured in metres (m) at baseline. Their weight was measured in kilograms (kg) at baseline, at each treatment session, and at each assessment time-point. BMI was calculated as kg/m².
Eating Disorder Psychopathology The global score of the Eating Disorder Examination (EDE; [28]) was used to measure eating disorder psychopathology over the past 28 days. This semi-structured interview was administered by the clinical coordinator at each site and produces four subscales: restraint, eating concern, shape concern, and weight concern. The global score is produced by summing the 22 items across the subscales and dividing by the number of items to obtain a mean scale score. This global score can range from 0 to 6, with higher scores indicating greater eating disorder psychopathology. The EDE has satisfactory internal consistency, discriminates well between people with eating disorders and healthy controls, and correlates with measures of similar constructs [29,30]. In the present study, internal consistency was 0.92.
Clinical Impairment The 16-item Clinical Impairment Assessment (CIA; [19]) was used to assess the extent of psychosocial impairment due to eating disorder psychopathology over the past 28 days. In this self-report questionnaire, items are rated from 0 to 3 and summed to produce a global score. This global score can range from 0 to 48, with higher scores indicating greater clinical impairment. The CIA has good internal consistency, discriminates between people with eating disorders and healthy controls, and is highly correlated with clinicians' ratings of psychosocial impairment [31]. In the present study, internal consistency was 0.92.
Early change
Following Wade and colleagues [32], we assessed early change using change in BMI over the first 13 sessions of treatment, as this represented the first "halfway point" of the 25 to 40 sessions offered in the SWAN trial. While shorter timeframes have been used in adolescents receiving Family Based Therapy (e.g., four sessions; [33,34]), 13 sessions accounted for the less intense nature of outpatient treatment for adults with anorexia nervosa and allowed more time for changes to be observed, as in a previous study of early change in adults [35]. Using the SWAN data, Wade and colleagues identified four latent classes, with one class having a significantly greater increase in BMI over the first 13 sessions than any other class. For the purposes of the current analyses, we dichotomised early change as 0 (no early change) and 1 (early change, i.e., the people with the greatest increase over the first 13 sessions).
Remission
Following Byrne et al. [4], remission was defined as having a BMI greater than 18.5, a global EDE score less than 1.8, and no binge/purge behaviours.
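To make the remission criterion concrete, the following minimal sketch flags remission from end-of-treatment data; the data frame and column names are hypothetical placeholders, not the trial's actual variable names.

```python
import pandas as pd

def remission_flag(df: pd.DataFrame) -> pd.Series:
    """Remission as defined above: BMI > 18.5, global EDE score < 1.8,
    and no binge/purge behaviours."""
    return (df["bmi"] > 18.5) & (df["ede_global"] < 1.8) & (df["binge_purge_episodes"] == 0)

# Hypothetical end-of-treatment data for three participants:
eot = pd.DataFrame({
    "bmi": [19.2, 18.1, 20.0],
    "ede_global": [1.2, 1.5, 2.4],
    "binge_purge_episodes": [0, 0, 0],
})
print(remission_flag(eot).tolist())  # [True, False, False]
```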
Statistical analyses
We conducted all analyses using the IBM Statistical Package for the Social Sciences (Version 22; IBM [36]). Logistic regression analyses were conducted to determine whether baseline variables predicted missing baseline or end of treatment data for both central coherence and set shifting, applying Bonferroni corrections for all comparisons. We conducted two linear mixed model (LMM) analyses to investigate whether executive functioning can change over treatment and whether change over treatment differed between groups; both had time and group as fixed effects, together with their interaction, with central coherence as the outcome variable in one analysis and set shifting in the other. To investigate whether baseline executive functioning moderated treatment outcome, we conducted a separate LMM analysis for each treatment outcome variable, with fixed effects of time and baseline central coherence or set shifting and their interaction. Group was not included as a fixed effect in any of the analyses examining early change or treatment outcomes because LMM analyses require a minimum of 10 cases for each effect examined. When significant interactions were observed, we categorised participants as having low or high baseline executive functioning using a median split. Finally, to investigate whether baseline executive functioning predicted early change or remission, we conducted logistic regression analyses.
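The following sketch shows how analyses of this general form can be specified in Python with statsmodels; the trial itself used SPSS, so this is only an assumed, illustrative translation, and the data frame, column names, and synthetic values below are placeholders rather than trial data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic long-format data: one row per participant per assessment point.
rng = np.random.default_rng(42)
rows = []
for p, cci in enumerate(rng.uniform(0.5, 1.8, 40)):        # 40 participants
    for t in (0, 1, 2):                                     # baseline, mid, end
        rows.append({"participant": p, "time": t, "baseline_cci": cci,
                     "ede_global": 3.5 - 0.9 * t - 0.4 * t * (1.2 - cci)
                                   + rng.normal(0, 0.4)})
long = pd.DataFrame(rows)

# Moderator-style LMM: fixed effects of time, baseline CCI and their interaction,
# with a random intercept for each participant.
lmm = smf.mixedlm("ede_global ~ time * baseline_cci", data=long,
                  groups=long["participant"]).fit()
print(lmm.params["time:baseline_cci"])

# Predictor-style logistic regression: does baseline CCI predict remission
# (here crudely proxied by a low end-of-treatment EDE global score)?
eot = long[long["time"] == 2].copy()
eot["remission"] = (eot["ede_global"] < 1.8).astype(int)
logit = smf.logit("remission ~ baseline_cci", data=eot).fit(disp=False)
print(logit.params)
```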
Missing data
The pattern of missing data for each variable is described below. Both variables had a large amount of missing data at end of treatment and hence our analyses used only participants who had both baseline and end of treatment central coherence and set shifting data (as previously mentioned, demographics are provided in Additional file 1: Supplementary Table 1).
Central coherence
Of the 120 participants, 41 (34%) had both baseline and end of treatment data, 43 (36%) were missing both baseline and end of treatment data, 1 was missing only baseline data, and 35 were missing only end of treatment data. Baseline EDE predicted missing baseline central coherence data, such that those with higher eating disorder psychopathology at baseline were less likely to have missing baseline central coherence data. However, as shown in Table 1, no baseline variables predicted whether end of treatment central coherence data were missing. We can, therefore, conclude that end of treatment central coherence data were missing at random, and analyses contained up to 76 (63%) of the study participants.

Table 1: Logistic regression analyses predicting missing baseline and end of treatment executive functioning data from baseline variables (columns: baseline; end of treatment). *Those with higher eating disorder psychopathology at baseline were less likely to have missing baseline central coherence data. For end of treatment, the analyses were only conducted for those who also had baseline data. OR = odds ratio; CI = confidence interval; AN = anorexia nervosa; AN-R = anorexia nervosa restricting subtype; AN-BP = anorexia nervosa binge-purge subtype; BMI = body mass index; EDE = Eating Disorder Examination; CIA = Clinical Impairment Assessment. Descriptive statistics for AN subtype are presented as frequency (percentage).
Set shifting
Of the 120 participants, 37 (31%) had both baseline and end of treatment data, 28 (23%) were missing both baseline and end of treatment data, 3 were missing only baseline data, and 52 were missing only end of treatment data. As shown in Table 1, no baseline variables predicted whether set shifting data were missing at baseline or end of treatment. Therefore, we can conclude that both baseline and end of treatment set shifting data were missing at random, and analyses contained up to 89 (74%) of the study participants. Table 2 provides the descriptive statistics and Table 3 the inferential statistics.
Aim 3 executive functioning as a predictor of early change and remission
Neither baseline central coherence nor set shifting predicted early change or remission at end of treatment, 6-month follow-up, or 12-month follow-up. Table 4 displays the descriptive and inferential statistics.
Discussion
We conducted exploratory, secondary data analyses to investigate whether executive functioning can change over the course of treatment, to determine whether baseline executive functioning moderated treatment outcomes, and to examine whether baseline executive functioning predicted early change or remission. While participants were a subset of those from a previous investigation [4], we believe this to be a representative sample, as central coherence and set shifting data were missing at random with one exception: those with higher eating disorder psychopathology at baseline were less likely to have missing baseline central coherence data. This may reflect the fact that the data came from a treatment study, and those with more severe eating disorder symptoms were attending all treatment/assessment sessions because they needed the support. This can be viewed as a strength, as our results may extrapolate to those with more severe symptomatology.
Executive functioning as an outcome
In line with research showing that executive functioning can change over treatment [13], we found that set shifting clearly improved from baseline to end of treatment. The results for central coherence were less clear cut, with improvements only noted in MANTRA and SSCM. Given that MANTRA explicitly targets thinking styles, we might expect to see central coherence most improved in this group. Rather, we found that central coherence improved more over treatment in SSCM. As mentioned, a core element of SSCM is supportive psychotherapy with up to half of each session focused on content dictated by the client [14][15][16]. It is possible that having to think through everything that is going on in one's life and prioritise which topics, issues, or concerns are most important to discuss in session each week may promote bigger picture thinking. Moreover, focusing on content that is most salient to the individual may enable them to work through the specific stuck points that are consuming their attention and getting in the way.
Executive functioning as a moderator
We found that people with low baseline central coherence had a greater decrease in eating disorder psychopathology and clinical impairment from baseline to 12-month follow-up than those with high baseline central coherence. A possible explanation for this finding is that the big picture of recovery may seem daunting and interfere with treatment progress, whereas an ability to focus on the details of changes that need to happen each week (e.g., changes in eating step by step) is what is needed and helpful for more rapid improvement. This finding supports the proposition that the detail-focused thinking style commonly observed among people with anorexia nervosa can be both a vulnerability and a strength. More specifically, while thinking in terms of details, for example, about food and weight could pose a vulnerability for the development and maintenance of anorexia nervosa, it could also break down the process of recovery into smaller, less overwhelming, and more achievable steps.
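As a rough, illustrative sketch of how such a moderation pattern could be probed (not the trial's actual analysis, whose model specification is not reproduced here), one simple approach regresses follow-up symptoms on baseline symptoms and the candidate moderator; the data file and column names below are hypothetical.

```python
# Minimal sketch (assumed data and column names): is baseline central coherence
# associated with 12-month eating disorder psychopathology after adjusting for
# baseline psychopathology (i.e., with the amount of change)?
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("trial_data.csv")  # hypothetical file, one row per participant

model = smf.ols(
    "ede_12m ~ ede_baseline + central_coherence_baseline + C(treatment_arm)",
    data=df,
).fit()

# A positive coefficient for central_coherence_baseline would be consistent with
# the reported pattern: lower (more detail-focused) baseline central coherence,
# greater decrease in psychopathology by 12-month follow-up.
print(model.summary())
```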
Executive functioning as a predictor
We found that baseline central coherence and set shifting did not predict early change or remission. This suggests that other variables may be more important or influential than baseline executive functioning. For example, baseline variables such as BMI, motivation, eating disorder psychopathology, depression diagnosis, self-esteem, and anorexia nervosa subtype have been shown to predict how well people with anorexia nervosa do in treatment [38][39][40].
Limitations
The present study had several limitations. First, participants were primarily white females. Thus, results are unable to be extrapolated to a more diverse sample of people with anorexia nervosa. Second, like many randomised controlled trials, strict exclusion criteria were applied (e.g., severe physical or mental illness such that outpatient treatment was inappropriate, current severe substance dependence, and current use of atypical antipsychotics because of the weight gain properties of these drugs). This means that the sample may not be representative of all people presenting for outpatient treatment of anorexia nervosa. Third, the substantial amount of missing executive functioning data limited our power to investigate moderation. It also may have introduced bias, but overall, the subset of participants included in our analyses were representative of the sample as a whole. Finally, results and p values should be interpreted with some caution due to the small sample size and exploratory nature of the study.
Conclusion
Our findings highlight several directions for future research. First, replication of our results is required before definitive conclusions can be made about whether set shifting can improve over treatment and whether baseline central coherence moderates treatment outcomes. Second, given the inconsistent findings regarding whether executive functioning can improve over treatment, it would be beneficial for future treatment studies to routinely assess executive functioning. Third, a recent meta-analysis by Keegan et al. [5] found that people with non-underweight eating disorders, such as binge eating disorder and bulimia nervosa, have central coherence and set shifting inefficiencies that are comparable to those observed among people with anorexia nervosa. Therefore, it would be of interest to examine whether the same pattern of results is obtained in the non-underweight group. Finally, and most importantly, our findings suggest some hypotheses for further testing in an adequately powered study. Specifically, while a detail-focused thinking pattern can pose a vulnerability for the development and maintenance of anorexia nervosa, it also offers a pathway to a focused drive and determination that can be weaponised against the eating disorder when directed towards recovery.
Additional file 1. Supplementary Table 1. Basic demographic information for participants who had both baseline and end of treatment central coherence and set shifting data.
Micro-CT Analysis of Bone Healing in Rabbit Calvarial Critical-Sized Defects with Solid Bioactive Glass, Tricalcium Phosphate Granules or Autogenous Bone
ABSTRACT Objectives The purpose of the present study was to evaluate bone healing in rabbit critical-sized calvarial defects using two different synthetic scaffold materials, solid biodegradable bioactive glass and tricalcium phosphate granules alongside solid and particulated autogenous bone grafts. Material and Methods Bilateral full thickness critical-sized calvarial defects were created in 15 New Zealand white adult male rabbits. Ten defects were filled with solid scaffolds made of bioactive glass or with porous tricalcium phosphate granules. The healing of the biomaterial-filled defects was compared at the 6 week time point to the healing of autologous bone grafted defects filled with a solid cranial bone block in 5 defects and with particulated bone combined with fibrin glue in 10 defects. In 5 animals one defect was left unfilled as a negative control. Micro-computed tomography (micro-CT) was used to analyze healing of the defects. Results Micro-CT analysis revealed that defects filled with tricalcium phosphate granules showed new bone formation in the order of 3.89 (SD 1.17)% whereas defects treated with solid bioactive glass scaffolds showed 0.21 (SD 0.16)%, new bone formation. In the empty negative control defects there was an average new bone formation of 21.8 (SD 23.7)%. Conclusions According to findings in this study, tricalcium phosphate granules have osteogenic potential superior to bioactive glass, though both particulated bone with fibrin glue and solid bone block were superior defect filling materials.
INTRODUCTION
There is a large and developing market for biodegradable bone substitutes in many fields of surgery. Special attention has been paid to developing bone substitutes for the cranio-maxillofacial skeleton, since bone defects requiring grafting are easily created in routine procedures such as cranioplasty and facial asymmetry corrections. Numerous surgical techniques have been developed to reconstruct intra-operatively created cranio-maxillofacial bony defects [1][2][3]. Although bone substitutes, such as tricalcium phosphate (TCP) and bioactive glass (BAG), are already widely used in humans, there are little published histological and radiological data comparing their bony healing. The in vivo rabbit calvarial critical-sized defect (CSD) model is well established and especially suitable as a cranial bone defect model, because its rapid healing process resembles that of a common human patient group, namely pediatric patients after cranioplasty. The rabbit CSD model serves as a basis to evaluate bone substitute products and compare their healing characteristics to the current gold standard, autogenous bone [4,5]. Inorganic synthetic bone substitutes have also been combined with autologous differentiated stem cells [6][7][8] or even with bone marrow or adipose derived stem cells without osteogenic induction [9][10][11], with the goal of speeding up the ossification process in larger bone defects. There is a growing interest in understanding the fate of implanted cells within a porous solid scaffold and how extracellular mineralization induced by the scaffold affects the cells [12][13][14][15][16][17]. BAG scaffolds are known to be non-cytotoxic, bacteriostatic and capable of supporting both cell attachment and proliferation in vivo. Early results indicate that the inclusion of BAG promotes precipitation of calcium phosphate on the scaffold surfaces, leading to earlier cell differentiation and matrix mineralization [18][19]. Micro-computed tomography (micro-CT) serves as a new and accurate tool to analyze bone healing and to evaluate bone formation on biomaterials such as TCP and BAG [20,21]. The aim of this study was to evaluate bone healing in rabbit critical-sized bicortical calvarial defects, comparing two different synthetic scaffold materials, solid bioactive glass and tricalcium phosphate granules, versus solid or particulated autogenous bone. Because granular materials were used (tricalcium phosphate and particulated autogenous bone chips), no mechanical testing was planned in this study, as granular scaffolds are characteristically non-load-bearing materials.
MATERIAL AND METHODS
Animal study
The following animal care and experimental protocol received ethical approval (Decision ESHL-2008-07701/Ym-23) from the Oulu University Hospital Ethical Committee. The study was performed in accordance with the Declaration of Helsinki and its later amendments. A total of 15 white New Zealand male rabbits, aged 6 months or older and weighing at least 3.5 kilograms, were included in this study. Anaesthesia was induced with a subcutaneous injection of 15 mg/kg ketamine (Ketalar 50 mg/ml, Pfizer Oy, Helsinki, Finland) and 0.25 mg/kg medetomidine (Domitor® vet 1 mg/ml, Orion Oyj, Espoo, Finland). The eyes were protected against drying by applying carbomer gel (Viscotears 2 mg/g, Alcon Finland, Vantaa, Finland). An intravenous catheter was inserted into the lateral ear vein and a continuous infusion of 0.9% sodium chloride solution (Natriumklorid 0.9%, Fresenius Kabi Ab, Helsinki, Finland) was given during the operation. Antibiotic prophylaxis with 60 mg/kg cefuroxime (Zinacef 750 mg, GlaxoSmithKline Oy, Espoo, Finland) was given intravenously before the operation as a single dose. The animals were protected against temperature loss with special covers and warming pads in standard animal laboratory manner. Under general anaesthesia, the fur on the planned operation area on the rabbit head was shaved and the skin cleaned properly with povidone iodide (Betadine 75 mg/ml, Oy Leiras Finland Ab, Helsinki, Finland) solution. A double cover and sterile instrumentation were used individually on each animal for the surgery according to the standard OR protocol. For local anaesthesia, 2 ml of lidocaine (Lidocaine c. adrenalin 2%, Orion Oyj, Espoo, Finland) was infiltrated into the skin around the planned incision line in the midline of the skull. Through an approximately 5 cm long sagittal incision of the skin the periosteum was elevated, and bilateral bicortical full-thickness circular critical-sized defects (15 mm in diameter) were created in each rabbit, producing a total of 30 defects. Five defects were left empty as unfilled negative controls. Ten defects were filled with autologous particulated calvarial bone combined with fibrin glue to fix the bone mass within the defects, mimicking particulate bone harvesting during cranioplasty. Five defects were filled with an autologous calvarial bone block, mimicking standard cranioplasty. Five defects were filled with solid bioactive glass scaffolds; among other constituents, the glass contained 0 - 0.6 wt% TiO2 and 48.5 - 52 wt% SiO2. The solid scaffolds were made of melt-spun bioactive glass fibers of 75 µm diameter which were sintered under defined conditions to produce a rigid scaffold with a total porosity of 70% [22]. Five defects were filled with 0.5 g of biphasic β-TCP granules (Straumann Bone Ceramic™, Straumann AG, Basel, Switzerland). The granules were 100% crystalline, being composed of 60% hydroxyapatite and 40% β-tricalcium phosphate. Straumann Bone Ceramic granules are available in 2 sizes: 0.4 to 0.7 mm and 0.5 to 1 mm. In this study granule sizes ranging from 500 to 1000 μm were used, while the pore size range was 100 to 500 μm. The total porosity of the product was 90% and the pores were interconnected.
After the operation the soft tissue and skin was sutured tightly to cover the operation areas with Vicryl ® 3-0 (Ethicon Inc., Somerville, New Jersey, US) resorbable sutures. All animals received intensive supervision and care at the animal care facilities 24 h/day for the first three days following the surgery and for weeks after surgery three times a day. For postoperative analgesia each animal was given 0.1 mg/kg s.c. buprenorphine (Temgesic ® , RB Pharmaceuticals Ltd, Slough, England, UK) and against opioid related intestinal motility problems 3 mg metoclopramide s.c. (Primperan ® 5 mg/ml, Sanofi-Aventis Oy, Helsinki, Finland,) three times a day for three postoperative days. Decrease in eating, drinking and moving or clear suffering from pain were determined to be the humane end points and the animals would have been terminated immediately if these signs were exhibited.
Qualitative evaluation of the samples following harvest
Healing of the calvarial CSDs was allowed to proceed up to the 6-week post-placement time point to evaluate the early-stage healing and ossification process in the defects. The animals were terminated by giving an overdose of pentobarbital (Mebunat® vet, Orion Oyj, Espoo, Finland) intravenously after sedation with a subcutaneous injection of 0.25 mg/kg medetomidine (Domitor® vet 1 mg/ml, Orion Oyj, Finland) and 15 mg/kg ketamine (Ketalar® 50 mg/ml, Pfizer Oy, Helsinki, Finland). Immediately after termination, the skulls of the animals were exposed and a parietal bone block including the defect area and its surrounding bone was taken as a specimen.
The specimens were fixed in 10% buffered formalin solution before histological preparation. All specimens were imaged using a micro-CT scanner prior to histological sectioning. One rabbit had traumatized its cranial wound, resulting in an ectopic position of the implanted scaffold; the consequent lack of contact with the surrounding bone disturbed healing. This defect was therefore excluded from the study.
Micro-CT imaging
After sacrifice, the calvarial bone blocks, including the bilaterally created defects with their filling materials, were harvested for ex vivo micro-CT imaging.
Radiological analysis
A volume of interest was manually selected within the defect, and fully mineralized bone was thresholded to calculate the ratio between the deposited bone volume and the defect volume. Analyses were conducted with CTAn (v. 1.14.4.1, Bruker micro-CT, Kontich, Belgium).
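As a rough illustration of this bone-fraction calculation outside CTAn, the sketch below thresholds a reconstructed micro-CT volume inside a defect mask; the file names, threshold value and mask are hypothetical placeholders, not the study's actual settings.

```python
# Minimal sketch (assumed data): fraction of the defect volume occupied by
# fully mineralized bone in a reconstructed micro-CT stack.
import numpy as np

volume = np.load("reconstructed_stack.npy")    # hypothetical 3D grey-value array
voi_mask = np.load("defect_voi_mask.npy")      # hypothetical boolean mask of the defect VOI

BONE_THRESHOLD = 120    # assumed grey-value cut-off for fully mineralized bone

bone_mask = (volume >= BONE_THRESHOLD) & voi_mask
bone_fraction = bone_mask.sum() / voi_mask.sum()    # deposited bone / defect volume

print(f"New bone formation: {100 * bone_fraction:.1f}% of the defect volume")
```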
Statistical analysis
Mean percentages of bone formation with standard deviation (SD) were calculated from the micro-CT analyses. A P-value of less than 0.05 was considered statistically significant.
RESULTS
With micro-CT, the negative control group revealed a noticeable variance in bone formation between individuals, 21.8 (SD 23.7)%, as presented in Table 1.
The micro-CT analysis at the 6-week post-implantation time point revealed new bone formation in all defects. Particulated bone with fibrin glue and solid bone block were superior to BAG and TCP (P = 0.012, P = 0.025, P = 0.019 and P = 0.024, respectively; Table 1). The micro-CT analysis also showed significantly more new bone formation with TCP granules than with BAG scaffolds (P = 0.024, Table 1).
DISCUSSION
In treating cranial deformities, often the cranial vault must be reshaped by either recontouring or by sectioning the cranial vault into pieces [23]. When fragments are reassembled there may be palpable or visible defects between the bone pieces which may be unsightly when the scalp and pericranial tissues are redraped over the recontoured bone. Such unattractive bony defects can be filled with fragments of bone, particulated pieces of bone and bone dust collected during craniotomy. Some clinicians collect bone dust and particulated bone during cranioplasty and mix it together with fibrin glue to form a slurry-like bone paste [24]. The fibrin glue in this paste helps fixate the bone pieces in the slurry and prevents their migrating from the wound. While the use of bone slurry is common practice in some craniofacial units, there is little evidence to show that it is beneficial. This study attempted to use an animal model to show that solid cortical bone grafts and bone slurry could have a role in managing cranial bony defects. This study also tried to illustrate the differences in bone defect healing with solid versus granular synthetic scaffolds.
Advances in imaging have led to improved resolution and to the ability to reveal both newly calcifying and already calcified tissue in healing bone defects.
Micro-CT proves to be a novel and accurate tool for quantitative analysis of bone formation (Figure 1) whereas traditional histology better illustrates the cellular changes and histological properties of the healing area. The micro-CT results of this study illustrate some of the key differences in bone defect healing when the defect is left empty or when a solid or granular scaffold is used.
Remarkably, the results showed a notable variance in new bone formation between individual rabbits with empty control defects. The percentages in micro-CT range from 1.9 to 57.6, which is due to individual variation in the healing of large bone defects. In some individual rabbits the defect heals much faster and better than in others. However, what is common in the negative control group is the pattern of healing: it proceeds from the periphery towards the center. Since these are critical-sized defects, the healing is incomplete at the six week time point. In contra-distinction to this it must be realized that an empty void defect is dramatically different from a defect filled with a solid material. In the case of a solid scaffold or a bone block, much of the defect is occupied by the solid portion of the implant. This leaves little empty space available for "new bone growth", which is the parameter of interest measured in this experiment. In the case of a slowly resorbing solid BAG scaffold the only space allowed for tissue ingrowth is either into the three-dimensional porous space within the scaffold or around the solid implant on its dural surface. Solid biodegradable materials obstruct the ingrowth of bone into the defect area by their sheer physical presence, unless the scaffolds resorb or have accessible porosity to allow the ingrowth. Thus sufficient blood flow through ingrowth of fibrous tissue and blood vessels is essential for the degradation and eventual replacement of a solid biomaterial by autologous tissues. In the case of solid bone blocks, which are analogous to replacing a devascularized bone flap during cranioplasty, these grafts become incorporated by replacement resorption and little physical space exists for new bone growth at the six week post-grafting time point. Granular materials are inherently different from solid configurations of the same material [25][26][27][28]. This is true for both bone and synthetic biomaterials. The difference is two-fold. There is a far greater surface area available for cellularization with the granular configurations than with solid structures. Moreover, there is space between the granules to permit autologous tissue ingrowth and new bone formation, which is evident at the six week time point for both particulate bone and β-TCP granules. The pore size of the implanted TCP granular scaffold plays an important role in revascularization, as shown earlier by other investigators [29]. In general, granular scaffolds are more quickly incorporated into the healing of a bony defect when compared to solid scaffolds. However, scaffolds in a granular format lack any load-bearing capacity, while scaffolds in a solid form possess physical properties that allow them to be used to replace large cranial defects despite their slow resorption and replacement compared to granular scaffolds [27,30].
The micro-CT images show the three-dimensional curved structure of the TCP granules, which permits locking of the particles, preventing their migration out of the bony defects. The ingrowth of fibrous tissue and new vessels between the granules enables the granules to remain locked with bone bridging, which makes the structure of the graft construct even more stable. On the other hand, mechanical stability and good vascularity seem to hasten the resorption of the BAG scaffold more than seen with the granules. Both biomaterials used in this study, TCP and BAG, seemed to induce islets of bone growth histologically underneath the implanted area. Whether this phenomenon is caused by local irritation or stimulation of the stem cells in the dura lying between the brain and the implant material remains unknown and will require more detailed investigation in the future.
A major limitation of this study is related to the absence of density differentiation between the assayed grafts, since the material density of autologous bone, tricalcium phosphate, and bioactive glass can be similar to that of the newly formed bone, thus rendering data interpretation difficult. In a future in vivo study the authors will attempt to apply synchrotron-based computed tomography imaging, where various artefacts can be avoided and superior resolution achieved, even though the study protocol might require the additional use of bone-deposition-seeking labels such as strontium. This should allow for improved separation between newly formed bone
and residual graft remains, thus greatly enhancing the relevance of the attained results. Every experimental model has its limitations [5]. The variability within the negative control group also raises the question of whether the model used truly reflects a critical-sized defect, since up to approximately 60% regeneration was attained. Longer time points such as 12 weeks would help answer the question of how complete the healing of an unfilled defect would be in the longer term. Larger sample sizes in the future may also help lessen the effect of such inter-subject variability. Other sources of variability may arise, for instance, from the defect location, the inclusion of cranial sutures, the presence of dural tears, from thermal damage to the wound by electrocautery or by heat generation during the drilling of bone.
CONCLUSIONS
The findings of this study suggest that particulated autogenous bone with fibrin glue and solid autogenous bone blocks were superior in new bone formation to bioactive glass and tricalcium phosphate, while tricalcium phosphate granules were found to be superior to bioactive glass with more new bone formation in the rabbit critical-sized defect model.
Gastroenterologist-level detection of gastric precursor lesions and neoplasia with a deep convolutional neural network
Background: Gastric precursor lesions and neoplasia with very subtle changes in the gastric mucosa can easily be missed or misdiagnosed under endoscopy. Here we developed an automatic real-time pattern recognition tool based on a convolutional neural network (CNN) algorithm to help endoscopists detect chronic atrophic gastritis (CAG) and gastric cancer (GC) lesions. Methods: The five-convolution-layer ZF model and the thirteen-convolution-layer VGG16 model were combined in our neural network. A total of 10,014 CAG and 3724 GC annotated images were used in the network training. Another independent set consisting of 50 CAG, 50 GC and 100 negative control images was used to evaluate the performance of the final network. Results: In CAG detection, the performance of our model was much better than the average performance of the 77 endoscopists in sensitivity, specificity and accuracy (95% versus 74%, 86% versus 82%, 90% versus 78%, respectively). In GC detection, the performance of our model achieved a slightly higher sensitivity (90% versus 87%), but a lower specificity (50% versus 74%) and accuracy (70% versus 80%) than the average performance of the 89 endoscopists. Conclusion: In conclusion, we provided a CNN-based computational tool to improve the detection of CAG and GC under endoscopy and simplify diagnostic procedures.
INTRODUCTION
Gastric cancer (GC) is reported as the fifth most commonly diagnosed malignancy in the world, with about one million new cases in 2012 (951,000 cases, 6.8% of the total) [1]. More than 70% of GC cases (677,000 cases) occurred in developing countries, and half occurred in Eastern Asia (mainly in China). Additionally, GC is the third leading cause of cancer death in both sexes worldwide (723,000 deaths, 8.8% of the total) [1]. However, the prognosis of GC varies a lot among different stages. The 5-year survival rate of early gastric cancer (EGC) almost exceeds 90%, whereas less than 20% of advanced GC patients can survive for more than 5 years [2,3]. Therefore, early detection and regular surveillance in the high-risk population are probably the most effective strategies to improve the survival rate of GC [6][7]. Particularly, atrophic gastritis (AG) is considered a necessary transitional step in gastric carcinogenesis [5,8], which is characterized by chronic inflammatory processes of the gastric mucosa that lead to the loss of glandular structure and a reduction of gastric secretory function [8]. One cohort study indicated that the annual incidence rate of GC in patients with AG is 4.5 times that in the general population [9]. Thus, accurate detection of AG along with regular surveillance and subsequent management could be very helpful to control GC in an early stage.
However, many premalignant lesions and EGCs with very subtle changes in the gastric mucosa demand extremely careful observation and inspection skills, and such lesions (especially superficial flat ones) may be easily missed or misdiagnosed by conventional white light imaging (WLI) endoscopy. Recently, a retrospective cohort study in England reported an endoscopy miss rate of approximately 8.3% in 2,727 patients with GC [10]. Various advanced endoscopic modalities have been developed to improve the detection rate and diagnostic accuracy, such as high definition endoscopy (HDE), narrow band imaging (NBI) [11], magnifying endoscopy, chromoendoscopy, etc. [12]. Nevertheless, the advanced endoscopic techniques are very expensive and additional operator training is also required, making wide utilization in primary health centers unfeasible. A readily accessible, cost-effective and comparatively reliable diagnostic approach for detecting premalignant lesions and EGC is strongly needed.
Due to the development of big data, deep learning algorithms have become a research focus of artificial intelligence [13]. Convolutional neural networks (CNNs), known as the most successful deep learning strategy applied to image classification [14], have brought about a revolution in computer vision [13]. CNNs can automatically extract a set of transformations from input data and avoid manual design of specific feature detectors [14]. Using CNNs to analyze biomedical images has become more and more popular in many clinical scenarios, such as classification of histologic and histopathologic images [15,16], diagnosis of Alzheimer disease [17][18][19][20], differentiating breast lesions [21] and recognition of skin cancers [22]. However, few works have explored the automatic diagnosis of gastric premalignant lesions and neoplasia.
In this study, we constructed two independent gastrointestinal (GI) image datasets, and fine-tuned two types of deep learning models named ZF [23] and VGG16 [24] based on Faster R-CNN (Faster region-based convolutional neural network) [25] to identify CAG and GC lesions. The results were compared with GI doctors of different seniority to evaluate the performance of those models.
Patient information and study design
We conducted a single-center, retrospective diagnostic study, which was performed after the protocol was approved by the Institutional Review Board and ethics committees of Beijing Friendship Hospital, Capital Medical University. We reviewed our endoscopic database to identify all patients with a diagnosis of CAG and GC (in both early and advanced stages) from January 1, 2013 to June 10, 2017. Informed consent was not required because only de-identified patient data were obtained.
Data acquisition and processing
Distinct endoscopic images and relevant medical records of the patients were extracted when they fulfilled either of the following criteria: (1) Diagnosis of CAG was proved by histological classification according to the Updated Sydney System after applying its gastric biopsy sampling protocol [26]. (2) Diagnosis of GC was endoscopically confirmed jointly by any two out of the nine specific GI experts certified by the Chinese Gastroenterological Endoscopic Society. The diagnostic criteria were mainly based on personal experience according to morphological features of lesions, with or without pathological results.
Exclusion criteria were: (1) The biopsy sites of CAG did not strictly adhere to the standard endoscopic biopsy sampling procedure mentioned in the Updated Sydney System, or lesions in the endoscopic images were difficult to identify; (2) The endoscopic images showed indistinct involvement of GC; (3) The patients had comorbid malignancy of other systems; (4) The endoscopic images appeared unclear and/or the shooting angles did not meet our requirements.
For both CAG and GC detection tasks, we established a training dataset and a testing dataset with non-overlapping requested images (Figure 1A). The testing datasets, which consisted of equivalent numbers of positive and negative samples, were prepared for model validation. As for CAG, we used images of chronic superficial gastritis (CSG) as its negative samples. With respect to negative samples for GC, we enrolled images that fulfilled at least one of the following diagnoses: benign gastric ulcers and polyps, gastric stromal tumors and gastric heterotopic pancreas (all proved by histopathological results) (Supplementary Table S1). All images were de-identified immediately.
Data annotation
Six experienced endoscopists were recruited to annotate images in the training datasets with bounding boxes. The boxes were supposed to be drawn at the exact biopsy sites according to both endoscopic descriptions and histological results. Each image was annotated by two endoscopists back-to-back (see Supplementary Materials). If the bounding boxes annotated by the two endoscopists did not differ much from each other, the intersection area of both boxes was adopted as the final annotation. If the boxes overlapped poorly (intersection area < 50% of the total area), such images were picked out and discussed together by all 6 endoscopists mentioned above. As a result, 364 images of CAG and 92 images of GC were afterwards discussed together by all the doctors.
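A minimal sketch of this merging rule is shown below, assuming axis-aligned boxes given as (x1, y1, x2, y2) and reading "total area" as the union of the two annotators' boxes; both assumptions are mine, since the paper does not spell out the box format or the exact denominator.

```python
# Minimal sketch (assumed conventions): keep the intersection of two annotators'
# boxes when they overlap well, otherwise flag the image for group discussion.
def box_area(b):
    return max(0.0, b[2] - b[0]) * max(0.0, b[3] - b[1])

def intersection(b1, b2):
    x1, y1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    x2, y2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    return (x1, y1, x2, y2) if x2 > x1 and y2 > y1 else None

def merge_annotations(b1, b2, min_overlap=0.5):
    inter = intersection(b1, b2)
    inter_area = box_area(inter) if inter else 0.0
    union_area = box_area(b1) + box_area(b2) - inter_area
    if union_area > 0 and inter_area / union_area >= min_overlap:
        return inter          # well-overlapping boxes: keep the intersection
    return None               # poorly overlapping: flag for group discussion

print(merge_annotations((10, 10, 110, 110), (30, 20, 130, 120)))  # -> (30, 20, 110, 110)
```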
Modified faster R-CNN
An overview of the structure of Faster R-CNN is shown in Figure 1B. In the training process, we compared a five-convolution-layer ZF model [23] with a thirteen-convolution-layer VGG16 model [24], both based on Faster R-CNN. We first pre-trained both the ZF and the VGG16 model with the ImageNet dataset and randomly initialized all new layers by drawing weights from a zero-mean Gaussian distribution with a standard deviation of 0.01. We fine-tuned all layers of the region proposal network (RPN) and the final bounding box regression and classification layers at the same time, so that the standard four-step alternating algorithm of the Faster R-CNN training process could be modified into an end-to-end one. The end-to-end process was more efficient, speeding up the training procedure and yielding higher detection quality. We also applied the 'image-centric' sampling strategy. A number of mini-batches were generated from a single image that contained many positive and negative anchors. We randomly sampled 256 anchors in each image to compute the loss function of a mini-batch, where the ratio of positive to negative anchors was 1:1. If there were fewer than 128 positive samples in an image, we padded the mini-batch with negative ones. A learning rate of 0.001 was used for the first 50k mini-batches and 0.0001 for the subsequent ones.
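The image-centric anchor sampling rule described above (256 anchors per image, 1:1 positive:negative ratio, padding with negatives when positives are scarce) can be sketched as follows; the anchor label array is a toy placeholder, not the actual RPN output.

```python
# Minimal sketch (assumed inputs): image-centric sampling of 256 anchors per
# image for the RPN loss, with at most 128 positives and the rest negatives.
import numpy as np

def sample_anchors(labels, batch_size=256, rng=np.random.default_rng(0)):
    """labels: 1D array with 1 = positive anchor, 0 = negative anchor."""
    pos_idx = np.flatnonzero(labels == 1)
    neg_idx = np.flatnonzero(labels == 0)

    n_pos = min(len(pos_idx), batch_size // 2)   # at most 128 positives
    n_neg = batch_size - n_pos                    # pad the rest with negatives

    chosen_pos = rng.choice(pos_idx, size=n_pos, replace=False)
    chosen_neg = rng.choice(neg_idx, size=n_neg, replace=False)
    return np.concatenate([chosen_pos, chosen_neg])

anchor_labels = np.array([1] * 40 + [0] * 5000)   # toy image with 40 positive anchors
minibatch = sample_anchors(anchor_labels)
print(len(minibatch), (anchor_labels[minibatch] == 1).sum())  # 256 anchors, 40 positives
```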
The trained network was then used to perform the CAG and GC detection tests with a classification-score threshold of 0.85. A test image was considered negative if no suspicious lesion was detected in it. Four statistical parameters, TP, FN, FP and TN, were calculated (TP, true positive; FN, false negative; FP, false positive; TN, true negative). The final accuracy on the different testing datasets was reported to characterize the performance of Faster R-CNN.
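A sketch of how per-box detection scores are turned into an image-level call at the 0.85 threshold and accumulated into TP/FN/FP/TN counts is given below; the detection tuple format is an assumption for illustration.

```python
# Minimal sketch (assumed detection format): an image is called positive if any
# detected box has a classification score >= 0.85; counts are accumulated
# against each test image's reference label (positive lesion present or not).
THRESHOLD = 0.85

def image_prediction(detections, threshold=THRESHOLD):
    """detections: list of (x1, y1, x2, y2, score) tuples for one image."""
    return any(score >= threshold for *_, score in detections)

def confusion_counts(predictions, references):
    tp = sum(p and r for p, r in zip(predictions, references))
    fn = sum((not p) and r for p, r in zip(predictions, references))
    fp = sum(p and (not r) for p, r in zip(predictions, references))
    tn = sum((not p) and (not r) for p, r in zip(predictions, references))
    return tp, fn, fp, tn
```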
Performance evaluation of GI doctors
Every GI doctor involved in the validation tasks was assigned the same testing dataset as the computational models. Their performance was first evaluated by overall sensitivity, specificity and accuracy against histopathological diagnosis. Besides, all the doctors in each test were stratified into four levels (from Level I to Level IV) by years of endoscopic operation (Level I, < 5 years; Level II, 5-10 years; Level III, 10-15 years; Level IV, ≥ 15 years). We further estimated the average diagnostic reliability within each of the four levels.
Statistical analysis
We evaluated the performance of the networks and GI doctors by calculating sensitivity, specificity, accuracy, positive predictive value (PPV), and negative predictive value (NPV). Inter-observer agreement of GI doctors was evaluated by the Fleiss kappa measurement (more than 2 observers) with 95% confidence intervals [27]. Interpretation of kappa values was done according to Landis and Koch [28]. Comparisons between the best computational model and GI doctors in sensitivity, specificity, and accuracy were analyzed by Pearson's chi-squared test. A P-value < 0.05 was considered statistically significant.
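The evaluation metrics named above follow directly from the confusion counts; the sketch below spells them out, with illustrative counts that are not the study's actual results.

```python
# Minimal sketch: sensitivity, specificity, accuracy, PPV and NPV from the
# confusion counts (tp, fn, fp, tn) of the image-level decisions.
def metrics(tp, fn, fp, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy":    (tp + tn) / (tp + fn + fp + tn),
        "ppv":         tp / (tp + fp),
        "npv":         tn / (tn + fn),
    }

# Illustrative counts only (50 positive and 50 negative test images assumed).
print(metrics(tp=47, fn=3, fp=7, tn=43))
```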
Description of datasets
In total, 10,064 annotated images of CAG with definite histopathological results were obtained, among which 50 images were randomly set aside for testing. Another 50 images of CSG were also included as negative samples in the testing dataset. We then input the remaining 10,014 CAG images into the Faster R-CNN network for training. Examples of annotated images of CAG are shown in Figure 2A.
Similarly, 3774 annotated images of GC concurrently diagnosed endoscopically by GI experts were collected. To be specific, 1540 images were diagnosed as EGC, among which 462 images had definite pathological verification. We therefore randomly extracted 50 out of the 3774 images and combined them with an additional 50 images of the non-cancerous lesions mentioned above to set up the testing dataset. The remaining 3724 GC images were used for training. Examples of annotated images of GC are shown in Figure 2B.
Performance of faster R-CNN
For both CAG and GC detection, the ZF and VGG16 models were trained within a Faster R-CNN architecture. The performance of these models was evaluated on the testing dataset. We chose the model with the best accuracy to represent the final performance and to compare with GI doctors (Table 1).
Performance of GI doctors
There were 77 and 89 GI doctors of different seniority taking the CAG and GC detection tests, respectively. Their baseline characteristics are presented in Supplementary Table S3.
For GC detection, the sensitivity, specificity and accuracy respectively ranged within 48%-100% (median 88%, average 87%), 0%-62% (median 78%, average 74%), and 35%-73% (median 82%, average 80%). The sensitivity of Level I to Level IV was 84.6%, 83.6%, 89.6% and 87.4%, respectively. The specificity ranged from 68.8%, 70.0% and 73.2% to 79.4%, while the accuracy was 76.7%, 76.8%, 81.4% and 83.4%, respectively. All three parameters presented largely incremental trends in the bar chart from Level I to Level IV, in spite of a slight reduction in sensitivity from Level III to Level IV. The inter-observer agreement of doctors at different levels regarding the diagnosis of CAG and GC is listed in Supplementary Table S4. The best agreement was obtained by doctors in Level IV.
Comparison of Performance between Faster R-CNN and GI Doctors
Compared with the average performance of the 77 doctors, the performance of the best model is much better in sensitivity, specificity and accuracy (95% versus 74%, 86% versus 82%, and 90% versus 78%, respectively).
After classifying all the GI doctors based on seniority, an overview of the performance of the optimal model and the 77 doctors is illustrated in Supplementary Figure S1A. Both the TP and TN of the optimal network exceed those of the Level IV doctors, indicating that the sensitivity and specificity, as well as the accuracy, of Faster R-CNN have already reached expert level.
Statistical differences between the network and different-level doctors are observed in the sensitivity (all levels, Ps < 0.05) and accuracy (all levels, Ps < 0.05, except Level III, P = 0.103). However, there is no significant difference in specificity (all levels, Ps > 0.05, except Level II, P = 0.034; shown in Figure 3A). For GC detection, apart from a slightly elevated sensitivity (90% versus 87%), the specificity and accuracy of our network are significantly lower than the average performance of the doctors (50% versus 74%, 70% versus 80%, respectively).
An overview of the performance of Faster R-CNN and the 89 doctors classified into different levels is shown in Supplementary Figure S1B. The sensitivity of our network is equal to that of the GI experts (Level IV doctors), while the specificity is much lower.
There is no significant difference between the network and doctors of different levels in sensitivity (all levels, Ps > 0.05) or accuracy (all levels, Ps > 0.05, except Level IV, P = 0.03). However, a significant difference in specificity was observed (all levels, Ps < 0.05). The specificity of our network is much lower than that of doctors at all levels (Figure 3B).
DISCUSSION
In this study, we have provided an automatic real-time lesion detector focusing on endoscopic diagnosis of CAG and GC based on deep convolutional neural network algorithm.The performance of our model was also evaluated and compared to board-certified GI doctors of different seniorities.
For CAG detection, the best model achieved gastroenterologist-level performance. The sensitivity, specificity and accuracy (95%, 86% and 90%, respectively) of this model all exceed those of the Level IV (highest-seniority) doctors. For GC detection, the best model achieved superior sensitivity (90% versus 87%) but inferior specificity and accuracy (50% versus 74%, 70% versus 80%, respectively) compared with the average performance of all 89 doctors. These results suggest that the network made positive diagnoses as often as possible and consequently aggravated misdiagnosis of non-cancerous lesions. Considering that it is standard procedure to take subsequent biopsies to make a definite diagnosis before treatment, the overdiagnosis of our model to ensure a high sensitivity would be acceptable for endoscopists. Some GCs showed only very slight mucosal changes, especially some superficial lesions, which brought challenges to endoscopists. For these lesions, our network may serve as a useful complement to human eyes. Some images of unnoticeable GCs correctly detected by our network are shown in Supplementary Figure S2. We also extracted the negative cases misdiagnosed as GCs by the network (Supplementary Figure S3). Because of their high morphological similarity with GCs, inputting a certain number of such images for training would be helpful to reduce the false positive rate. Besides, the inter-observer agreement is unsatisfactory even among Level IV doctors (kappa, 0.584). In contrast to the obvious diagnostic variations of doctors, our network is stable, uniform and repeatable.
Although several studies have reported applying computer-aided systems to classifying colonic [29,30] and pancreatic lesions [31][32][33][34], little work has addressed gastric premalignant lesions and neoplasia. Besides, most methods mentioned above focused on differentiating, not detecting. Real-time detection requires a much stronger capacity for pattern recognition than differentiation, which may be far beyond the ability of traditional machine learning models based on k-Nearest Neighbor (kNN) and Support Vector Machine (SVM) classifiers. In this study, we constructed several models based on CNNs, the most powerful deep learning algorithms at present [13]. To achieve superior stability, reliability and accuracy, we trained a modified Faster R-CNN with a database of 13,738 endoscopic images (including 10,014 images for CAG detection and 3724 images for GC detection), which is hundreds of times larger than those of previous studies.
Traditional algorithms usually demand manual extraction of domain-specific visual features, followed by a further classification step. Therefore, their application may be greatly restricted by the inability to automatically discover and locate lesions. Additionally, since most of the precancerous lesions and early neoplasia present as subtle alterations of mucosal morphology and color, they are quite difficult to detect precisely with WLI alone during examination, even for some experts. Thus, we designed and implemented an automatic real-time lesion detector for CAG and GC, taking advantage of Faster R-CNN's ability to learn independently from sources requiring little pre-processing.
CAG lesions often present as diffuse mucosal changes, which makes it much more difficult to delineate the outline of the lesion than for other diseases such as GC. Bounding boxes for annotating positive samples were placed manually at biopsy sites, which usually only cover the most severe area of the lesions. However, in the training process we further generated a group of candidate bounding boxes widely distributed over the whole image based on the 'anchor rules' mentioned in the previous study [25]. We then labelled them as positive/negative according to the Intersection over Union (IoU) between the ground-truth boxes and each of them. After that, they were randomly selected and input for training. Consequently, some candidate bounding boxes labeled negative may actually include sporadic atrophic lesions, leading to a small amount of noise in the negative samples. Given the atypical characteristics and small percentage of such noise, our network can adjust and converge to a relatively satisfactory state after a long period of self-learning (Supplementary Video S1). Expert-level performance in CAG detection also proves a strong ability to correctly distinguish positive samples from negative ones.
In GC detection, we enrolled both early and advanced GC diagnosed endoscopically together by two of the nine certified GI experts, with or without pathological results, in order to make full use of all data. This means the training set may include some over-diagnosed images, resulting in enhanced sensitivity and underestimated specificity and accuracy of our network when tested against the histopathological standard. A larger number of GC images with definite histopathological results would be needed in the training dataset to obtain a better network. Additionally, all images of EGC would be picked out to improve the detection rate of the most controversial but fatal cancerous lesions.
The primary motivation for designing a real-time computer-aided lesion detector in endoscopy is to assist young GI doctors in discovering and precisely locating gastric precursor lesions and neoplasia, especially EGCs. We further hope to diminish the false dismissal rate and misdiagnosis rate of EGC, as well as helping to direct specific biopsy sites. In a real clinical environment, the trained network would be converted into software and integrated with the endoscopic operating system. There is no need to alter the endoscopic examination protocol or hardware to use our system. All we need is WLI rather than any other advanced endoscopic equipment. After a long-term follow-up observation, the anticipated reduction in false dismissal rate and misdiagnosis rate of CAG, EGC and AGC (advanced GC) would be statistically calculated. The difference in overall medical cost per patient between the novel endoscopic diagnosis procedure and the conventional one will also be analyzed. Moreover, the system could also be used as an educational tool, speeding up the learning curve of endoscopic beginners.
For CAG detection, our network outperforms GI experts in sensitivity, specificity and accuracy. For GC detection, our network has superior sensitivity but inferior specificity and accuracy compared with GI experts. In conclusion, we provided a deep learning based computational tool to improve the detection rate of CAG and GC, simplify diagnostic procedures and target subsequent biopsy.
Figure 1. Schematic illustration of data composition and processing and an overview of the Faster R-CNN structure. (A) Composition of the training and testing datasets for both CAG and GC and post-processing before training. 10,064 images of CAG were extracted from our endoscopic database, among which 50 images were randomly set aside. The other 10,014 images consisted of the training dataset for CAG. The testing dataset contained another 50 images of CSG randomly selected as negative samples and the 50 images of CAG picked out before. Similarly, 3774 images of GC were extracted and we picked out 50 images randomly from them. The testing dataset of GC contained an additional 50 non-cancer images randomly selected and the 50 images of GC picked out before. All the images included were de-identified immediately and then annotated by six endoscopic experts according to a back-to-back protocol. (B) The two-stage principle of Faster R-CNN in lesion location. The first stage uses feature maps of the last convolution layer to generate candidate ROIs. The second stage accomplishes lesion recognition, positioning and classification. R-CNN: Faster region-based convolutional neural network; CAG: chronic atrophic gastritis; GC: gastric cancer.
Figure 2. Examples of annotated images for both CAG and GC. (A) Examples of images in the training dataset for CAG with bounding boxes. Each box was located manually in the exact biopsy site according to endoscopic descriptions and histological results, followed by a cross-contrast procedure. (B) Examples of images in the training dataset for GC with bounding boxes. The images were annotated manually with the boxes in the biopsy sites and as large as possible without exceeding the boundary of the lesions. CAG: chronic atrophic gastritis; GC: gastric cancer.
Highly protein-loaded melt extrudates produced by small-scale ram and twin-screw extrusion - evaluation of extrusion process design on protein stability by experimental and numerical approaches
Understanding the generation, extent and location of thermomechanical stress in small-scale (< 3 g) ram and twin-screw melt extrusion is crucial for mechanistic correlations with the stability of protein particles (lysozyme and BSA) in PEG matrices. The aim of the study was to apply and correlate experimental and numerical approaches (1D and 3D) for the evaluation of extrusion process design on protein stability. The simulation of thermomechanical stress during extrusion raised the expectation of protein degradation and protein particle grinding during extrusion, especially when TSE was used. This was confirmed by experimental data on protein stability. Ram extrusion had the lowest impact on protein unfolding temperatures, whereas TSE showed significantly reduced unfolding temperatures, especially in combination with screws containing kneading elements. In TSE, the mechanical stress in the screws always exceeded the shear stress in the die, while mechanical stress within ram extrusion was generated in the die only. As both extruder designs revealed homogeneously distributed protein particles over the cross section of the extrudates for all protein loads (20–60%), the dispersive power of TSE turned out not to be decisive. Consequently, the ram extruder would be favored for the production of stable protein-loaded extrudates at small scale.
Introduction
The biopharmaceutical sector has grown rapidly over the last two decades (Martins et al., 2022). However, compared to small-molecule based drug products, biopharmaceuticals are more complex and mostly have to be administered parenterally as liquid formulations (Antosova et al., 2009). One of the challenges is that many protein-based drugs are not stable in liquid formulations. Therefore, the solid state is preferred for a wide range of protein-based drugs in biopharmaceutical development (Lai and Topp, 1999; Pikal, 2009). Physical and chemical stability of protein-based drugs is considered a bottleneck for successful biopharmaceutical development. Even in the solid state, protein degradation can occur during the whole life cycle of a protein formulation (Manning et al., 2010; Pikal, 2009). As the thermal stability of a protein depends on the relation between the degree of protein degradation and the process-related stress applied, the unfolding behavior of a protein is commonly studied.
Abbreviations: ASD, amorphous solid dispersion; BSA, bovine serum albumin; BSE, backscattered electron; D, dimensional; EDX, energy-dispersive X-ray; HME, hot-melt extrusion; LVR, linear viscoelastic range; MRT, mean residence time; PEG, polyethylene glycol; RTD, residence time distribution; SEM, scanning electron microscopy; SME, specific mechanical energy; TSE, twin-screw extrusion; TTS, time-temperature superposition.
In pharmaceutical industry, hot melt extrusion (HME) is mainly used as a continuous and robust manufacturing technology for the production of solid dosage forms (Crowley et al., 2007). In the last few years, the application of HME has been expanded to biopharmaceutics and can be used for the production of long-term release systems for parenteral administration (e.g., protein-loaded implants) or for solid-state protein stabilization by embedding proteins in polymeric carriers (Ghalanbor et al., 2010, 2012; Vollrath et al., 2017).
In early formulation development, 9- or 12-mm co-rotating twin-screw extruders (TSE) are frequently used for the production of amorphous solid dispersions (ASD), mainly resulting in a solubility enhancement of poorly water-soluble drugs, but they still require batch sizes of about 20-30 g, which result in substantial amounts of drug substance and development costs, respectively (Jiang et al., 2023; Zecevic and Wagner, 2013). The amount of protein-based drug candidates within early formulation development studies, however, is often limited and thus small-scale HME is particularly relevant (Dauer et al., 2021). Small-scale HME such as ram extrusion and TSE ideally enables the processing of batch sizes below 3 g accompanied by high yields of the produced protein-loaded extrudate under short processing times of <3 min (Dauer et al., 2022). The aim is a homogeneous embedding of protein powder particles in a polymeric matrix by HME without negatively affecting the protein stability. The embedding of protein drugs in polymeric or lipid-based matrices by HME approaches at a small-scale level has been described in previously published works. Ghalanbor et al. assessed the feasibility of HME for preparing lysozyme-PLGA implants and showed a complete recovery of active lysozyme from PLGA implants (Ghalanbor et al., 2010), whereas Cossé et al. used mini-scale twin-screw extruders for the preparation of BSA-PLGA implants, with a special focus on the erosion properties and the in vitro release of the embedded BSA (Cossé et al., 2017). Another working group introduced the production of solid lipid implants (SLI) containing 10 to 20% of either a lyophilized mAb or a Fab fragment by small-scale TSE (5 mm) at 35 °C and 40 rpm. The analytical investigations revealed a process-related impact on the physical protein stability, as there was a loss in the monomer content of both the mAb and the Fab fragment and changes in secondary structure elements of the mAb (Vollrath et al., 2017). The use of novel, biodegradable phase-separated poly(ε-caprolactone-PEG)-block-poly(ε-caprolactone) ([PCL-PEG]-b-[PCL]) multiblock copolymers with different block ratios and a low melting temperature (49 to 55 °C) for the production of protein-loaded implants was successfully shown by Stanković et al. The spray-dried protein powders goserelin, insulin, lysozyme, carbonic anhydrase, and albumin were incorporated into the polymers, whereby all proteins completely preserved their structural integrity as determined after extraction of the proteins from the polymeric implants (Stanković et al., 2013; Stanković et al., 2015).
The works published so far are limited to the production of protein-loaded extrudates by small-scale HME and characterization of the implants in terms of protein stability, protein recovery, and protein release. A mechanistic understanding and investigation of how process parameters such as type of extruder, feed rate, residence time distribution, and process-related stress factors (thermomechanical stress profiles along the ram extruder or TSE barrel, or in the extruder die) affect protein stability remain "black boxes". Apart from that, the use of computational simulations to gain better HME process understanding, and the correlation of the simulation data with experimental results on protein stability, has hitherto not been considered.
One major issue in small-scale extrusion, especially when 5-mm TSE are used, has been identified as a long residence time of up to 5 min, which subsequently results in an increased thermomechanical stress and a potentially negative impact on protein stability (Cossé et al., 2017; Patil et al., 2016). It is therefore crucial to carefully balance the process parameters: (i) feed rate, (ii) screw design, (iii) screw speed, (iv) die geometry, and (v) L/D ratio, in order to avoid spots of elevated shear and/or thermal stress along the process. As melt temperature and pressure in small-scale extrusion are usually determined in the die only, information on the extent and location of the above-mentioned hot spots along the process is easily missed. The gap towards a mechanistic understanding of the interactions between process variables and quality attributes therefore needs to be filled via numerical simulation (Bochmann et al., 2018; Emin et al., 2021; Emin and Schuchmann, 2013a; Zecevic and Wagner, 2013). The present study evaluates ram extrusion and TSE as small-scale extrusion designs to produce protein-loaded extrudates. During ram extrusion, a powder mixture is loaded into a heated barrel where a piston is forced down onto the molten material (Dauer et al., 2022). The main shear stress generated during ram extrusion arises where the reservoir of the ram extruder tapers into the die section. On the contrary, a TSE provides the opportunity of dispersive and distributive mixing (Wilson et al., 2012; Zecevic et al., 2018). The powdered starting material is fed into the feed zone and transported to the subsequent zones by the turning motion of the screw along the barrel under pre-heating (Bravo et al., 2000). This process of conveying introduces mixing and heat into the material through both external heaters and viscous heat dissipation (Crowley et al., 2007). In the die zone, head pressure is developed, which is determined by several factors: (i) the molten blend viscosity, (ii) the flow rate of the molten blend, and (iii) the die temperature (Gedde et al., 2021). TSE provides a continuous system with much better mixing, shorter residence times, ease of material feeding, and higher kneading (distributive) and dispersion capacities than injection molding or ram extrusion (Wilson et al., 2012).
For our study, we used the hydrophilic polymer polyethylene glycol (PEG) 20,000 and the two model proteins lysozyme and bovine serum albumin (BSA) for the production of protein-loaded extrudates by ram extrusion and TSE. PEG 20,000 was selected as a challenging polymer, since it exhibits a complex crystallization and melting behavior (Paberit et al., 2020) and thus a narrow extrusion process window. The low melting temperature of PEG 20,000 enables extrusion at temperatures below 65 °C, which in combination with a short residence time can minimize the risk of heat-induced protein degradation during HME processing. PEG 20,000 is waxy and shows a very high viscosity below the melting temperature (not extrudable due to an excessive torque) and a very low viscosity above the melting temperature (not extrudable due to liquefaction). Additionally, the screw configuration (conveying screws, or screws with a single 90° kneading element) as well as the screw speed influence the resulting mechanical stress on polymer and protein particles (Emin and Schuchmann, 2013b). The present work aimed to include relevant extrusion process parameters such as: (i) screw configuration, (ii) screw speed, and (iii) residence time distributions (RTD) to facilitate potential correlations of the parameters with experimental data on polymer and protein characteristics: (i) rheology, (ii) melting temperature shifts, (iii) protein recovery rates, and (iv) biological activity. The extrusion experiments were compared with the computations of the 1D and 3D simulation software Ludovic® and Ansys Polyflow®, respectively. The goal was to enable a fundamental and early starting point for the production of protein-loaded extrudates with sufficient protein stability prior to protein formulation development of long-term release systems by small-scale HME processing.
Materials
Lysozyme from chicken egg-white (Cat. No. L6876) was obtained from AppliChem GmbH (Darmstadt, Germany). Polyethylene glycol (PEG) 20,000 was obtained from Carl Roth (Karlsruhe, Germany). Bovine serum albumin (BSA) was purchased from Merck KGaA (Darmstadt, Germany). All chemicals were of analytical grade or equivalent purity.

Preparation of physical mixtures

The protein powders were sieved to remove larger particles. For the preparation of protein-loaded extrudates, a physical mixture composed of the polymer and 20, 40, or 60% w/w of either lysozyme or BSA powder was blended for 10 min at 50 rpm using a Turbula mixer (Willy A. Bachofen AG, Muttenz, Switzerland). The physical mixtures of PEG 20,000 and protein powder (Table 1) were either ram extruded or hot melt extruded using twin-screw extrusion (TSE).
A self-built ram extruder was used for the preparation of protein-loaded extrudates with lower shear stress, as previously described by Dauer et al. (2022). A barrel of 10 cm length with an inner bore of 10 mm diameter comprised three heating zones and was equipped with a cylindrical die (diameter 1 mm, length 7 mm). Extrudates were prepared by feeding 3 g of the physical mixture into the 10-mm bore of the ram extruder. The temperature of the first segment (filling zone) was set to 58 °C, and the other two segments were set to 63 °C. The mixture was molten for three minutes and the blends were then extruded through the die by a driving piston at a speed of 1 mm/s. TSE was performed using a 5 mm co-rotating twin-screw extruder (ZE 5, Three-Tec GmbH, Seon, Switzerland) with a length-to-diameter ratio (L/D) of 15:1 and two heating zones. The outlet of the extruder was equipped with a 1 mm die, and either a conveying screw configuration or a screw configuration with a single kneading element was used, as presented in Fig. 1. The physical mixtures were fed into the extruder using a powder belt conveyor (GUF-P Mini AD / 475 / 75, mk Technology Group, Troisdorf, Germany) at constantly kept feed-rates of 0.4 and 0.8, or 2.0 g/min (± 5%) for lysozyme- and BSA-composed mixtures (Table 1), respectively. The screw speed was set to 100, 150, or 200 rpm. Extrudates were collected in the steady state of the extrusion process, and the amount of extruded formulation was 1.5 to 3.0 g depending on the feed-rates and process parameters.
Residence time distribution (RTD)
For TSE trials, a 5 mm twin-screw extruder (ZE 5, Three-Tec GmbH, Seon, Switzerland) was used with two different screw configurations (Fig. 1). PEG 20,000 powder was extruded at 60 and 63 °C at five different feed-rates of 0.4, 0.6, 1.0, 1.4, or 2.0 g/min. The screw speed was set to 100 or 200 rpm. In order to investigate a potential effect of protein concentration on residence time, MRT measurements of powder blends composed of PEG 20,000 and 0, 20, 40, or 60% protein at a fixed feed-rate of 0.6 g/min were also performed. Mean residence times (MRTs) were measured with iron oxide as a dye tracer (Sicovit® Red 30 E 172, BASF SE, Ludwigshafen, Germany) and calculated using a camera system (ExtruVis3, ExtruVis, Riedstadt, Germany).
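For orientation, the sketch below shows how a mean residence time can be computed from a baseline-corrected tracer signal; the Gaussian-shaped pulse is an invented placeholder and not the output format of the ExtruVis system, whose evaluation may differ.

```python
import numpy as np

def mean_residence_time(t, c):
    """Mean residence time from a tracer response curve.

    t : sample times in seconds
    c : baseline-corrected tracer intensity (arbitrary units)
    The exit-age distribution is E(t) = c(t) / integral(c dt) and
    MRT = integral(t * E(t) dt).
    """
    t = np.asarray(t, dtype=float)
    c = np.clip(np.asarray(c, dtype=float), 0.0, None)  # clip negative noise to zero
    e = c / np.trapz(c, t)                               # normalized exit-age distribution
    return np.trapz(t * e, t)

# Illustrative (made-up) tracer pulse recorded at the die over 120 s
t = np.linspace(0, 120, 241)
c = np.exp(-0.5 * ((t - 55.0) / 8.0) ** 2)               # hypothetical response peaking near 55 s
print(f"MRT = {mean_residence_time(t, c):.1f} s")
```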
Particle size distribution analysis of protein powder
The protein powders were analyzed in terms of particle size distribution and particle shape via dynamic image analysis (DIA) (Camsizer® X2, Retsch Technology, Haan, Germany). The continuous minimum diameter (d_xc,min) of the analyzed particles was used for the particle size determination (n = 4).
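As an aside, the reported d50 can be read from the cumulative Q3 distribution by interpolation, as sketched below with hypothetical size classes; this is an illustration, not the Camsizer evaluation routine.

```python
import numpy as np

def d_from_q3(sizes_um, q3_percent, q=50.0):
    """Interpolate the particle size at a given cumulative percentage.

    sizes_um   : ascending particle size classes (micrometer)
    q3_percent : cumulative volume distribution Q3 in percent
    """
    return float(np.interp(q, q3_percent, sizes_um))

# Hypothetical cumulative distribution (illustration only)
sizes = [50, 100, 150, 250, 400, 600, 800]
q3 = [2, 30, 55, 70, 85, 96, 100]
print(f"d10 = {d_from_q3(sizes, q3, 10):.0f} um, "
      f"d50 = {d_from_q3(sizes, q3, 50):.0f} um, "
      f"d90 = {d_from_q3(sizes, q3, 90):.0f} um")
```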
Mechanical single crystal analysis
For mechanical analysis, single-crystal compression tests were performed using a Hysitron TriboIndenter (TI 900, Bruker, Minneapolis, MN, USA) equipped with a 100 μm diameter diamond flat-punch probe. The protein crystals were dispersed on a flat sapphire substrate so that the average distance between them was >200 μm to ensure individual loading. BSA crystals were sieved with a 125 μm mesh sieve prior to sample preparation.
The size of each crystal was determined prior to each loading with the calibrated instrument optics by determining the longest axis of the projected surface and the length approximately perpendicular to it. The load function was displacement-controlled to a maximum of 10 μm for lysozyme and 20 μm for BSA crystals at a displacement rate of 1 μm/s.
Compression tests were performed in ambient air at a temperature of 22 °C and a humidity between 45 and 60%. After each compression test, the flat-punch probe was cleaned of any crystal debris. The force-displacement curves obtained in this way were analyzed for fracture events, which appear as discontinuities with a clear drop in force. The breaking force and breaking displacement were read from the first breaking event in each case.
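A minimal sketch of how the first fracture event could be picked automatically from a force-displacement curve is given below; the relative drop threshold and the synthetic curve are assumptions for illustration and do not correspond to the instrument software settings.

```python
import numpy as np

def first_fracture(displacement_um, force_uN, rel_drop=0.10):
    """Return (breaking displacement, breaking force) at the first discontinuity
    where the force drops by more than `rel_drop` relative to the running maximum;
    returns None if no fracture event is detected."""
    d = np.asarray(displacement_um, dtype=float)
    f = np.asarray(force_uN, dtype=float)
    running_max = np.maximum.accumulate(f)
    drops = running_max[:-1] - f[1:]                      # force lost between samples
    idx = np.where(drops > rel_drop * running_max[:-1])[0]
    if idx.size == 0:
        return None
    i = idx[0]
    return d[i], running_max[i]

# Synthetic loading curve with one breakage event (illustration only)
d = np.linspace(0, 10, 200)
f = 80.0 * d
f[d > 6.0] = 0.35 * 80.0 * d[d > 6.0]                     # sudden force drop at ~6 um
print("breaking displacement / force:", first_fracture(d, f))
```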
Scanning electron microscopy coupled with energy-dispersive X-ray spectroscopy (SEM-EDX)
A scanning electron microscope (SU 3500, Hitachi High Technologies, Krefeld, Germany), equipped with a backscattered electron detector (BSE), was used to investigate the morphology of the surfaces and cross sections of the prepared extrudates. BSE images were collected at an acceleration voltage of 5 kV in variable pressure mode at 30 Pa. The cut extrudates were placed on an aluminum stub. Samples were sputtered with a thin platinum layer (Sputter Coater, Cressington Scientific Instruments, Watford, England). Protein particle distribution was examined by elemental mapping of the cross sections of extrudates for the characteristic X-ray peak of nitrogen. The elemental distributions were investigated by SEM combined with an energy-dispersive X-ray detector (EDX) (EDAX Element-C2B, Ametek, Weiterstadt, Germany). The percentage of detected nitrogen was evaluated by the TEAM software (Version 4.4.1, Ametek, Weiterstadt, Germany).
Vacuum compression molding (VCM)
Samples for rheological and DSC measurements were prepared with a VCM tool (MeltPrep GmbH, Graz, Austria) as described by Treffer et al. (2015). In brief, setups with diameters of 20 mm (500 mg) and 5 mm (20 mg) were used for the rheological and DSC measurements, respectively. PEG 20,000 and the mixtures were molded at 65 °C for 5 min.
Melt rheology
An oscillatory, small-amplitude (SAOS) rheometer (HAAKE MARS III, Thermo Scientific, Karlsruhe, Germany) equipped with a roughened plate-plate geometry (d = 20 mm) was used. All experiments were conducted in controlled deformation AutoStrain (CD-AutoStrain) mode after equilibration of the samples for 5 min at the starting temperature with a gap height of 1.2 mm. PEG 20,000 and PEG 20,000-BSA mixtures containing 20, 40, or 60% w/w BSA were measured using the following parameters: amplitude 0.0018% at 1.0-10.0 Hz for the temperatures 62.8, 63.5, 64.0, 64.5 and 65.0 °C. The amplitude for each blend and for plain PEG 20,000 was confirmed to be in the linear viscoelastic range (LVR) by amplitude sweeps. A horizontal time-temperature superposition (TTS) was conducted where feasible, resulting in master curves shifted to 64.0 °C (Hajikarimi and Moghadas Nejad, 2021). The resulting master curve(s) were fitted to the power law

η = K · γ^(n-1)   (1)

where K is the consistency index (Pa·s), γ is the shear rate (s⁻¹) and n is the power law index. Subsequently, to allow simulations, an expression of the temperature dependency is necessary. Therefore, temperature sweeps with an underlying heating rate of 0.2 °C/min were conducted to allow the determination of the Arrhenius activation energy of flow (E_A) employing the following Eqs. (2) and (3):

|η*| = A · exp(E_A / (R·T))   (2)

ln |η*| = ln A + E_A / (R·T)   (3)

where |η*| is the measured complex viscosity (Pa·s), A is the pre-exponential factor, R is the gas constant (8.314 J·K⁻¹·mol⁻¹), T is the respective temperature (K) and E_A is obtained from the slope of the resulting Arrhenius plot.
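To make the two fits concrete, the sketch below fits the power law of Eq. (1) to a synthetic master curve and extracts an Arrhenius activation energy of flow from a synthetic temperature sweep; all numerical values are placeholders and do not reproduce the measured PEG 20,000 data.

```python
import numpy as np
from scipy.optimize import curve_fit

R = 8.314  # gas constant, J/(K*mol)

# --- Power-law fit of a master curve: eta = K * gamma**(n - 1) ---
def power_law(gamma, K, n):
    return K * gamma ** (n - 1.0)

gamma = np.logspace(0, 1, 10)                         # 1-10 1/s (frequency sweep range)
eta_meas = 950.0 * gamma ** (0.65 - 1.0)              # synthetic "measurement"
(K, n), _ = curve_fit(power_law, gamma, eta_meas, p0=(1000.0, 0.7))
print(f"K = {K:.0f} Pa*s, n = {n:.2f}")

# --- Arrhenius fit of a temperature sweep: eta = A * exp(E_A / (R*T)) ---
T = np.linspace(336.0, 338.5, 25)                     # K, roughly 63-65.5 degC
eta_T = 2.0e-3 * np.exp(45000.0 / (R * T))            # synthetic "measurement"
slope, intercept = np.polyfit(1.0 / T, np.log(eta_T), 1)
E_A = slope * R                                       # slope of ln(eta) vs 1/T equals E_A/R
print(f"E_A = {E_A / 1000:.1f} kJ/mol, A = {np.exp(intercept):.2e} Pa*s")
```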
Differential scanning calorimetry (DSC)
DSC studies of PEG 20,000, protein powder, and protein-loaded extrudates were performed with a DSC 2 (Mettler Toledo, Gießen, Germany) equipped with an autosampler, nitrogen cooling and nitrogen as purge gas (30 mL/min). The system was calibrated with indium and zinc standards. Extrudates were milled with a mortar and pestle. At least three samples of ~10 mg were accurately weighed in 40 μL aluminum crucibles with a pierced lid. DSC scans were recorded from 25 °C to 230 °C using a heating rate of 10 °C/min. STARe software (Mettler Toledo, Gießen, Germany) was employed for acquiring thermograms. Thermograms were normalized for sample weight. The heat capacity (input parameter for simulation) of PEG 20,000, pure BSA and lysozyme, and protein-PEG 20,000 mixtures was determined using a multifrequency temperature modulation (TOPEM® mode) with an underlying heating rate of 2 °C/min, a pulse height of 1 °C, from 25 °C to 100 °C and a constant nitrogen purge (30 mL/min).
Biological activity of lysozyme after extrusion
Lysozyme activity was determined by a fluorescence-based assay (EnzChek® Lysozyme Assay Kit, Molecular Probes Europe BV, Leiden, The Netherlands) using a suspension of Micrococcus lysodeikticus labeled with fluorescein. The assay determines the lysozyme activity on the cell walls of Micrococcus lysodeikticus, which are labeled to such a degree that fluorescence is quenched. The fluorescence increase was measured using a microplate reader with a fluorescein filter (VICTOR3™ Multilabel Plate Reader, PerkinElmer Life and Analytical Sciences, Shelton, Germany) and black OptiPlate™-96 F microwell plates. Preparation of the DQ Lysozyme substrate stock suspension and the lysozyme standard curve, as well as the assay procedure, were conducted according to the manufacturer's protocol. The reaction mixtures were incubated at 37 °C for 30 min, protected from light. The fluorescence intensity of each reaction in a microwell plate was measured at 494 nm (absorption maximum) and 518 nm (emission maximum). The fluorescence values obtained from the control without enzyme were subtracted.
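For illustration of the downstream calculation only, the sketch below converts blank-corrected fluorescence readings into relative lysozyme activity via a linear standard curve; the standard concentrations and readings are invented, and the actual evaluation followed the manufacturer's protocol.

```python
import numpy as np

# Hypothetical standard curve: blank-corrected fluorescence vs. lysozyme activity (U/mL)
std_activity = np.array([0, 12.5, 25, 50, 100, 250])          # U/mL
std_signal = np.array([0, 410, 820, 1650, 3300, 8200])        # a.u., blank-corrected

slope, intercept = np.polyfit(std_activity, std_signal, 1)

def activity_from_signal(signal):
    """Interpolate activity (U/mL) from a blank-corrected fluorescence reading."""
    return (signal - intercept) / slope

# Hypothetical triplicate readings for an extrudate sample vs. unprocessed powder
extrudate = activity_from_signal(np.array([3010.0, 3080.0, 2950.0]))
reference = activity_from_signal(np.array([3260.0, 3350.0, 3290.0]))
relative = 100.0 * extrudate.mean() / reference.mean()
print(f"relative activity = {relative:.1f} % of unprocessed lysozyme")
```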
Simulation of mean residence time
The one-dimensional (1D) simulation software Ludovic® V6.0 PharmaEdition (Sciences Computers Consultants, Saint Etienne, France) was used for the thermo-mechanical analysis. This approach allows the calculation of various parameters along the screw profile (e.g., temperature, residence time, etc.). The model assumes non-isothermal flow conditions and an instantaneous melting prior to the first restrictive screw element. Due to the unknown filling ratio of a starve-fed extruder, the computation starts at the die and proceeds backwards in an iterative procedure until the final product temperature is achieved. Input parameters were: (i) screw configuration, (ii) screw speed, (iii) temperatures of the segments, (iv) feed-rate, and (v) thermal characteristics of the extruded mixture (i.e., heat capacity, density, thermal conductivity: 0.16 W/m·K, melting temperature, and melt rheology data).
3D isothermal simulation of the die sections and the screw section
The calculation of the numerical equations was performed with the commercial software Ansys Polyflow® by Ansys Inc. (Canonsburg, PA, USA). The software is specifically suitable for HME processing and provides a finite element method solver for highly viscous media. Simulations were performed on a cluster server, computing on one node with 32 Intel Xeon Gold 6230 processors and 70 GB of random-access memory.

Fig. 1. Screw configurations of the 5 mm twin-screw extruder with: A, only conveying elements; B, a single kneading element; 5 mm pitch (green), 7.5 mm pitch (blue), 90° kneading element (red); segment 1 (ambient temperature), segment 2 (63 °C), and segment 3 (60 °C).
The geometric dimensions of the simulated dies correspond to the dimensions of the dies of the ram extruder and TSE used in the experiments. To reduce the computational effort, the geometries were quartered and calculated with symmetry planes. A mesh independence study resulted in computational meshes with 162,000 elements for the ram extruder die and 347,850 elements for the TSE die. A three-dimensional overview of the geometries and corresponding meshes can be found in the Supplementary Material (Fig. S1-S3). The boundary conditions for the ram extruder were chosen as a 1 mm/s normal velocity at the inflow, outflow conditions at the outlet, a 1 mm/s normal velocity at the reservoir wall, and a zero velocity condition at the conical die inlet wall and the die wall. The boundary conditions for the TSE die were chosen as a 0.15 g/min mass flow rate inlet at the inflow, outflow conditions at the outlet and zero velocity at all walls.
In order to simulate the rotating screw elements of the TSE, the mesh superposition technique was used (Avalosse, 1996). A detailed description of the application of this technique to TSE can be found in a recent work (Emin et al., 2021). The geometric dimensions of the simulated screw elements correspond to the conveying element (5 mm pitch) and the 90° kneading element shown in Fig. 1. A mesh independence study resulted in computational meshes with 245,000 elements for the barrel, 206,984 elements for each conveying element, and 237,711 elements for each kneading element. A three-dimensional overview of the geometries and corresponding meshes can be found in the Supplementary Material (Fig. S1-S3). The boundary conditions were chosen as a 0.6 g/min mass flow rate inlet at the inflow, outflow conditions at the outlet, and zero velocity at the outer barrel walls; the screw elements and inner barrel walls rotated at 200 rpm. While wall slip can occur at high shear rates in the extrusion process, it was neglected in the simulations for simplicity. Accordingly, the no-slip condition was assumed at the surface of the barrel and the rotating screw. Due to its symmetry, only half a revolution of the screw elements was simulated in 60 time steps.
The energy equation and gravitational acceleration were neglected. Linear velocities and linear pressure were chosen for the interpolation settings. Iterations on the viscosity were performed with a Picard scheme. The material parameters used for the simulations correspond to the 100% PEG batch of the extrusion trials. The density of the material was set to 1200 kg/m³. The viscosity was described using the power-law fit according to Section 2.9; the fitted model is given in Eq. (4). To further analyze the simulation results, particle tracking analyses were performed. For this purpose, 2000 massless and volumeless, non-interacting particles were randomly distributed at the inflow. Based on the solved velocity field, the particles move through the domain, and each point of the trajectories and its corresponding values were tracked and recorded.
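The post-processing of the tracked trajectories reduces each trajectory to its maximum shear rate and its residence time and builds cumulative number distributions from these values; the sketch below illustrates this reduction on randomly generated placeholder trajectories rather than the exported Polyflow results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder for exported trajectories: each is an array of (time s, local shear rate 1/s) samples
trajectories = [
    np.column_stack((np.sort(rng.uniform(0, 10, 50)),                    # time stamps along the path
                     rng.lognormal(mean=5.5, sigma=0.8, size=50)))       # local shear rate values
    for _ in range(2000)
]

max_shear = np.array([traj[:, 1].max() for traj in trajectories])        # per-trajectory maximum
residence = np.array([traj[:, 0].max() - traj[:, 0].min() for traj in trajectories])

def cumulative_number_distribution(values):
    """Return sorted values and their cumulative number fraction Q0."""
    v = np.sort(values)
    q0 = np.arange(1, v.size + 1) / v.size
    return v, q0

v, q0 = cumulative_number_distribution(max_shear)
median_q050 = np.interp(0.5, q0, v)                                      # Q0,50: median maximum shear rate
print(f"Q0,50 of the maximum shear rate: {median_q050:.0f} 1/s")
```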
Statistical analysis
Statistical analysis and testing for statistical significance were carried out using Prism (GraphPad Software Inc., La Jolla, USA).
Protein content in extrudates and protein particle distribution over the cross section of extrudates
The particle size distribution analysis of the used protein powder particles for the preparation of highly-loaded protein-extrudates showed a narrow particle size distribution for lysozyme (d 50 = 133 ± 7 μm), whereas BSA (d 50 = 444 ± 70 μm) showed a very broad particle size distribution (Fig. 2).
The degree of protein powder particle embedding in the polymeric matrix and the protein particle distribution over the cross section of extrudates produced by ram extrusion or TSE were analyzed by SEM-EDX (Fig. 3). The surface of the extrudates prepared by TSE appeared smoother compared to extrudates produced by ram extrusion. The cross-sectional cut of the extrudates prepared by ram extrusion showed no pores or cracks inside the extrudates, whereas the extrudates prepared by TSE showed only a few pores. Elemental mapping of the cross-sections showed an overall homogeneous distribution of protein particles in extrudates prepared by ram extrusion and TSE. Furthermore, the protein particles were only dispersed in the PEG matrix and were not dissolved or notably ground during extrusion. The protein recovery rate determined by HPLC was over 97% for all extruded samples.
Mechanical single crystal analysis
The mechanical properties of the protein crystals were determined using micro-compression experiments. Fig. 4A shows a representative force-displacement curve for lysozyme (gray) and BSA with a significant fracture event. For lysozyme, 46 crystals were examined, 44 of which showed one or more breakage events. For BSA, 28 crystals were tested, of which 13 showed an evaluable breakage event. If more than one fracture event occurred, only the first one was evaluated, as the number and size of the particles for the subsequent loading are unknown. Fig. 4B and C summarize the micro-compression results. The values for breakage force and breakage displacement were significantly lower for lysozyme than for BSA. However, BSA, with an average particle size of 94 μm, was also coarser than lysozyme, with an average particle size of 52 μm, for the crystals examined by micro-compression. Nevertheless, the fracture force for BSA was twice that of lysozyme in relation to the projected area. This ratio, which is similar to a fracture stress, tends to be a size-independent material parameter. For comparison, compression tests of native lysozyme crystals in mother liquor were reported by Cornehl et al. (2014), who determined a mean bursting force of only 238.3 ± 124.5 μN for a comparable particle size (48.7 ± 1.6 μm), which corresponds to a fracture stress of about 3.2 · 10⁴ N/mm². The dried crystals used in this work are therefore comparatively stable.
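The size-normalized comparison above amounts to dividing the breakage force by the projected crystal area; a small sketch of this calculation is given below, using the particle sizes of the micro-compression samples but placeholder forces, since the measured forces are only shown in Fig. 4.

```python
import numpy as np

def fracture_stress(force_uN, particle_size_um):
    """Breakage force divided by the projected crystal area (circular projection assumed).

    Returns the fracture stress in N/mm^2 (= MPa).
    """
    area_mm2 = np.pi / 4.0 * (particle_size_um * 1e-3) ** 2   # projected area in mm^2
    return force_uN * 1e-6 / area_mm2                         # force in N divided by area

# Particle sizes from the micro-compression samples; the forces are placeholder values
print(f"lysozyme: {fracture_stress(force_uN=1500.0, particle_size_um=52.0):.2f} N/mm^2")
print(f"BSA:      {fracture_stress(force_uN=9800.0, particle_size_um=94.0):.2f} N/mm^2")
```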
Unfolding temperature
Unfolding temperatures of lysozyme and BSA in protein-loaded extrudates produced by ram extrusion and TSE were determined by DSC (Fig. 5). The unfolding temperatures of the unprocessed protein powders served as references and were compared to those of the produced protein-loaded extrudates. Lysozyme- and BSA-loaded extrudates (20, 40, and 60% protein-load) prepared by ram extrusion showed no significantly reduced unfolding temperatures compared to the unprocessed neat protein powder. Processing of 40% lysozyme by TSE at 100, 150, and 200 rpm screw speed and at a constant feed-rate of 0.4 g/min shifted the unfolding temperature towards lower temperatures, whereas the unfolding temperatures of 60% lysozyme-loaded extrudates were not significantly reduced and were independent of the applied screw speed. The unfolding temperatures of 60% BSA-loaded extrudates were not reduced when produced either by ram extrusion or by TSE. 40% BSA-loaded extrudates prepared by TSE with conveying or kneading screw configuration showed significantly reduced unfolding temperatures.
Lysozyme activity after extrusion
The biological activity of lysozyme in 40% lysozyme-loaded extrudates was investigated immediately after ram extrusion and after TSE with conveying and kneading screw configurations, at 100, 150, and 200 rpm and at 63 °C. The activity of lysozyme embedded in melt extrudates by ram extrusion or TSE was higher than 90% and thus maintained the full biological activity when compared to the activity of the unprocessed lysozyme powder, which was 99.86 ± 5.29% (t-test: p < 0.05) (Fig. 6).
Residence time distribution
In order to receive information about the residence time distribution of the material within the extrusion barrel in dependence of feed-rate, screw configuration, and screw speed, MRT measurements were performed. Fig. 7 shows the determined MRT by an experimental and simulative approach for five feed-rates, two screw configurations and varying screw speed. For the simulation of MRT by Ludovic®, melt rheology data were key input parameters (refer to Supplementary Material, Figs. S4-6). The experimental MRT for a feed-rate of 0.4 g/min was found at 57.0 ± 0.2 s at 100 rpm and 47.4 ± 1.7 s at 200 rpm, with a comparably broad distribution using the conveying screw configuration, whereas simulation revealed MRTs of 67.1 s at 100 rpm and 63.3 s at 200 rpm. The experimental MRT measurements using the screw configuration with a single kneading element at a feed-rate of 0.4 g/min resulted in 68.3 ± 6.2 s at 100 rpm and 58.5 ± 1.1 s at 200 rpm. The simulated MRTs at a feed-rate of 0.4 g/min were found at 71.3 s and 68.1 s at 100 rpm and 200 rpm, respectively.
MRTs for 2.0 g/min at 100 and 200 rpm were found at 29.3 ± 1.0 s and 18.3 ± 0.5 s for the conveying screw configuration, and 33.4 ± 1.2 s and 21.6 ± 0.9 s for the kneading screw configuration, respectively. The MRTs obtained by simulation were 21.9 s and 19.2 s for the conveying screw configuration at 100 and 200 rpm, respectively, whereas the MRT data for the kneading screws were 21.9 s at 100 rpm and 19.8 s at 200 rpm. Higher feed-rates resulted in narrower residence time distributions (data not shown). The simulation accuracy of MRTs was higher at increasing feed-rates. For feed-rates of 1.0 g/min and higher, the simulated MRTs were within the experimental MRTs including their standard deviations. The smallest deviations between experimental and simulated MRT were found for the conveying and kneading screw configuration at a screw speed of 200 rpm.

Fig. 8 shows the MRT data obtained by experiments or simulations as a function of protein concentration at a fixed feed-rate of 0.6 g/min. For the conveying screw configuration, the MRT decreased with increasing protein concentration: comparing 0% to 60% protein-load, the experimental MRT decreased by 11.9 s at 100 rpm and 6.2 s at 200 rpm, and the simulated MRT by 12.9 s at 100 rpm and 12.0 s at 200 rpm. For the screw configuration containing a single kneading element, the MRT slightly increased with increasing protein concentration. For the mixture containing 60% protein, the MRT was prolonged compared to 0% protein-load by about 6.6 s at 100 rpm and 9.4 s at 200 rpm, while simulation of the MRTs at 100 and 200 rpm revealed an increase of 4.8 s and 7.3 s, respectively.

Fig. 9 shows color plots of the shear rate distribution in the dies of the ram extruder and the TSE. Low shear rates were found in the reservoir of the ram extruder. As the capillary narrows in the direction of flow, the flow is accelerated, resulting in increasing shear rates. In the area of the smallest radius, shear rates up to 1500 s⁻¹ were observed. In this area the shear rate increased over the radius of the die and reached its maximum at the wall. Likewise, the highest shear rate was also reached in the area of the smallest radius for the TSE, with values up to 150 s⁻¹. The shear rate profile is similar to that of the ram extruder, but with values 10 times lower.

Fig. 2. Particle size distribution of the protein powders (lysozyme and BSA); q3 is the cumulative distribution referring to the percentage (%) of particles below the given size (μm); error bars represent the standard deviation of four DIA measurements.

Fig. 5. Protein-loaded (lysozyme and BSA) extrudates prepared by ram extrusion (blue) or TSE with only conveying screw configuration (green), or screws containing a single 90° kneading element (orange), at 100, 150, or 200 rpm screw speed, at 63 °C; error bars represent the standard deviation of three measurements of the unfolding temperature by DSC; the line shows the melting temperature of the unprocessed protein (reference) and the dotted lines represent the standard deviation (n = 3); statistical significance is compared to the reference and depicted by asterisks (*): * p < 0.05, ** p < 0.01, *** p < 0.001.

Fig. 6. Biological activity of lysozyme in 40% lysozyme-loaded extrudates prepared by ram extrusion (blue) or TSE with only conveying screw configuration (green), or screws containing a single 90° kneading element (orange), at 63 °C and 100, 150, and 200 rpm screw speed; samples were analyzed immediately after extrusion processing; error bars represent the standard deviation of three measurements; the line shows the biological activity of the unprocessed lysozyme (reference) and the dotted lines represent the standard deviation (n = 3).

Fig. 7. Mean residence time as a function of screw configuration (conveying, kneading; screw speed: 100 and 200 rpm); 100% PEG 20,000 was fed into the extruder at five different feed-rates (0.4, 0.6, 1.0, 1.4, and 2.0 g/min); error bars represent the standard deviation of three MRT measurements.

Fig. 8. Mean residence time as a function of screw configuration (conveying, kneading; screw speed: 100 and 200 rpm) and protein-load (0, 20, 40, and 60% protein-PEG 20,000 mixtures) at a feed-rate of 0.6 g/min; simulated MRT is displayed in gray; experimental MRT in red; error bars represent the standard deviation of three MRT measurements; an unpaired t-test (two-sample assuming equal variances) was used; ns = not significant; significant differences are indicated by asterisks: * p < 0.05, ** p < 0.01, *** p < 0.001.
Mechanical stress history in extrusion processing
The shear rate distribution of a fully filled screw section of the TSE is shown for a pair of conveying elements and kneading elements in Fig. 10. For both types of elements, the highest shear rates were observed in the gap between the screw tip and the barrel wall as well as in the gap between the rotating elements. The maximum shear rates were similar for both element types, with values up to 1500 s⁻¹. For the kneading element, the shear rates were higher than 30 s⁻¹ in the whole area, whereas for the conveying elements low shear rates were also found in the channel.
Although the color plots of the different geometries provided information about the shear rate distribution, it was not possible to draw conclusions about the mechanical stress experienced by the material in these sections. Therefore, particle tracking analyses were performed with the solved velocity fields and particle trajectories were obtained. The maximum shear rate and residence time were calculated for each trajectory and cumulative distributions were created. Fig. 11 shows the cumulative distributions of the maximum experienced shear rate and the residence time distribution for the different dies and screw elements. As expected, the material experienced higher shear rates in the screw section than in the die section for both screw configurations, and in comparison to the shear stress in the ram extruder die. The median (Q0,50) was found at 237 s⁻¹ for the ram extruder die, 46 s⁻¹ for the TSE die, 1295 s⁻¹ for the conveying elements, and 1040 s⁻¹ for the kneading elements.
Regarding the residence time distribution, it can be observed that particles remained longer in the die of the TSE than in the ram extruder die. However, the residence times were shorter than in the simulated screw elements. Comparing the screw elements, a slightly longer residence time of the particles in the kneading block was observed, although the volume which passed the kneading block was similar and the mass flow was constant. The median (Q0,50) was found at 0.05 s for the ram die, 0.16 s for the TSE die, 7.4 s for the conveying elements, and 9.4 s for the kneading elements.
Discussion
While for the ram extruder melting and extruding, i.e., introducing mechanical stress, were applied sequentially, these processes were introduced simultaneously during TSE, and thermomechanical stress was certainly present for the unmolten and molten material during TSE processing. Monitoring and characterization of the magnitude and duration of the generated thermomechanical stress are often not feasible, which highlights the need for computational simulation to gain insights into small-scale HME processing, including temperature profiles, mechanical stress distributions, and RTDs. The particle tracking approach utilized describes infinitesimally small, massless particles moving through the resolved flow fields. Accordingly, the motion of actual protein particles was only partially replicated.
With regard to mechanical stress, the die area of both extruders was comparable and the highest shear rates were found close to the wall, gradually decreasing towards the center of the die channel. Particles dispersed in the PEG melt that passed the die closer to the center consequently experienced a lower shear stress than those closer to the wall. For TSE, the highest shear rates were present in the screw section, particularly in the gap between the screws and the gap between screw and barrel. The increased shear rate for the kneading element can be attributed to the different flow profiles resulting from the restriction of the 90° kneading discs.

Fig. 9. Color plots of the shear rate distribution of: A, ram extruder die; B, twin-screw extruder die.
The MRTs of the blends in the heated barrels were short, namely 3 min and <80 s for ram extrusion and TSE, respectively, and thus did not lead to a dissolution of protein particles but rather solely to a dispersion of protein particles in the PEG matrix. In this study, lysozyme and BSA particles could be regarded as fillers during extrusion, since the particle size of the used protein powders was identical for the starting material and the particles determined in the cross section of the extrudates. Ram extrusion facilitated sufficient dispersion of protein particles for protein-loads in the range of 20 to 60%. Consequently, the distributive mixing power of TSE was not necessary for the production of protein-loaded extrudates, and TSE was thus not superior in particle distribution compared to ram extrusion. The simulation of maximum shear rates raised the expectation of particle size reduction during extrusion, especially when TSE was used. Micro-compression analysis showed that BSA crystals were more ductile and less fragile than solid lysozyme particles (Fig. 4). However, the mechanical stress generated during ram extrusion and TSE was not high enough to break either the lysozyme or the BSA particles. Since the protein particles also showed an elastic deformation behavior, a compensation of local shear stress peaks during extrusion could be possible (Kubiak et al., 2021; Suzuki et al., 2018). However, a correlation of the simulated mechanical stress with the micro-compression analysis could not be established in this study, as more advanced approaches would be needed, such as coupling CFD simulations with DEM simulations (Frungieri et al., 2020).
Ram extrusion had the lowest impact on unfolding temperatures, as the main mechanical stress was generated only in the die for <1 s, as confirmed by the results of the 3D simulation. The ram extrusion process had, without exception, no negative effect on the unfolding temperatures of lysozyme and BSA in the various extrudates compared to the unprocessed proteins, and the unfolding temperatures were also independent of the protein-load. As experimental and simulated data revealed, ram extrusion provides a gentle approach for the production of protein-loaded extrudates.
In contrast, a protein-load effect on the unfolding temperature was observed for extrudates produced by TSE. Interestingly, a higher protein-load of 60% was protective, resulting in a less pronounced decrease of unfolding temperatures. Based on a correlation of the unfolding temperatures with the simulated temperature profiles along the TSE process and the MRTs, we hypothesize that the generated shear stress during TSE was distributed over a larger protein particle collective and thus protein-loads higher than 50% should be favored for the embedding of a thermally stable protein. For a better understanding of TSE processing, MRTs determined by TSE experiments and 1D simulation (Ludovic®) were compared. The experimental MRTs for the pure PEG and the 20% protein-polymer mixture were congruent with the simulated data (Fig. 7). In mixtures with 40% or 60% protein, the impact on the melt rheology behavior (Figs. S4-S6) and thus the MRT is no longer negligible and was reliably predicted by Ludovic®. The MRT in the kneading element was slightly increased, implying that protein particles were entrapped within the gaps of the kneading element, resulting in an increased viscous dissipation (see Fig. S7). This is confirmed by the results of the unfolding temperatures and provides an important correlation for the evaluation of the extrusion process design with respect to protein stability. The shorter simulated MRTs for the screws comprising the kneading element at 100 rpm might have resulted from a deviation between the apparent melt viscosity in the experiment and the melt viscosity anticipated by the 1D simulation (Fig. 8). Whereas the simulation anticipates a molten continuum from the beginning, in reality this condition needs to be established within the first part of the extrusion process. 1D simulation revealed that the highest temperature of the molten blend was reached on entering the die zone (Fig. S7). At 100 rpm the contribution of the introduced viscous dissipation to the overall melt energy was lower compared to 200 rpm (peak temperatures: 75.6 °C and 78.1 °C, respectively). As the kneading elements have no conveying functionality, the mass will pile up in front of the element. As the fully molten condition would in reality have been reached later at 100 rpm, this was likely the reason for the more pronounced deviation of experimental and simulated MRTs using segmented screws at low screw speed.

Fig. 11. A, Cumulative number distribution of the maximum shear rate along the particles tracked for the TSE die, the ram die, the conveying elements and the kneading elements; B, residence time distribution along the particles tracked for the twin-screw extruder die (area of smallest radius only), the ram extruder die (area of smallest radius only), the conveying elements and the kneading elements.
In contrast to the reliably simulated thermomechanical and temperature profiles and RTDs along the TSE barrel and the die, and as the torque could not be adequately measured in the 5 mm TSE experiments, the 1D simulation could not reveal the pressure distribution, which would otherwise be standard. For an increased simulation accuracy, improved instrumentation, especially for pressure and melt temperatures along the entire process length, would be necessary. The pressure in particular would be an excellent variable to validate the 3D simulations of the die section. Advanced instrumentation would facilitate improved model validation, as pressure and temperature profiles could be correlated. Unfortunately, such advanced instrumentation was not available for the applied extruders and will be part of further studies. Additionally, some simplifications during simulation, such as isothermal conditions, no wall slippage and no viscosity transition from the waxy condition into the melt, might have added to a higher variance. Consequently, the simulated pressure values are likely to be higher than in an experiment. Therefore, the simulations were primarily intended to compare geometries and to improve process understanding for the appropriate selection of protein-related process and formulation demands.
Conclusion
A fundamental and early starting point for the production of highly protein-loaded extrudates with sufficient protein stability, enabling an understanding of extrusion processing prior to protein formulation development of potential long-term release systems by small-scale extrusion processing, was provided. A mechanistic understanding and investigation of how process parameters such as type of extruder, feed-rate, RTD, and process-related stress factors (thermomechanical stress profiles along the ram extruder or TSE barrel, or in the extruder die) affect protein stability was hitherto not considered in the literature. In our study, several extrusion process characteristics and material properties, such as heat capacity, density, and melt rheology data of the investigated polymer-protein mixtures, have been considered as input parameters for 1D and 3D simulations. The combination of experimental and numerical approaches resulted in a better understanding of the influence of HME process parameters, mainly the type of extruder and the residence time distribution, on protein stability, with a special focus on thermal protein stability and a potential process-induced loss in lysozyme activity. The 1D and 3D simulation software Ludovic® and Ansys Polyflow®, respectively, proved to be valuable tools for the evaluation of extrusion process design and provided important insights into extrusion processing optimization and potential scale-up challenges at a small-scale level, as critical process steps and locations of thermomechanical stress hot spots along the process were identified. This approach also supports a rationale to identify appropriate extrusion process conditions for the production of highly protein-loaded extrudates and, in perspective, to define the best protein-polymer compositions. In particular, ram extrusion was identified as the favored method for the production of stable protein-loaded extrudates, since the thermomechanical stress was low and a homogeneous distribution of protein particles over the cross section was still achieved. However, so far no simple 1D simulation model is available for ram extrusion; thus, 3D simulations had to be employed for crucial process steps or areas. As the die in ram extrusion was identified as the most critical process area, this limitation is negligible. In contrast, TSE strongly profits from both 1D and 3D simulations, where 1D identifies critical process steps while 3D simulations boost the mechanistic understanding. Nonetheless, simulation via Ludovic® is expected to converge better with experimental data when using higher feed-rates and larger 9-, 11-, or 12-mm TSE with better instrumentation spread over the entire extrusion process (e.g., multiple temperature, die pressure, and torque monitoring). The present procedure provides a good starting point for ram extrusion and TSE trials for the production of highly loaded protein extrudates with sufficient protein stability, protein recovery rates and homogeneously distributed protein particles.
Declaration of Competing Interest
The authors declare no competing interests.
Data availability
Data will be made available on request.
Mathematical modeling of intrinsic motivation in reversal theory – Promoting exploration for AI agents
The function and performance of cloud-connected products such as AI speakers are continuously updated over time. Such updates are based on the user's exploration of unknown functions. Apter's reversal theory proposed a mental state termed the paratelic mode, in which one acts in order to explore purposes and to enjoy the action itself. We assume that the paratelic mode motivates users to explore the continuously updated functions of cloud-connected products, which enables them to make full use of these products. In this study, we aim to create a mathematical model that can explain the paratelic mode. We propose a model that explains the condition of the paratelic mode by integrating two principal motivation theories: Apter's reversal theory and Berlyne's optimal arousal level (OAL). We mathematically formulate the model by applying the Bayesian information gain as an index of arousal. By analyzing the model, we predict two hypotheses: a) when OAL is low, the lower the uncertainty, the more likely it is that the paratelic mode is achieved, and b) when OAL is high, the higher the uncertainty, the more likely it is that the paratelic mode is achieved. The experimental result of our previous study using an AI speaker supported the former hypothesis. In this study, we verify the latter hypothesis by conducting an experiment using two AI speakers with different uncertainties. The results showed that when OAL is assumed to be high, users are more likely to be in the paratelic mode for the AI speaker that was subjectively evaluated to have higher uncertainty.
INTRODUCTION
What is the process that leads to long-term use of a product? Krippendorff stated that there are three stages between the recognition of the product and its continuous use: perception, exploration, and reliance [1]. Here, exploration determines what the product can do and how it can be used. For cloud-connected products such as AI speakers, functions and services will expand over time. In such products, it is necessary to encourage users to continuously learn about the functions and performance that are updated over time. Therefore, in this study, we focused on the motivation for continuous exploration.
We consider that motivation to explore refers to the motivation to explore an unknown purpose. Ryan and Deci defined the motivation to act for a specific purpose as extrinsic motivation, and the motivation to act for potential satisfaction rather than for a certain purpose as intrinsic motivation [2]. Building on this classification of motivation based on the existence or absence of a purpose, Apter proposed two psychological states: the goal-oriented telic mode and the action-oriented paratelic mode. He proposed the reversal theory, in which the two psychological modes are reversed depending on the situation [3]. In the paratelic mode, people search for purposes. In this study, we consider that exploration could be encouraged by encouraging users to enter the paratelic mode. The reversal theory is a qualitative model, and there is no mathematical model that can be applied to product design. In addition, the conditions for achieving the paratelic mode in product use have not been clarified.
The purpose of this study is to determine a product design that motivates users to take exploratory actions. In order to achieve this, we propose a model that mathematically explains the paratelic mode of reversal theory. We apply the knowledge obtained from model prediction to the dialog interface in AI agent-equipped products and verify its effectiveness.
Correspondence between optimal arousal level and reversal theory
In Berlyne's arousal potential theory, when the horizontal axis represents the degree of arousal obtained from novelty and complexity and the vertical axis represents the degree of pleasant feeling (valence), the relationship shows an inverted U shape, as shown in Figure 1; it is supposed to form a Wundt curve [4]. Based on this model, emotions are neither pleasant nor unpleasant at low arousal levels and are unpleasant at high arousal levels. In between, there is an optimal arousal level that maximizes the pleasant feeling. Hereafter, we abbreviate the optimal arousal level as OAL.
According to the reversal theory, in the telic mode (goal-oriented), low arousal gives relief, and high arousal gives anxiety. In the paratelic mode (action-oriented), people get bored when their arousal is low, and they feel excited when their arousal is high [3]. In the telic mode, arousal and the degree of pleasant (hedonic tone) feeling are inversely proportional, and in the paratelic mode, they are proportional. Depending on the situation, OAL may fall in between the two curves, and the two psychological states are reversed, as shown in Figure 2. In this study, we considered that the monotonically increasing part of Berlyne's inverted U-shaped curve corresponds to Apter's paratelic mode, and the decreasing part corresponds to the telic mode. However, the idea that the low arousal part of one inverted U-shaped curve is the paratelic mode and the high-arousal part is telic mode differs from that of Apter's theory. Apter stated that even with the same arousal, there are different psychological states: relief and boredom where arousal is low, and anxiety and excitement where the arousal is high. Therefore, we consider that the OAL of Berlyne shifts in the horizontal axis depending on the situation, as shown in Figure 3. In the part surrounded by the square frame in Figure 3, both telic and paratelic modes can be achieved even with the same arousal. Our theory explains both theories in a unified manner. We verify this unified model using a mathematical analysis.
Mathematical model of information gain corresponding to arousal
Yanagisawa, the second author, proposed a perceptual model based on Bayes' theorem and experimentally verified this model [5]. Yanagisawa et al. further proposed that the arousal level of emotions is formulated using information gain (Bayesian surprise, or the KL divergence from the posterior to the prior) [6], which refers to the amount of information acquired after the experience of an event [7]. Assuming normal distributions for the prior and posterior of the Bayesian model, the information gain between them can be expressed by the function shown in Eq. (1):

G(δ, σ0, σn) = (1/2) · ln(1 + σ0²/σn²) + σ0² · (δ² - σ0² - σn²) / (2 · (σ0² + σn²)²)   (1)

Here, δ is the difference between the expected value of the prior distribution and that of the likelihood function; it represents the difference between the prior prediction and the actual stimulus and is called the prediction error. σ0 represents the uncertainty of the expectation, that is, the difficulty of making predictions. σn is the variation of the data, that is, the disturbance (noise) mixed with the sensory stimulus. Eq. (1) shows that the information gain is a function of the three parameters δ, σ0, and σn. We use Eq. (1) as a mathematical formulation of arousal.
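As a numerical illustration, the sketch below implements the Kullback-Leibler divergence from the Gaussian posterior to the Gaussian prior, which is one standard way to write the information gain of Eq. (1), and evaluates it for two uncertainty levels; the parameter values are illustrative only.

```python
import numpy as np

def information_gain(delta, sigma0, sigma_n):
    """KL divergence (nats) from the Gaussian posterior to the Gaussian prior.

    delta   : prediction error (difference between stimulus and prior prediction)
    sigma0  : uncertainty of the expectation (prior standard deviation)
    sigma_n : noise mixed with the sensory stimulus (likelihood standard deviation)
    """
    s02, sn2 = sigma0 ** 2, sigma_n ** 2
    post_var = s02 * sn2 / (s02 + sn2)           # posterior variance after the Bayesian update
    mean_shift = (s02 / (s02 + sn2)) * delta     # posterior mean minus prior mean
    return 0.5 * (np.log(s02 / post_var) + (post_var + mean_shift ** 2) / s02 - 1.0)

# Two uncertainty conditions with sigma0_high * sigma0_low > sigma_n**2 (illustrative values)
sigma0_high, sigma0_low, sigma_n = 2.5, 0.8, 1.0
for delta in (0.0, 1.0, 2.0, 3.0, 4.0, 5.0):
    g_high = information_gain(delta, sigma0_high, sigma_n)
    g_low = information_gain(delta, sigma0_low, sigma_n)
    print(f"delta = {delta:.0f}: G(high uncertainty) = {g_high:.2f}, G(low uncertainty) = {g_low:.2f}")
```

With these illustrative values, the two curves cross at an intermediate prediction error, which corresponds to the reversal analyzed in the next section.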
Analysis of the relationship between information gain and prediction error
A previous study analyzed how the relationship between information gain and prediction error changes depending on two states of uncertainty [7]. Consider two situations: one in which the uncertainty is high (σ0,1) and one in which it is low (σ0,2). The noise (σn) is fixed to one value, assuming that this variation can be controlled by the experimental system. When the product of the two uncertainties is larger than the square of the noise (σ0,1 × σ0,2 > σn²), the relationship reverses as the prediction error becomes large.
Hypotheses obtained from model analysis
Consider the case where the product of the uncertainties is greater than the square of the disturbance (σ0,1 × σ0,2 > σn²). In this case, from the analysis in Section 3.1, the relationship between the information gains under the two uncertainties is reversed as the prediction error increases (shown in Figures 4 and 5). In our model, OAL signifies the optimal information gain. Now consider two states: a state where OAL is low (Wundt curve on the left in Figure 3) and a state where OAL is high (Wundt curve on the right in Figure 3). In these two states, the OALs (the information gain at which valence peaks) are l and h, respectively. Figure 4 shows the ranges of the telic and paratelic modes when the optimal information gain is low. From the intersection of l with the information gain curves, the prediction error at the boundary between the telic and paratelic modes is larger when the uncertainty is low (δ1 < δ2). On the other hand, as shown in Figure 5, in the case of h, where OAL is high, the prediction error at the boundary between the telic and paratelic modes is larger for the higher uncertainty (δ1 > δ2). Remaining in the paratelic mode up to a larger prediction error means that it is easier to be in the paratelic mode.
From the above discussion, we propose the following two hypotheses depending on the location of the OAL: (a) When people are in a state of low OAL (l), they are more likely to be in the paratelic mode with lower uncertainty. (b) When people are in a state of high OAL (h), they are more likely to be in the paratelic mode with higher uncertainty.

Figure 4: Relationships between prediction error and information gain in different uncertainties. Range of the telic or paratelic mode when the optimal information gain is low.
Figure 5:
Relationships between prediction error and information gain in different uncertainties. Range of telic or paratelic mode when the optimal information gain is high.
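To make the boundary argument of Figures 4 and 5 concrete, the sketch below numerically solves for the prediction error at which the information gain reaches a given OAL under the two uncertainty conditions; the Gaussian form of Eq. (1) and all numerical values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq

def information_gain(delta, sigma0, sigma_n):
    """Gaussian-Bayesian information gain as a function of prediction error,
    uncertainty and noise (same form as in the previous sketch)."""
    s02, sn2 = sigma0 ** 2, sigma_n ** 2
    return 0.5 * np.log(1.0 + s02 / sn2) + s02 * (delta ** 2 - s02 - sn2) / (2.0 * (s02 + sn2) ** 2)

def boundary_delta(oal, sigma0, sigma_n, d_max=50.0):
    """Prediction error at which the information gain reaches the OAL,
    i.e. the boundary between the paratelic and the telic range."""
    return brentq(lambda d: information_gain(d, sigma0, sigma_n) - oal, 0.0, d_max)

sigma_high, sigma_low, noise = 2.5, 0.8, 1.0   # illustrative; sigma_high * sigma_low > noise**2
for label, oal in (("low OAL", 0.7), ("high OAL", 1.5)):
    d_high = boundary_delta(oal, sigma_high, noise)
    d_low = boundary_delta(oal, sigma_low, noise)
    print(f"{label}: boundary prediction error  high uncertainty = {d_high:.2f}, "
          f"low uncertainty = {d_low:.2f}")
```

With these illustrative numbers, the paratelic range extends to a larger prediction error for the low-uncertainty condition when the OAL is low, and for the high-uncertainty condition when the OAL is high, matching hypotheses (a) and (b).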
The experimental results of our previous study supported hypothesis (a) using AI speakers [8]. Those findings, however, were not obtained as a test of the hypothesis derived from the proposed mathematical model. If hypothesis (b) is additionally confirmed under conditions in which OAL is higher than in the previous study [8], the prediction of the unified model can be verified.
Method
To verify the hypothesis derived from the analysis of the proposed model, we conducted an experiment using two AI speakers, as in the previous study [8]. We controlled the parameters by having the participants ask each of the two AI speakers a set of prepared questions as a learning phase. The speaker with low uncertainty always answers easy questions correctly and always answers difficult questions incorrectly. In contrast, the speaker with high uncertainty sometimes answers easy questions incorrectly and difficult questions correctly.
According to Figure 6, the range of the paratelic mode for the same information gain (arousal) is wider when the OAL is at a higher position (h) than when it is at a lower position (l). We consider that OAL increases when the user is already engaged with the product in the paratelic mode. We therefore prepared questions that would allow the participants to engage in the paratelic mode under both the low- and high-uncertainty conditions. We assessed the effect of this learning phase by asking the participants to subjectively evaluate in a questionnaire whether they felt paratelic or telic motivation toward the AI agent. Figure 7 shows the average motivation ratings of the 20 participants. The degree of paratelic mode was 0.975 out of a maximum of 5 in the condition with high uncertainty, and the degree of telic mode was 0.525 in the condition with low uncertainty. There was a significantly higher degree of paratelic mode for the higher level of uncertainty.
DISCUSSION
Mathematical analysis of the proposed model in Figure 3 yields the two hypotheses shown in Section 3.2.
In hypothesis (a), if OAL is assumed to be low, people are more likely to go into the paratelic mode in situations with low uncertainty. The results of our previous study verified this hypothesis [8]. Apter proposed the notion of a protective frame as a condition for promoting the paratelic mode [9]. The protective frame theory states that if safety is guaranteed, a person will be in a psychological state that seeks more danger and excitement, that is, the paratelic mode. In state (a), we consider that the ease of making predictions about a product becomes a protective frame regarding the safety and credibility of the product. Therefore, it becomes easier to enter the paratelic mode.
In hypothesis (b), when OAL is assumed to be high, people are more likely to go into the paratelic mode in situations with larger uncertainty. In the subjective evaluation of this experiment, we found a significant difference indicating that the AI speaker with high uncertainty promoted more paratelic motivation. This finding supports the prediction of the model. According to Figure 6, when OAL is high, there is a wide range of the paratelic mode for the same information gain. We consider that the user is already engaged with the product and in the paratelic psychological state. In state (b), they trust the relationship with the product itself and develop a protective frame. Therefore, we consider that the more difficult it is to predict the behavior of a product, the more curious people will be about it and the more motivated they will be to explore the product.
CONCLUSION
In this study, we proposed a mathematical model that explains motivational states based on theories such as the reversal theory and the arousal potential theory. We derived two hypotheses (shown in Section 3.2) by analyzing the mathematical model, using information gain as the arousal level. Our previous study supported the first hypothesis [8]. We conducted an experiment to verify the second hypothesis: when OAL is high, the higher the uncertainty, the more likely it is that the paratelic mode is achieved. We found a significant difference in that participants had more paratelic motivation for the AI speaker with higher uncertainty. Based on the findings of the previous study and the subjective evaluation of this study, we verified the hypotheses and predictions derived from the model.
MicroRNA-210 Regulates Mitochondrial Free Radical Response to Hypoxia and Krebs Cycle in Cancer Cells by Targeting Iron Sulfur Cluster Protein ISCU
Background: Hypoxia in cancers results in the upregulation of hypoxia inducible factor 1 (HIF-1) and of a microRNA, hsa-miR-210 (miR-210), which is associated with a poor prognosis. Methods and Findings: In human cancer cell lines and tumours, we found that miR-210 targets the mitochondrial iron sulfur scaffold protein ISCU, required for assembly of iron-sulfur clusters, cofactors for key enzymes involved in the Krebs cycle, electron transport, and iron metabolism. Downregulation of ISCU was the major cause of induction of reactive oxygen species (ROS) in hypoxia. ISCU suppression reduced mitochondrial complex 1 activity and aconitase activity, caused a shift to glycolysis in normoxia and enhanced cell survival. Cancers with low ISCU had a worse prognosis. Conclusions: Induction of these major hallmarks of cancer shows that a single microRNA, miR-210, mediates a new mechanism of adaptation to hypoxia, by regulating mitochondrial function via iron-sulfur cluster metabolism and free radical generation.
Introduction
Hypoxia is a major physiological difference between tumours and normal tissue, mainly generated by tumour growth with inadequate blood supply and consumption of oxygen by tumour cells [reviewed in [1]]. Hypoxia induces a complex transcriptional response mainly via induction of hypoxia inducible factor 1a (HIF1a), affecting many biological processes such as the glycolytic pathway, angiogenesis, pH regulation, invasion and immortalisation [2].
An emerging paradigm in hypoxia is that mitochondria produce reactive oxygen species, mediated by electron transport continuing in hypoxia [3]. This free radical pathway contributes to upregulation of HIF [4] and enhanced growth in vivo [5], yet may also be toxic. A variety of pathways induced by HIF have already been reported to protect from the latter effect; for example, induction of pyruvate dehydrogenase kinase inhibits the pyruvate dehydrogenase enzyme complex, blocking the conversion of pyruvate to acetyl coenzyme A, the first step in the Krebs cycle [6], and enhances lactate production [7]. Mitophagy can be induced by the BH3 domain protein BNIP3 [8], and cytochrome c oxidase subunits may switch to more efficient ones [9].
Recently microRNAs (miRs), which are small (~22 nt) noncoding RNAs that regulate post-transcriptional gene expression by blocking translation of target mRNAs or by accelerating their degradation [10,11], have been reported to be induced by hypoxia. However, few of their targets or mechanisms of action are known [reviewed in [12]]. We and others [13] have shown miR-210 is robustly induced by hypoxia in many cell lines, via HIF1a [14]. We recently analysed its expression in a series of 216 breast cancer patients and showed miR-210 expression was correlated with many HIF1a targets at mRNA level (as measured by a hypoxia metagene) and was strongly associated with poor patient survival. Derived only from sequence-based algorithms, some of the previously validated targets of miR-210 include Ephrin A3 [15], E2F3 [16], RAD52 [17], CASP8AP2 [18], and MNT [19]. We combined publically available algorithms, with our gene array datasets, to predict potential miR targets of importance in cancer cells. We found that the mitochondrial iron sulfur cluster homologue ISCU was the highest predicted target for miR-210. Recently, ISCU has been identified as a miR-210 target also in normal pulmonary endothelial cells [20], where it contributes to the Pasteur effect and controls the level of ROS production in hypoxia, suggesting its potential adaptive role to hypoxia in the context of pulmonary endothelium.
Iron sulfur clusters [Fe-S] are present in the active sites of many enzymes and proteins, critical for their activity and capable of conferring regulation by redox status [21]. These clusters are assembled in mitochondria [22] by a complex series of chaperones and enzymes including ISCU, then exported to the cytoplasm, where they are assembled into the relevant protein [23]. Amongst the Fe-S cluster proteins involved are several that comprise key components of complex I, II and III in the mitochondria, and components of the Krebs cycle such as succinate dehydrogenase and aconitase. The cytoplasmic form of the latter regulates iron metabolism via its function as a translational regulator-IRP1 [24].
In this report we show the major biological effects of miR-210 targeting ISCU, all of which are likely to contribute to important phenotypes in cancer. By downregulating ISCU, miR-210 decreases Krebs cycle enzyme activity and mitochondrial function, provides a major mechanism for the increased free radical generation in hypoxia, increases cell survival under hypoxia, induces a switch to glycolysis in normoxia and hypoxia (Warburg and Pasteur effects) and upregulation of the iron uptake required for cell growth. Importantly, analysis of over 900 patients with different tumour types showed that suppression of ISCU is strongly correlated with a worse prognosis. This study thus reveals a new pathway activated in hypoxic tumours, mediated by miR-210 affecting mitochondrial enzyme activity and free radical generation and highlights the importance of mitochondrial metabolism in hypoxia biology [3].
Selection of ISCU as a potential target
We compared miR-210 expression in our published series of breast cancers [14] with our hypoxia metagene of clustered mRNAs [25]; combined assessment with target prediction algorithms showed that ISCU was the highest predicted target, and three known target genes also sat in highly ranked positions selected by this approach (Supporting Material and Methods S1, Supporting Table S1).
MiR-210 and ISCU mRNA undergo reciprocal regulation
In agreement with previous publications [13,14], miR-210 was robustly induced in MCF7 and HCT116 by hypoxia (1% or 0.1% oxygen) at 24 and 48 hrs, with maximal induction observed at 48 hours in 0.1% oxygen ( Figure 1A and Supporting Figure S1A). ISCU mRNA quantified by RTPCR was inversely correlated to miR-210 and decreased most significantly when cells were exposed to more severe hypoxia for longer periods ( Figure 1B and Supporting Figure S1B). Similar to the regulation of miR-210 under hypoxia, ISCU suppression at the transcript level was dependent on HIF1a and not HIF2a (Supporting Figures S1C and S1D). Furthermore, the phenomenon of reciprocal regulation of miR-210 and ISCU by hypoxia was also found in many other cell lines (Supporting Figures S1E and S1F).
Mimic-210 suppressed ISCU in normoxia and anti-210 reversed the suppression in hypoxia
We recapitulated the observed hypoxic induction of miR-210 by transfecting MCF7 and HCT116 with mimic-210 in normoxia. Mimic-210 suppressed ISCU mRNA levels by approximately 60% ( Figure 1C and Supporting Figure S2A). We then antagonised miR-210 induced under hypoxic conditions with anti-210. Inhibition of endogenous miR-210 significantly rescued the suppression of ISCU mRNA, although this was not complete ( Figure 1C and Supporting Figure S2A).
ISCU has two main alternatively spliced isoforms. ISCU1 has an alternative N-terminal sequence and is localised to the cytosol, while ISCU2 is associated with the mitochondria. To confirm the specificity of the antibody used in these studies, MCF7 cells were treated with siRNAs against ISCU. The band detected in the control cells was completely suppressed upon transfection with siISCU1 and siISCU3 (Supporting Figure S2B). The two isoforms of ISCU could not be distinguished by Western blotting of whole cell lysates because of the minor difference in molecular weight. However, the immunoreactive band was detected in both the cytosolic and mitochondrial fractions (Supporting Figure S2C). This band is referred to as ISCU protein and its molecular weight is compatible with the results of Tong et al. [23].
Under hypoxia, MCF7 and HCT116 cells showed a reduction in ISCU protein, and this was replicated with mimic-210 in normoxia ( Figure 1D and Supporting Figure S2D). Moreover, anti-210 partially reversed the hypoxic suppression of ISCU protein ( Figure 1D and Supporting Figure S2D). Hypoxia strongly suppressed a luciferase reporter containing the 3'UTR of ISCU that included the putative miR-210 target site and this suppression could be partially rescued by transient transfection of anti-210 (Supporting Figure S2E). In addition, mimic-210 substantially suppressed the luciferase activity compared to the control in normoxic MCF7 cells (Supporting Figure S2E). Finally, while siRNA against ISCU led to a downregulation of both endogenous ISCU and an exogenous tagged ISCU lacking the 3'UTR, mimic-210 mediated downregulation was only observed on the endogenous ISCU protein (Supporting Figure S2F). Taken together, these observations demonstrate that miR-210 downregulates ISCU mRNA and protein levels by targeting its 3'UTR.
Effects of ISCU downregulation on Fe-S proteins
An expected effect of downregulation of ISCU would be a reduction of Fe-S delivery to target proteins. Therefore we analysed the impact of ISCU repression on two enzymes that require Fe-S for their activities, aconitase and mitochondrial complex I. siRNA against ISCU significantly reduced aconitase activity in MCF7 and HCT116 cells compared to control cells in normoxia ( Figure 2A). A similar decrease in aconitase activity was evident in both cell lines upon transfection of mimic-210 ( Figure 2A). However there was no change in aconitase protein levels as assessed by Western blot (Supporting Figure S3A). Aconitase depleted of Fe-S acts as the translational regulator-IRP1, to increase uptake of iron. We found the mimic-210 increased iron uptake in HCT116 cells (Supporting Figure S3B). There was also clear inhibition of mitochondrial complex I activity induced by mimic-210, in the two cell lines ( Figure 2B). In both cell lines there was down regulation of activity by hypoxia, to a similar extent to that induced by mimic-210 or by ISCU depletion. Moreover, transfection of anti-210 in MCF7 cells was able to significantly increase aconitase activity in hypoxia ( Figure 2C).
As the decrease in aconitase and complex I activity reduces the Krebs cycle and mitochondrial function we investigated whether there was a shift to glycolysis and lactate production. Our results showed a highly significant reduction of pyruvate and increase in lactate in normoxia with the mimic-210, with an increase in lactate pyruvate ratio. There was also a decrease in lactate production with the anti-210 in hypoxia ( Figure 2D).
Effects of ISCU downregulation on free radical production
The loss of Fe-S from complex I is likely to affect the transport of electrons in the electron transport chain and impact on free radical production. We therefore measured the production of superoxide in MCF7 and HCT116 cells. In normoxia, both cell lines showed a marked increase in free radical production upon transfection of mimic-210 compared to mimic-control ( Figure 3A).
In hypoxia we noted a highly significant increase in superoxide production, which was nearly completely reversed by anti-210 ( Figure 3B). Additionally, transfection of the ISCU2 construct nearly completely reversed the free radical induced by miR-210 ( Figure 3C). This demonstrates a major new mechanism for regulation of ROS in hypoxia, and that it is not a passive effect of reduced oxygen availability.
Effects of miR-210 on apoptosis and survival in normoxia and hypoxia
Previous studies in yeast have shown the lethal consequences of the ISCU homologs deletion [23]. This notion led us to investigate the effects of miR-210 on apoptosis in normoxia and hypoxia. There was a striking difference in the effects of mir-210 in these opposing conditions. In normoxia, mimic-210 substantially increased apoptosis as measured by annexin V staining, but in hypoxia, antagonism of miR-210 increased apoptosis ( Figure 4A).
To evaluate the effects of miR-210 on cell proliferation under hypoxic conditions, a clonogenic assay was used with a locked nucleic acid (LNA) inhibitor of miR-210 (Figure 4B). Although most of the reduction in clonogenicity in hypoxia is clearly via other mechanisms, antagonism with anti-210 resulted in decreased clonogenic survival in hypoxia, which we also found in a more hypoxia-resistant cell line, HeLa cells (Figure 4C).
In vivo suppression of ISCU by hypoxia and clinical significance of ISCU expression
We investigated whether expression of ISCU gene expression was regulated in vivo, by studying human tumour xenografts. Xenografts of the glioblastoma cell line U87 were treated with the VEGF inhibitor Avastin (Bevacizumab), or with vehicle control. Immunohistochemistry of these tumours demonstrated Avastin-induced necrosis, expression of HIF1a and the HIF target genes CA9 and VEGF (data not shown). We analysed the mRNA from the tumours and found marked upregulation of miR-210 and reciprocal downregulation of ISCU mRNA ( Figure 4D).
The only data available for comparison of miR-210 with ISCU RNA in human tumour samples are from our breast and head and neck series. There was a highly significant inverse relationship of miR-210 to ISCU expression in 216 patients with breast cancer (rho = -0.39, p < 0.001) (Figure 5A, left). There was a highly significant inverse correlation of ISCU with more aggressive, high-grade tumours (p = 0.008) and with poor relapse-free survival (Figure 5A, right). In a multivariate analysis, ISCU remained significant (p = 0.028), along with nodes, grade and age (Supporting Table S2). In two other series of breast cancer (Rotterdam [26], 286 cases; Uppsala [27], 235 cases) low ISCU was a poor prognostic factor in multivariate analysis (Supporting Tables S3 and S4) (p = 0.038 and 0.015 respectively). In the Uppsala series [27] we could assess the precursor miR-210 using the Affymetrix probes (Affymetrix U133b, 230710_at) and this also showed an inverse relationship of the precursor to ISCU and poor survival (Figure 5B). In our head and neck cancer series [25], there was an inverse relationship of these variables (Figure 5C, left). In a series with published follow up, Chung et al. [28], there was a strong association of suppressed ISCU with poor outcome (Figure 5C, right). Analysis of expression in normal tissues versus tumour tissues in 9 tumour data sets using Oncomine showed suppression compared to normal tissues in all studies (Supporting Figure S4).
[Residue of the Figure 4 legend: measurements in MCF7 and HCT116 cells treated with miR-210 inhibitor or mimic after 48 h at 0.1% oxygen or normoxia; clonogenic survival of MCF7 cells (scrambled LNA vs. miR-210 LNA) and HeLa cells (anti-ctrl vs. anti-210); RT-PCR for ISCU and miR-210 in bevacizumab-treated U87 xenografts. Mean ± s.e.m.; * p < 0.05, ** p < 0.01, *** p < 0.001. doi:10.1371/journal.pone.0010345.g004]
Discussion
Our studies clearly show miR-210 regulates the expression of ISCU RNA and protein using the mimic in normoxia in cancer cell lines. ISCU mRNA and protein were suppressed under hypoxia and this effect was significantly reversed by anti-210. The incomplete reversal implies other mechanisms are also involved in ISCU downregulation and it is well recognised that RNA translation is reduced by hypoxia via the unfolded protein response [29].
A decrease in ISCU would impair the ability of cells to generate Fe-S clusters and would subsequently impact on the activity of enzymes that require these co-factor moieties. We showed that aconitase, a key enzyme of intermediary metabolism that requires Fe-S clusters, had decreased activity following ISCU reduction. Of specific relevance to cancer is the dual role of aconitase as IRP1 when it lacks its Fe-S cluster. IRP1 regulates the stability and translation of mRNA for transferrin and ferritin [24]. The effects of loss of aconitase would be to modify iron metabolism and increase its uptake, which is critical for tumour growth. Indeed, we observed an accumulation of intracellular iron upon transfection of cells with mimic-210 or siISCU.
Mutations in ISCU give rise to lactic acidosis in individuals after moderate exercise, providing strong evidence for the importance of this target in maintaining efficient Krebs cycle activity [30,31]. An associated effect from decreased ISCU levels and subsequent decrease in the Krebs cycle activity was a switch to glycolysis and increased lactate production. This was induced in normoxia by miR-210 and thus is similar to the Warburg effect. This may be relevant in situations where miR-210 upregulation occurs in normoxia. For example, this has been shown in renal cancer cell lines with mutations in the von Hippel Lindau protein [14]. Conversely, in hypoxia, anti-210 reduced lactate production, reversing the Pasteur effect (switch to glycolysis in hypoxia).
Iron-sulfur clusters are integral for efficient activity of the mitochondrial electron transport chain. We have shown that Complex I activity is decreased under hypoxia and upon transfection of mimic-210 in normoxia. In addition to Complex I, Complexes II and III also contain Fe-S clusters and we speculate that the activities of all three complexes could be decreased in a miR-210 dependent manner. An impaired electron transport chain would lead to an increase in free radical production as a consequence of electron leakage. In particular, complex III is a major contributor to ROS in hypoxia [32] and complex I and II contribute about 30% of the ROS [33]. In normoxic conditions, transfection of mimic-210 led to increased free radical production. Furthermore, the normoxic ROS production observed with mimic-210 was almost completely prevented with a miR-210 resistant ISCU construct. Conversely, in hypoxia, we observed that the induction of miR-210 was associated with an increase in ROS, and this could be almost completely reversed by anti-210. This provides a major new link to explain the mechanism of ROS induction in hypoxia, which has been reported by several groups [33] and may account for the majority of hypoxic ROS induction. The ROS released in hypoxia have been shown to function as O 2 sensors, acting as signalling agents that activate HIF [32], mediate tumour growth in vivo via a HIF dependent mechanism [5] and extend lifespan of cells [34].
In contrast to the hypoxic induction of ROS and alleviation with anti-210 observed in our study, Chan et al. could not measure any significant change in ROS production after exposure to hypoxia, and showed an inverted trend when miR-210 was inhibited [20]. The reason for this discrepancy is unclear but may potentially reflect an underlying difference in cancer compared to normal endothelial cells.
Finally, we analysed whether there was any association of ISCU downregulation in primary human cancers with hypoxia and outcome, and found an aggressive phenotype was associated with this change in studies of over 1000 patients, and downregulation compared to normal tissues in many tumour types.
In conclusion, we have found a major new mechanism for the response of cancer cells to hypoxia, co-ordinated by miR-210 suppression of ISCU and the subsequent decrease in activity of iron-sulfur cluster proteins (Supporting Figure S5). In addition to aconitase, many metabolic enzymes require Fe-S clusters for their activity suggesting that downregulation of ISCU would have far reaching consequences on cell metabolism. Mutations in many of these Fe-S cluster enzymes including succinate dehydrogenase subunits SDHD, SDHB and SDHC [35] and fumarate hydratase [36] are implicated in hyperproliferative disorders and cancer, demonstrating an important link between cellular metabolism and subsequent transformation, and highlighting a role for HIF, via a miR-210-ISCU axis, in these processes. In addition to metabolic enzymes, several DNA repair enzymes [37] with Fe-S clusters are potential targets. We have previously shown miR-210 can be detected at an elevated level in serum of cancer patients [38] so it will be of interest if these levels reflect the metabolic state of the primary cancer and hence could be used in future therapy selection. The regulation of distinct biological pathways by ISCU downregulation suggests a potential therapeutic approach of synthetic lethality, whereby drugs that mediate DNA damage repaired by the iron-sulfur cluster containing helicases Rad3 [39] or Fanconi anaemia proteins [37] could be used in synergy with PARP inhibitors, or with inhibitors of glycolysis [36] and glutaminolysis, in the subgroup of patients with the highest miR-210 levels.
While this work was in preparation, three new miR-210 targets were validated, i.e. HOXA1, HOXA9 and FGFRL1, and the analysis of tumour xenografts derived from cancer cell lines overexpressing miR-210 suggested a potential involvement of miR-210 in tumour growth initiation [40].
Cell culture
Cell exposure to hypoxia (1%, 0.1%, or 0.01% oxygen) was undertaken in a hypoxia incubator (MiniGalaxy A, RS Biotech, Scotland, UK), using a continuous flow of a humidified mixture of 95% N2 and 5% CO2.
miRNA mimic or inhibitor transfection
Transfection was performed with Dharmacon anti-210, miR-210 mimic, or control oligos (Thermo Scientific, CO, USA), at a final concentration of 20 nM, using Optimem serum-free medium and Oligofectamine reagent (both from Invitrogen, Paisley, UK) as per the manufacturer's protocol.
Transfection and luciferase assays
Luciferase reporter plasmids containing the 3' untranslated region (UTR) of ISCU or a random genomic sequence (control) were obtained from SwitchGear Genomics (Menlo Park, CA). A Renilla luciferase expression plasmid (pRL-TK vector, Promega, Southampton, UK) was used as transfection control.
Enzyme activity assays
Aconitase activity was measured using the Bioxytech Aconitase-340 assay (Oxis International Inc, Beverly Hills, CA). Data are presented as % of activity compared to control.
Cells were assayed for Complex I activity using the Mitoprofile Complex I Rapid Elisa Kit (MitoSciences, Eugene, OR, USA) according to manufacturer's instructions. Data are presented as % of activity compared to control.
Lactate and pyruvate measurements
Lactate and pyruvate were assayed using kits from Instruchemie (Delfzijl, The Netherlands).
Survival analyses and other statistical analyses
Spearman correlation was used for continuous variables and the Wilcoxon test for categorical variables. The log-rank test was used for univariate analysis, and a Cox model was used for multivariate survival analysis. Distant relapse-free survival (DRFS) and relapse-free survival (RFS) were calculated by the STEEP criteria [44]. Statistics were carried out using unpaired t-tests and significance is represented by: * p < 0.05, ** p < 0.01, and *** p < 0.001.
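A minimal sketch of how the survival statistics described above could be reproduced on a generic patient table is given below. It uses scipy and the lifelines package on synthetic stand-in data; the column names, the synthetic data, and the grouping by median ISCU are assumptions for illustration, not the authors' actual analysis pipeline.

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n = 200
# Synthetic stand-in data; a real analysis would use the patient series.
iscu = rng.normal(size=n)
mir210 = -0.4 * iscu + rng.normal(scale=0.9, size=n)        # inverse relation
time = rng.exponential(scale=np.exp(0.3 * iscu), size=n)    # low ISCU -> shorter RFS
event = rng.integers(0, 2, size=n)                          # 1 = relapse observed

# Spearman correlation between miR-210 and ISCU expression
rho, p = spearmanr(mir210, iscu)

# Univariate comparison: log-rank test between low- and high-ISCU groups
low = iscu < np.median(iscu)
lr = logrank_test(time[low], time[~low],
                  event_observed_A=event[low], event_observed_B=event[~low])

# Multivariate Cox model (ISCU adjusted for clinical covariates)
df = pd.DataFrame({"time": time, "event": event, "ISCU": iscu,
                   "age": rng.normal(55, 10, n), "grade": rng.integers(1, 4, n)})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")

print(rho, p, lr.p_value)
print(cph.summary[["coef", "p"]])
```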
Supporting Information
Materials and Methods S1.
Knockdown of ISCU with siISCU3 led to an accumulation of ferric iron compared to control siRNA transfected HCT116 cells. This effect was reproduced when cells were transfected with mimic-210 compared to mimic-ctrl. HCT116 cells were transfected as described above and treated with 100 mg/L Ferric Ammonium Citrate for 16 hours. The cells were then fixed and stained with 4% Formalin and Perl's solution (1% K4Fe(CN)6 and 1% HCl) for 30 minutes at room temperature. Cells were then incubated with 0.75 mg/ml diaminobenzidine (DAB), H2O2 in 1 M Tris pH 7.5 for 60 minutes. The reaction was then completed by washing the cells in PBS. Original magnification: 100x. Found at: doi:10.1371/journal.pone.0010345.s004 (1.19 MB TIF)
Figure S4 Oncomine data. The Oncomine website (Oncomine.org) was searched for microarrays containing the gene ISCU. When compared to normal tissue, ISCU was significantly downregulated in 9 tumour microarray experiments; data for relative expression units, normalised to median z-score to enable comparison across multiple studies, were downloaded and graphed. p < 0.05 for all, Student's t-test. Found at: doi:10.1371/journal.pone.0010345.s005 (0.97 MB TIF)
Figure S5 Hypoxia regulates ISCU via HIF1a induction of miR-210. miR-210 represses the ISCU 3'-UTR and reduces ISCU protein. Reduced Fe-S assembly in the mitochondrial electron transport chain results in inhibition of major sites of electron transfer. This is a key mechanism for generating ROS, which have both positive and negative effects in hypoxia. Reduction of aconitase activity inhibits the Krebs cycle, resulting in an increase in glycolysis and lactate production. Aconitase 1 (cytosolic aconitase), when depleted of Fe-S, becomes an iron regulatory protein, IRP1, binding to iron response elements in the 3'-UTR of the transferrin receptor. This stabilises the mRNA, increasing transcription and enhancing iron uptake. Other Fe-S proteins are potential targets. Found at: doi:10.1371/journal.pone.0010345.s006 (0.60 MB TIF)
|
2014-10-01T00:00:00.000Z
|
2010-04-26T00:00:00.000
|
{
"year": 2010,
"sha1": "6272c42c45512b06891ee83b9bb17c56ef05c048",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0010345&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6272c42c45512b06891ee83b9bb17c56ef05c048",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Biology",
"Medicine"
]
}
|
198709974
|
pes2o/s2orc
|
v3-fos-license
|
Orientational order parameter of liquid crystalline nanocomposites by Newton’s rings and image analysis methods
Liquid crystalline nanocomposites are prepared by dispersing TiO2, ZnO, Fe2O3 and Fe3O4 nanoparticles separately in 4-Cyano 4’-Propoxy-1, 1’-Biphenyl (3O-CB) liquid crystal in a 1:100 ratio. The characteristic textures exhibited are captured at different liquid crystalline phases by using POM. The phase transition temperatures are measured by both polarizing optical microscope (POM) and differential scanning calorimeter (DSC). The optical textures are analyzed by using MATLAB software to compute birefringence and order parameter of samples. The birefringence and order parameter also measured by conventional Newton’s rings technique, the results are discussed.
Introduction
Liquid crystal technology has a major effect on many areas of science and technology. Applications of these materials continue to be discovered and provide effective solutions to many different problems. For modern industrial applications, a wide temperature range of the liquid crystal phase, high optical and dielectric anisotropy and a fast switching time are required. Composing liquid crystalline mixtures and dispersing guest materials in host liquid crystals are two basic methods for obtaining liquid crystals with enhanced properties. Metal oxide nanoparticles are a novel type of guest material; doping such nanomaterials into liquid crystals enhances the properties of the host, and different types of metal oxide nanoparticles have been used for this purpose [1].

Liquid crystals, being anisotropic media, provide good support for the self-assembly of nanomaterials into large organized structures in multiple dimensions. Therefore, doping of nanoparticles into liquid crystals has emerged as a fascinating area of applied research. Nano-objects (guests) embedded in the liquid crystals (hosts) can trap free ions, lowering the ion concentration and electrical conductivity, and improve the electro-optical response of the host [2].

Incorporation of metal oxide nanoparticles into liquid crystals makes it easier to obtain better display parameter profiles [3]. Metal oxide nanoparticles embedded in liquid crystal bases have attracted much interest not only in the field of magnetic recording media but also in the area of medical care. Medical applications, which include radio frequency, hyperthermia, photomagnetic and magnetic resonance imaging, cancer therapy, sensors and high-frequency applications, have been reported [4][5][6][7].

In the present work, an effort has been made to study the effect of metal oxide nanoparticles on the orientational order parameter of 4-Cyano 4'-Propoxy-1,1'-Biphenyl liquid crystal by image analysis and Newton's rings techniques. There are several techniques for studying the temperature dependence of liquid crystal properties [8][9][10][11], but they involve technical difficulties in measuring the required parameters.

In this paper, we have explored an image analysis and computer program technique to find the orientational order parameter of liquid crystalline nanocomposites. In the image analysis technique, textures of the liquid crystal samples are captured from the crystalline to the isotropic phase using POM. The changes in textural features as a function of temperature are used to compute the thermo-optical properties of the liquid crystals. By this technique, it is possible to extract as much information as possible from the textural image by applying computational algorithms to the image data or intensity values. MATLAB software [12,13] is used for the analysis of the liquid crystal textures and to estimate the orientational order parameter.
Materials and methods
In the present work, 4-Cyano 4'-Propoxy-1,1'-Biphenyl liquid crystal was purchased from TCI, Ltd., and ZnO, TiO2, Fe3O4 and Fe2O3 nanoparticles were obtained from the VTU-PG centre, Muddenahalli, Chikkaballapura District, Bengaluru, India. The liquid crystalline nanocomposites were prepared by the sonication method and their compositions are given in Table 1. Textural features were studied using POM to confirm the liquid crystalline behavior, and the transition temperatures were then measured by DSC for reliable information. Optical parameters such as the birefringence and order parameter of the nanocomposites were studied using the conventional Newton's rings method as well as computational methods implemented in a MATLAB program.
Polarizing optical microscope
The liquid crystalline nanocomposites are characterized by different liquid crystalline phases owing to the changes in local molecular order with temperature [14]. The characterization of these mesophases provides very important information on the patterns and textures of LCs. The transition temperatures and optical textures observed by polarizing optical microscopy are shown in Figs. 1-5. As a representative case, the schlieren texture of the nematic phase appears from the isotropic phase at 61.5 °C, grows into a curved-brushes texture at 54 °C, and finally at 47.5 °C a crystalline texture forms, which is not transparent and hence looks dark; all of these textures are represented in the corresponding set of figures.

The dispersion of nanoparticles in the liquid crystal influenced the textural features of the sample at the different phases, depending on temperature and nanomaterial, as observed in the POM textures. The surface-to-volume ratio of the sample liquid crystal increases due to surface restructuring by nanoparticle dispersion, and this molecular restructuring varies as a function of temperature and nanoparticle composition. It is observed that the orientational order of the liquid crystal molecules is strengthened strongly by the nanoparticles at the isotropic-to-nematic transition and less so in the crystalline phase, with a shift of the transition temperature; this is attributed to the increase in surface area and decrease in volume. It is also observed that the molecular weight of the nanoparticles plays a vital role in restructuring the liquid crystal molecules, and the POM observations show that this restructuring leads to defective textural images at various instances. The phase transition temperatures of the liquid crystalline nanocomposites observed by POM and DSC are reduced by the dispersion of nanoparticles: ZnO, TiO2, Fe3O4 and Fe2O3 reduce the transition temperature by 1 °C, 1 °C, 1 °C and 2 °C, respectively.
Differential scanning calorimeter (DSC) studies
The thermal analysis by DSC study provides data regarding the temperatures and heat capacity of different phases. DSC study reveals presence of phase transition in materials by detecting the enthalpy change associated with each phase transition. DSC study is used in conjunction with optical polarizing microscopy to determine the mesophase types exhibited by the materials. The different thermograms of liquid crystalline nanocomposites are recorded from DSC as shown in Figs. 6, 7.
Birefringence studies by Newton's rings method
The experimental setup consists of a plano-convex lens of small radius of curvature (13 mm) and a plane glass plate, placed in a hot stage connected to a specially designed microcontroller-based temperature-control and image-capturing device. The LC sample is introduced between the glass plate and the lens, and the polarizer and analyzer are set in the crossed position. The hot stage with the LC sample mount is placed on the microscope stage, and the hot stage axis is adjusted to coincide with the microscope axis. The reflector of the microscope is set so that light passes through the LC sample until clear Newton's rings are formed on the monitor. These rings are formed by the interference of the ordinary and extraordinary rays after passing through the analyzer. The diameters of the various rings were measured. The experimental setup is shown in Fig. 8 and the ring pattern in Fig. 9. The optical path difference between the e-ray (extraordinary ray) and the o-ray (ordinary ray) is y δn, where y is the LC layer thickness at the ring radius; for a bright fringe of order k and wavelength λ,

y δn = kλ. (1)

The lens geometry gives

y = x²/(2R), (2)

where x is the radius of the ring and R the radius of curvature of the lens used. From equations (1) and (2), and since 2Rλ = c, the cell constant for the given wavelength of light,

δn = 2Rλ k / x² = c k / x².

As the temperature decreases, the birefringence δn increases. The method adopted for estimating the orientational order parameter S from δn is that of Kuczynski et al., S = δn/∆n, where ∆n is the birefringence in the perfectly ordered (crystalline) state, obtained by the linear regression method shown in Figs. 14-16 [15,16].
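A small numerical sketch of this Newton's-rings calculation is given below, assuming the relations reconstructed above (y δn = kλ, y = x²/2R, hence δn = ck/x² with c = 2Rλ) and the Kuczynski-type normalisation S = δn/∆n. The wavelength, ring radii and ∆n values are illustrative placeholders, not measured data.

```python
import numpy as np

R = 13e-3          # radius of curvature of the plano-convex lens (m)
lam = 589e-9       # wavelength of the light used (m); an assumed value
c = 2 * R * lam    # cell constant c = 2*R*lambda

k = np.arange(1, 6)                                     # bright-ring orders
x = np.array([0.32, 0.45, 0.55, 0.64, 0.72]) * 1e-3     # illustrative ring radii (m)

delta_n = c * k / x**2       # birefringence estimated from each ring
delta_n_mean = delta_n.mean()

delta_n0 = 0.20              # birefringence at perfect order (from extrapolation); assumed
S = delta_n_mean / delta_n0  # orientational order parameter
print(delta_n_mean, S)
```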
Birefringence and order parameter by image analysis
Phase transitions are characterized by abrupt changes, discontinuities, breaking of symmetry and strong fluctuations of the molecules in a compound. The identification of the transition temperatures is essential to study the physical properties of LC materials. The transition also indicates the transformation from an ordered phase to a relatively disordered phase, and vice versa, as the temperature is raised or lowered [17,18].

The behavior of light as a function of temperature is described by the thermo-optical parameters, of which the optical birefringence and the order parameter are the most important. Image analysis is the extraction of meaningful information from images (textures) by applying computational techniques and algorithms to image data. The image analysis technique computes statistics and measurements based on the grey-level intensities of the image pixels. In the present work, the optical birefringence and order parameter are computed from the optical textures of the samples as a function of temperature by the image analysis technique.
The birefringence of the liquid crystals was measured as a function of temperature by substituting the thickness d of the liquid crystalline sample layer and the wavelength λ of the analysed colour into the crossed-polariser transmission relation [8], I = I0 sin²(π d δn / λ), so that δn = (λ / π d) sin⁻¹√(I/I0), where d is the thickness of the liquid crystal layer, I0 is the intensity of light observed when there is no sample (liquid crystal layer) between the light source and the lens, and I is the intensity of light observed when the sample (liquid crystal layer) is present.
The temperature-dependent birefringence values of the samples are used to calculate the order parameter using the Kuczynski relation S = δn/∆n given above [16,19], where ∆n is the birefringence in the perfectly ordered (crystalline) state, obtained by the linear regression method using the Newton's rings experiment.
In the image analysis technique, the optical textures, the thickness of the liquid crystal layer (d) and the birefringence in perfect order (∆n) are given as input to obtain the birefringence and order parameter. The birefringence and order parameter evaluated at the different liquid crystalline phases by the image analysis and Newton's rings methods are presented in Tables 3-7. The order parameter values are found to be the same by both methods and decrease with increasing temperature; the temperature variation of the order parameter is depicted in Fig. 17.
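For comparison, a sketch of the image-analysis route is given below, assuming the crossed-polariser relation I = I0 sin²(π d δn / λ) used above. The layer thickness, wavelength, ∆n and the random array standing in for a POM texture are all assumptions; a real analysis would read the captured texture image instead.

```python
import numpy as np

lam = 550e-9       # effective wavelength of the analysed colour (m); assumed
d = 1e-6           # liquid-crystal layer thickness (m); assumed
delta_n0 = 0.20    # birefringence at perfect order, from the Newton's-rings extrapolation; assumed

I0 = 255.0                                                            # reference intensity (no sample)
texture = np.random.default_rng(1).uniform(60, 200, size=(480, 640))  # stand-in for a POM image

ratio = np.clip(texture / I0, 0.0, 1.0)
delta_n = (lam / (np.pi * d)) * np.arcsin(np.sqrt(ratio))  # per-pixel birefringence
S = delta_n.mean() / delta_n0                              # order parameter at this temperature
print(delta_n.mean(), S)
```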
Conclusion
The advantage of the image analysis method is that it is simple, less complex, efficient and reliable for this type of study; unlike other techniques, no separate experimental setup is needed beyond the POM. By conventional techniques, the order parameter can be estimated only in the nematic and smectic phases; by the image analysis method it can also be evaluated in the crystalline phase, i.e. in all liquid crystalline phases (nematic, smectic and crystalline), whereas the Newton's rings method is restricted to the nematic and smectic phases. Due to the dispersion of nanoparticles, the birefringence anisotropy increases. Therefore, the viewing angle increases, which can be most advantageous in liquid crystal display devices for producing large-panel LC displays with good depth.
|
2019-07-26T12:37:16.383Z
|
2019-06-30T00:00:00.000
|
{
"year": 2019,
"sha1": "838820da18c66972895bf8a21e113b4343c0c88a",
"oa_license": null,
"oa_url": "https://doi.org/10.17586/2220-8054-2019-10-3-243-254",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "e236d96a1c4e104d108669beb1ea1b553a12129c",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
}
|
119706549
|
pes2o/s2orc
|
v3-fos-license
|
Counting Multiplicities in a Hypersurface over a Number Field
We fix a counting function of multiplicities of algebraic points in a projective hypersurface over a number field, and take the sum over all algebraic points of bounded height and fixed degree. An upper bound for the sum with respect to this counting function will be given in terms of the degree of the hypersurface, the dimension of the singular locus, the upper bounds of height, and the degree of the field of definition.
Introduction
In this paper, we consider a problem of counting multiplicities in projective schemes. More precisely, let k be a field and X be a scheme of finite type over Spec k; we are interested in estimating the sum $\sum_{\xi \in S^k_X} f(\mu_\xi(X))$, where $S^k_X$ is a subset of X(k) satisfying some conditions, f(.) is a positive function, and μ_ξ(X) is the multiplicity of ξ in X, defined via the local Hilbert-Samuel function of X at ξ as follows.
We say that X is a pure dimensional scheme (or X is of pure dimension) if all its irreducible components have the same dimension. Let X be of pure dimension, and ξ ∈ X be a point. Consider the local ring O_{X,ξ}, whose maximal ideal is m_ξ and residue field is κ(ξ). The local Hilbert-Samuel function of X at ξ is given as $H_\xi(m) = \dim_{\kappa(\xi)} \mathfrak{m}_\xi^m/\mathfrak{m}_\xi^{m+1}$, defined for all m ∈ N_+. Suppose dim(O_{X,ξ}) = t ≥ 1; then there exists a polynomial P_ξ(T) of degree t - 1 such that H_ξ(m) = P_ξ(m) when m is large enough. In addition, there exists an integer μ_ξ(X) ≥ 1 such that the leading coefficient of P_ξ(T) equals $\mu_\xi(X)/(t-1)!$. We define the integer μ_ξ(X) as the multiplicity of the point ξ in X. In particular, if the point ξ is regular in X, which means that O_{X,ξ} is a regular local ring, then we have μ_ξ(X) = 1.
If we take the counting function f as the constant function f ≡ 1, then this problem reduces to the classical problem of counting algebraic points on the scheme X. There have been many literatures about this problem hitherto. If we take f to be a nontrivial function, and in addition we require f (1) = 0, then this problem will be a question about the complexity of the singular locus of X.
1.1. Known results. - First we consider the case where X is a reduced plane curve of degree δ. By Exercise 5-22 on page 115 of [7], we have
(1.2) $\sum_{\xi \in X} \mu_\xi(X)(\mu_\xi(X) - 1) \le \delta(\delta - 1)$,
which is obtained from Bézout's Theorem in intersection theory. In addition, let g(X) be the genus of X; if X is geometrically integral, Corollary 1 on page 201 of [7] gives a corresponding bound involving g(X), deduced from the Riemann-Roch Theorem for plane curves. More generally, let X ֒→ P^n_k be a projective hypersurface over an algebraically closed field k of characteristic 0 whose singular locus is of dimension 0. Through the method of the Lefschetz pencil, a direct corollary of [15, Corollaire 4.2.1] gives the inequality $\sum_{\xi \in X} \mu_\xi(X)(\mu_\xi(X) - 1)^{n-1} \le \delta(\delta - 1)^{n-1}$.
But the condition that the singular locus is of dimension 0 is too restrictive for a general counting problem. In general, the sum on the left-hand side of the above inequality depends on the choice of the base field k.
In [17, Théorème 5.1], the second author of this paper obtained a result of this type over finite fields. More precisely, let n ≥ 2, δ ≥ 1, s ≥ 0 be three integers, and F_q be the finite field with exactly q elements. He proved that the estimate
(1.3) $\sum_{\xi \in X(\mathbb{F}_q)} \mu_\xi(X)(\mu_\xi(X) - 1)^{n-s-1} \ll_n \delta^{n-s} \max\{\delta - 1, q\}^{s}$
holds uniformly for all reduced hypersurfaces X of degree δ of P^n_{F_q} whose singular locus is of dimension s. In the above formula we have used Vinogradov's symbol ≪ in its usual sense: let Ω and P be two sets, and Ω' be a subset of Ω × P. Suppose that f(x, y) and g(x, y) are two real-valued functions defined on Ω', where x ∈ Ω and y ∈ P. Then the expression f(x, y) ≪_y g(x, y) means that there exists a non-negative function C(.) on the set P such that |f(x, y)| ≤ C(y)|g(x, y)| holds for every (x, y) ∈ Ω'. Some examples are given in [17] to show that the orders of δ and max{δ - 1, q} in (1.3) are both optimal when q ≥ δ - 1. This estimate is obtained by the technique of the intersection tree introduced in [17, §2.1] via the intersection theory on projective spaces.
1.2. Principal Result. - In this paper, we consider a sum of the same type as in (1.3) over number fields. More precisely, we take the sum over all the algebraic points of bounded height in a hypersurface of a projective space whose fields of definition are of fixed degree over the base field. By Northcott's property (cf. [14, Theorem B.2.3]), this is a finite set, hence the sum always makes sense. The principal result (Theorem 4.5) is stated as follows:
Theorem 1.1. - Let K be a number field, n ≥ 2 be an integer, and h(.) be the absolute logarithmic height function on P^n_K. For any closed subscheme X of P^n_K, any D ∈ N_+, and any B ≥ 1, let S(X; D, B) be the set of closed points ξ of X whose residue field has degree D over K and whose height is bounded in terms of B (see §3.1). Let δ and s be integers such that δ ≥ 1 and s ≥ 0. Then an inequality bounding $\sum_{\xi \in S(X;D,B)} \mu_\xi(X)(\mu_\xi(X) - 1)^{n-s-1}$ in terms of the sets Z_t holds for all reduced hypersurfaces X of degree δ of P^n_K whose singular locus is of dimension s, where, for t ∈ {0, . . . , s}, Z_t is a set of closed subschemes of X of dimension s - t, whose construction will be explained in §4.2.
We keep all the notation of Theorem 1.1. If we want to obtain an upper bound for the sum $\sum_{\xi \in S(X;D,B)} \mu_\xi(X)(\mu_\xi(X) - 1)^{n-s-1}$ through Theorem 1.1 for all X satisfying the above conditions, it is important to understand the term which originates from the classical problem of counting algebraic points, or of counting rational points in the case D = 1.
We have the following corollary of Theorem 1.1 for the case of K = Q.
holds uniformly for all reduced singular hypersurfaces X of degree δ of P^n_Q whose singular locus is of dimension s.
Moreover, we can construct some examples (for instance, Example 4.8) to show that for all X considered in Theorem 1.1, the exponents of δ and max{B, δ - 1} in Corollary 1.2 are both optimal when B ≥ δ - 1. We will also explain (Remark 4.6) that the consideration in Theorem 1.1 is necessary.
1.3. Principal Tools. -We shall follow the construction of intersection trees introduced in [17, §2.1] to control the multiplicities of singular points. We construct a series of intersections over P n K , and cut X into several irreducible components. The multiplicity of each irreducible component can be bounded by its multiplicity in the intersection trees. Different from techniques used in [17] over a finite field, we work over a number field in this paper, whose cardinality is infinite. Consequently this allows us to work over the original base field directly, and we do not need to take a finite extension of the base field in order to make sure that we can construct useful auxiliary schemes, and then descend it back to the original base field.
Meanwhile, we need to consider the number of rational points and algebraic points of bounded height. Since we require that the constant in the estimate in Corollary 1.2 only depends on n, we need a uniform estimate of the number of algebraic points of bounded height in arithmetic varieties, which has a weak dependance on the degrees of varieties. This paper is organized as follows: in §2, we introduce the technique of intersection tree in [17]. In §3, we recall some useful results on counting rational points and algebraic points, and we consider a uniform estimate of rational points of bounded height over Q, which is a generalization of [22,Theorem 1] and [5,Theorem 3.1]. In §4, we give an upper bound of this multiplicity-counting problem as a function of intersection trees, and we give a uniform upper bound of it via a generalized Schanuel's estimate.
Acknowledgment. -We would like to thank Dr. Yang Cao and Dr. Enlin Yang for some useful suggestions on some technical details in this paper, and we would like to thank the anonymous referees for their useful comments and suggestions. Chunhui Liu is supported by JSPS KAKENHI Grant Number JP17F17730.
Operations over intersection trees
In this section, we recall the notion of intersection tree in the settings of graph theory and some useful properties of it. These are introduced in [17]. We fix a base field k throughout this section.
2.1. Preliminaries of intersection theory. -Let X be a projective scheme and ξ ∈ X. In (1.1), we have defined the multiplicity of the point ξ in X, noted by µ ξ (X). In addition, if M is an integral closed subscheme of X whose generic point is ξ M , we define the multiplicity of M in X as µ ξM (X), noted by µ M (X) for simplicity.
In the following, we will recall some useful notions and properties of the intersection theory. We will follow the strategy of [24] and [8].
Let Y be a separated regular k-scheme of finite type, r ≥ 2 be an integer, and X_1, . . . , X_r be pure dimensional closed subschemes of Y. We denote by C(X_1 · . . . · X_r) the set of irreducible components of the intersection product X_1 · . . . · X_r. Let X be a pure dimensional closed subscheme of Y; we denote by C(X) the set of irreducible components of X. If not specially mentioned, each element of C(X_1 · . . . · X_r) and C(X) is considered to be an integral closed subscheme of Y. For M ∈ C(X_1 · . . . · X_r), we denote by i(M; X_1 · . . . · X_r; Y) the intersection multiplicity of the intersection product X_1 · . . . · X_r at M, and we refer readers to [8, Chapters 7 and 8] for its definition.
If the equality dim(M) = dim(X_1) + · · · + dim(X_r) - (r - 1) dim(Y) holds and the intersection is not empty, we say that X_1, . . . , X_r intersect properly at M in Y, and that M is a proper component of the intersection product X_1 · . . . · X_r in Y. If X_1, . . . , X_r intersect properly at all the irreducible components of the intersection product, we say that X_1, . . . , X_r intersect properly.
Bézout's Theorem. - Let Y be a regular projective k-scheme and L be an ample invertible O_Y-module. If X is a closed subscheme of Y, we denote by deg_L(X) the degree of X with respect to the invertible O_Y-module L, which is defined as the intersection number $\deg\big(c_1(\mathscr{L})^{\dim X} \cap [X]\big)$. When Y = P^n_k and L = O(1), we write the degree as deg(X) for simplicity.
Bézout's Theorem is a description of the complexity of a proper intersection in P^n_k in terms of degrees with respect to the universal bundles.
Theorem 2.1 (Bézout's Theorem). - Let X_1, . . . , X_r be a family of closed pure dimensional subschemes of P^n_k which intersect properly. Then we have
$\sum_{M \in C(X_1 \cdot \ldots \cdot X_r)} i(M; X_1 \cdot \ldots \cdot X_r; \mathbb{P}^n_k)\, \deg(M) = \prod_{i=1}^{r} \deg(X_i).$
We refer readers to [8, Proposition 8.4] for more details. See also equality (1) on page 145 of [8].
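A quick way to see Bézout's Theorem at work for two plane curves is to eliminate one variable with a resultant: in favourable cases (a proper intersection with no common points at infinity), the resultant's degree equals the product of the degrees. The SymPy sketch below uses an affine conic and cubic chosen for illustration; it is not part of the paper's argument.

```python
import sympy as sp

x, y = sp.symbols("x y")

# A conic and a cubic in the affine plane; Bezout predicts 2 * 3 = 6 common
# points counted with multiplicity.
f = x**2 + y**2 - 1        # degree 2
g = y - x**3               # degree 3

res = sp.resultant(f, g, y)           # eliminate y
print(sp.degree(res, x))              # 6 = deg(f) * deg(g)
print(sp.Poly(res, x).nroots())       # x-coordinates of the six intersection points
```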
2.2. Definition of intersection tree. - Let Y be a regular separated k-scheme and L be an ample invertible O_Y-module. Let δ ≥ 1 be an integer. We call a directed rooted tree T with labelled vertices and weighted edges an intersection tree of level δ over Y if it satisfies the following conditions:
1. the vertices of T are occurrences of integral closed subschemes of Y (an integral closed subscheme of Y can appear several times in a tree);
2. each vertex X of T is attached with a label X̃, which is a pure dimensional closed subscheme of Y or empty;
3. a vertex of T is a leaf if and only if its label is empty;
4. if X is a vertex of T which is not a leaf, then
- its label X̃ satisfies the inequality deg_L(X̃) ≤ δ, and the closed subschemes X and X̃ intersect properly in Y;
- the children of X are precisely the irreducible components of the intersection product X · X̃ in Y;
- for each child Z of X, the edge ℓ which links X and Z is attached with a weight w(ℓ) equal to the intersection multiplicity i(Z; X · X̃; Y).
For every fixed intersection tree T, we call any complete subtree of T a sub-intersection tree, which is necessarily an intersection tree.
Weight of a vertex. -Let Y be a regular separated scheme over Spec k, equipped with an ample invertible sheaf L , and T be an intersection tree over Y . For each vertex X of T , we define the weight of X as the product of the weights of all edges in the path which links the root of T and the vertex X, denoted as w T (X). If X is the root of an intersection tree, we define w T (X) = 1 for convenience.
Weight of an integral closed subscheme. -Let Z be an integral closed subscheme of Y . We define the weight of Z relative to the tree T as the sum of the weights of all the occurrences of Z as vertices of T , noted by W T (Z). If Z does not appear in the tree T as a vertex, for convenience the weight W T (Z) is defined to be 0. Let Z be a vertex in the intersection tree T . When we write W T (Z), the symbol Z is considered as an integral closed subscheme of Y . In other words, we count all the occurrences of the subscheme Z in the intersection tree T .
Example of intersection trees. -We refer the readers to [17, Exemple 3.2] as an example of the notion of intersection tree.
Estimate of weights of intersection trees. -
In order to estimate the weights in intersection trees, we first introduce the following result.
Theorem 2.2. - Let X_1, . . . , X_r be a family of closed pure dimensional subschemes of P^n_k which intersect properly in P^n_k. For each irreducible component C ∈ C(X_1 · . . . · X_r), let T_C be an intersection tree whose root is C. We consider a vertex M in the intersection trees T_C such that, whenever M is a proper subscheme of a vertex Z, there exists an occurrence of M as a descendant of Z. Then the corresponding bound on the weights of M holds.
Keeping all the notation of Theorem 2.2, we introduce the following notions.
- Let s be a non-negative integer. We define C_s as the set of all vertices of depth s in the intersection trees T_C, where C ∈ C(X_1 · . . . · X_r). In addition, we define a subset Z_s of C_s for each non-negative integer s as below. By definition, we have Z_0 = C_0 = C(X_1 · . . . · X_r). In fact, Theorem 2.2 is satisfied for every element of Z_*.
Definition 2.5. - Let s be a non-negative integer. We denote by C'_s (resp. Z'_s, C'_* and Z'_*) the set of the labels of the elements of C_s (resp. Z_s, C_* and Z_*).
The following proposition is a corollary of Theorem 2.2, which is proved via Theorem 2.1.
Proposition 2.6 (Proposition 4.6, [17]). - With all the above notation and the conditions of Theorem 2.2, suppose that all the non-empty elements of C'_* have the same dimension. Then the weighted degree estimate of [17, Proposition 4.6] holds, with the conventions introduced there.
Counting algebraic points in arithmetic varieties
Let K be a number field. In order to describe the arithmetic complexity of the closed points in P n K , we introduce the following height function.
3.1. Definition of height functions. - Let K be a number field, K̄ be an algebraic closure of K, and M_K be the set of all places of K. For every element x ∈ K and every place v ∈ M_K, we write |x|_v for the corresponding normalised absolute value. Let ξ ∈ P^n_K(K̄) be a closed point and K' be any field such that [K' : K] < +∞ and ξ ∈ P^n_K(K'). We write a K'-rational homogeneous coordinate of ξ as [x_0 : · · · : x_n]. We define the absolute logarithmic height of the point ξ as
$h(\xi) = \frac{1}{[K':\mathbb{Q}]} \sum_{v \in M_{K'}} [K'_v : \mathbb{Q}_v] \log\max\{|x_0|_v, \dots, |x_n|_v\},$
which is independent of the choice of the projective coordinates by the product formula (cf. [20, Chap. III, Proposition 1.3]).
We can prove that h(ξ) is independent of the choice of the field K'. If ξ is an algebraic point of P^n_K valued in a number field K' containing K, we define the relative multiplicative height of the point ξ to be $H_{K'}(\xi) = \exp([K':\mathbb{Q}]\, h(\xi))$. When considering the closed points of a subscheme X of P^n_K with the immersion φ : X ֒→ P^n_K, we define the height of ξ ∈ X(K̄) to be h(ξ) := h(φ(ξ)).
We shall use this notation when there is no confusion about the immersion morphism φ. Let B ≥ 1, D ∈ N_+, and X be the subscheme of P^n_K defined above. We denote by
$N(X; D, B) = \#\{\xi \in X \text{ closed point} \mid [K(\xi):K] = D,\ H_{K(\xi)}(\xi) \le B\},$
where K(ξ) is the residue field of ξ in P^n_K. In particular, we denote N(X; B) = N(X; 1, B) for simplicity. We also denote by S(X; D, B) the corresponding set of closed points, so that N(X; D, B) = #S(X; D, B). For the problem of counting rational points or algebraic points, it is essential to understand the functions N(X; B) and N(X; D, B) in the variables B and D. There are fruitful results on this topic, and we will introduce some which are useful in the multiplicity-counting problem.
Let B ≥ 1, D ∈ N_+ and X ֒→ P^n_K be a projective scheme. With all the notation above, it is natural to consider the density of algebraic points via properties of N(X; D, B) and N(X; B). First we consider the case where X = P^n_K.
3.2.1. The density of rational points of projective spaces. - For N(P^n_K; B), we have the asymptotic estimate $N(\mathbb{P}^n_K; B) \sim \alpha(K, n) B^{n+1}$ for all n ∈ Z_+, where the constant α(K, n) is articulated in the paper of S. Schanuel [22, Theorem 1]. Consider the case K = Q. Let ξ ∈ P^n_Q(Q); we take a primitive projective coordinate of ξ as [ξ_0 : · · · : ξ_n], which means each ξ_i ∈ Z and gcd(ξ_0, . . . , ξ_n) = 1. In this case, we have $H(\xi) = \max\{|\xi_0|, \dots, |\xi_n|\}$, where |.| is the usual absolute value. In addition, we have $N(\mathbb{P}^n_{\mathbb{Q}}; B) \sim \frac{2^n}{\zeta(n+1)} B^{n+1}$ for all n ∈ N_+, where ζ(.) is the usual Riemann zeta function. We refer to [5, Theorem 1.2] for a proof, which is simpler than that of [22, Theorem 1]. In this case, we also have the following explicit uniform estimate of N(P^n_Q; B).
Lemma. - The estimate $N(\mathbb{P}^n_{\mathbb{Q}}; B) \le (2B+1)^{n+1}$ holds for all B ≥ 1 and n ∈ N_+.
Proof. - We consider the set $R(\mathbb{A}^{n+1}_{\mathbb{Z}}; B) = \{(x_0, \dots, x_n) \in \mathbb{Z}^{n+1} \mid |x_i| \le B \text{ for } i = 0, \dots, n\}$. Because there are at most 2B + 1 integers whose absolute values are at most B, we have $\#R(\mathbb{A}^{n+1}_{\mathbb{Z}}; B) \le (2B+1)^{n+1}$. In addition, we have $N(\mathbb{P}^n_{\mathbb{Q}}; B) \le \#R(\mathbb{A}^{n+1}_{\mathbb{Z}}; B)$. So we get the result.
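The counting described here is easy to reproduce numerically. The sketch below enumerates primitive integer coordinate vectors with entries bounded by B, identifies antipodal vectors, and compares the count for n = 1 with the asymptotic 2^n B^{n+1}/ζ(n+1) recalled above. The brute-force enumeration and the chosen values of B are for illustration only.

```python
from itertools import product
from functools import reduce
from math import gcd, pi

def count_points(n, B):
    """Number of points of P^n(Q) with height max|x_i| <= B: primitive
    integer coordinate vectors, identified up to an overall sign."""
    count = 0
    for x in product(range(-B, B + 1), repeat=n + 1):
        if any(x) and reduce(gcd, (abs(v) for v in x)) == 1:
            count += 1
    return count // 2  # (x_0:...:x_n) and (-x_0:...:-x_n) are the same point

zeta2 = pi**2 / 6      # zeta(2), for the n = 1 comparison
for B in (5, 10, 20):
    print(B, count_points(1, B), round(2 * B**2 / zeta2, 1))
```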
3.2.2. The density of algebraic points of projective spaces. - We have discussed N(P^n_K; B) = N(P^n_K; 1, B) above for the case of rational points. For N(P^n_K; D, B) with arbitrary D ∈ N_+, the situation is very different. Until now, to the authors' knowledge, there is no optimal asymptotic estimate of N(P^n_K; D, B) for general n, D and K. We only have some partial results for those n, D and K satisfying certain conditions.
Let A(K, n, D) be a family of positive constants depending on n, D ∈ N_+ and the number field K. First we consider the case n = 1, in which case P^n_K is a projective line. In this case, we have $N(\mathbb{P}^1_K; D, B) \sim A(K, 1, D) B^{D+1}$ for all D ∈ N_+ and all number fields K; see [19] or [16, Théorème 5.1] for a proof, where the constant A(K, 1, D) is explicitly given.
Higher-dimensional cases are more complicated. When n ≥ 3, we have N(P^n_K; 2, B) ∼ A(K, n, 2)B^{n+1} for all B ≥ 1 and an arbitrary number field K ([11, Theorem 1.2.1]), where the constant A(K, n, 2) is given explicitly (loc. cit.). The case K = Q is treated in [23].
For the cases of higher extension degrees D ∈ N_+, we have the following uniform estimate (Theorem 3.3), which holds for all pure dimensional closed subschemes X of P^n_Q of dimension d and degree δ.
In order to prove it, we will introduce auxiliary results. First, we introduce the following definition.
Definition 3.4. -Let k be a field, and X be a closed subscheme of A n k . We define the degree of X in A n k to be the degree of its projective closure in P n k . The degree of X defined above is denoted by deg(X) if there is no confusion.
By Definition 3.4, we have the following result; its proof goes as follows. Here C(X · L) is the set of irreducible components of the intersection X · L, and i(Z; X · L; P^n_k) is the intersection multiplicity of X · L at Z. For each Z ∈ C(X · L), let a(Z) be the restriction of Z to A^n_k as above, where we set deg(a(Z)) = 0 if a(Z) = ∅. By Definition 3.4, deg(a(Z)) is at most deg(Z), and since each intersection multiplicity is at least 1, the result follows.
We need the following lemma about the Krull dimension of a topological space. We refer the reader to its definition in [18, Definition 2.5.1].
Lemma 3.6. - Let k be a field, and X be a non-empty closed irreducible subset of the affine space A^n_k whose dimension is d, where d ≥ 0. Then X has no proper closed subset of dimension d. A proper subset of X means a subset of X which is not equal to X itself.
Proof. - We suppose that X has a proper irreducible closed subset X' of dimension d. Let X' = X_0 ⊋ X_1 ⊋ · · · ⊋ X_d be a sequence of non-empty irreducible closed subsets of X'. Then we have the following sequence of non-empty irreducible closed subsets of X: X ⊋ X_0 ⊋ X_1 ⊋ · · · ⊋ X_d, which shows that the dimension of X is at least d + 1. This leads to a contradiction.
Next, we prove a lemma (Lemma 3.7) about the intersection of affine schemes with the hyperplanes H(T_α = a). Suppose that for each α ∈ {1, . . . , n} we can find an element a ∈ k such that X ∩ H(T_α = a) is not a proper intersection, which means that dim(X ∩ H(T_α = a)) = d by definition.
The set X ∩ H(T α = a) is a closed subset of X and H(T α = a) by the definition of topological space. By Lemma 3.6, there is no proper closed subset of X whose dimension is d since the scheme X is irreducible and dim(X) = d. From the fact dim(X ∩H(T α = a)) = d, we have X = X ∩H(T α = a). So we obtain X ⊆ H(T α = a).
From the above hypothesis, for all α ∈ {1, . . . , n}, there exists an element a ∈ k such that X ⊆ H(T_α = a). For every α ∈ {1, . . . , n}, we choose one of these elements in k, denoted by a_α. Then we have X ⊆ H(T_1 = a_1) ∩ · · · ∩ H(T_n = a_n). The scheme H(T_1 = a_1) ∩ · · · ∩ H(T_n = a_n) is the rational point of A^n_k whose affine coordinate is (a_1, . . . , a_n), so X is contained in this single point. This contradicts the fact that d ≥ 1. So we have proved the result.
Let φ : X ↪ A^n_Q be an arbitrary affine subscheme of A^n_Q; we then have a canonical diagram relating X, A^n_Q and A^n_Z, with π denoting the induced morphism. Definition 3.8. — With the above construction, we denote by X_φ(Z) the subset of X(Q) consisting of those ξ ∈ X(Q) (considered as Q-morphisms from Spec Q to X) whose composition with the canonical immersion φ : X ↪ A^n_Q comes from a Z-point of A^n_Z; in other words, X_φ(Z) = X(Q) ∩ π^{−1}(A^n_Z(Z)). We write X(Z) instead of X_φ(Z) if there is no confusion about the morphism φ. Proof. — We may suppose that X is irreducible; otherwise we count component by component.
We reason by induction on d to prove this lemma. If d = 1, by Lemma 3.7 there exists an index α ∈ {1, . . . , n} such that X intersects the hyperplane defined by T_α = a properly for any a; denoting this hyperplane by H_a, we obtain the corresponding bound directly. Next, suppose that d ≥ 2. In this case, by Lemma 3.7 we can find an index α ∈ {1, . . . , n} such that X intersects the hyperplane H_a defined by T_α = a properly for any a ∈ Q. For every a ∈ Z, the scheme X ∩ H_a then has dimension at most d − 1, and by Lemma 3.5 we have an inequality bounding the degrees of the components, where C(X ∩ H_a) is the set of irreducible components of X ∩ H_a.
By the induction hypothesis, we obtain a bound for each term N(X ∩ H_a; B) with B ≥ 1; on the other hand, the relation (3.8) allows us to combine these bounds.
There are at most 2B + 1 integers whose absolute values are smaller than B, so the result follows from the induction hypothesis.

3.4. Varieties of degree larger than 1. — Let X ↪ P^n_K be an integral closed projective scheme over the number field K, of dimension d and degree δ. In Theorem 3.3 we gave the optimal answer for the case δ = 1; in fact, the scheme X is isomorphic to P^d_K in that case. We now consider the cases in which the degree is greater than or equal to 2. In [13, Conjecture 2], D. R. Heath-Brown conjectured that, if d, δ and n are integers such that d ≥ 2, δ ≥ 3, n ≥ 4, and B ≥ 1 is a real number, then for any ε > 0 the estimate (3.9) N(X; B) ≪_{n,ε,K} B^{d+ε}, or a weaker one, (3.10) N(X; B) ≪_{n,ε,K,δ} B^{d+ε}, holds uniformly for every integral closed subscheme X of P^n_K of degree δ and dimension d. He also gave a proof for the case δ = 2. In order to attack this conjecture, T. Browning, D. R. Heath-Brown and P. Salberger have published several papers on this topic; see [1,2,3,4,21] for their works on this subject and [5, Chapter 3] for a survey. In their earlier work they imposed some technical conditions on X. When we consider the conjecture (3.10), another important issue is the order of δ in this estimate. This estimate will be useful for the multiplicity-counting problem, see §4.3.
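Before turning to multiplicities, here is a small numerical illustration (not part of the paper) of the hyperplane-slicing idea used in the affine counting argument of §3.3; the curve y = x^2 − 2 and the height convention max(|x|, |y|) ≤ B are assumptions chosen only for concreteness.

```python
# Toy illustration (not from the paper) of the hyperplane-slicing idea of Section 3.3:
# integral points of bounded height on the affine curve y = x^2 - 2 are counted by
# restricting to the hyperplanes x = a, one value of a at a time.
def count_integral_points(B: int) -> int:
    count = 0
    for a in range(-B, B + 1):      # at most 2B + 1 hyperplane slices x = a
        y = a * a - 2               # each slice meets the curve in a single point
        if abs(y) <= B:
            count += 1
    return count

for B in (10, 100, 1000):
    # growth stays within the bound O(B^d) with d = dim = 1 (here in fact ~ 2*sqrt(B))
    print(B, count_integral_points(B))
```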
Estimate of multiplicities in a hypersurface
In order to study the multiplicities in a projective hypersurface, we first recall some facts about the multiplicity of a point in a hypersurface. Let X ↪ P^n_k be the hypersurface defined by a non-zero homogeneous polynomial f of degree δ. The scheme X is a pure dimensional closed subscheme of P^n_k, and one can prove that X is of degree δ (cf. [12, Proposition 7.6, Chap. I]).
Let α ∈ [0, δ] ∩ N. We denote by T_α(f) the k-vector space spanned by all the partial derivatives of f of order α, i.e. those of the form ∂^{|I|} f / (∂T_0^{i_0} · · · ∂T_n^{i_n}) for I = (i_0, . . . , i_n) ∈ N^{n+1} with |I| = i_0 + · · · + i_n = α. These elements are homogeneous polynomials of degree δ − α.
The following proposition gives an explicit criterion for determining the multiplicity of a point in a hypersurface. Proposition 4.1 (cf. [17]). — Let k be an arbitrary field of characteristic 0, let X ↪ P^n_k be a hypersurface defined by an arbitrary non-zero homogeneous polynomial f of degree δ, let η ∈ X be an arbitrary point, and let α be an arbitrary integer in [0, µ_η(X) − 1]. Then for every non-zero g ∈ T_α(f), the point η is contained in the hypersurface X′ defined by g. By contrast, there exists a non-zero element g′ ∈ T_{µ_η(X)}(f) such that η is not contained in the hypersurface defined by g′.
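As a hedged illustration of this criterion (an assumed example, not taken from the paper), one can check the multiplicity of a point on a concrete hypersurface by evaluating the spanning sets T_α(f) with a computer algebra system:

```python
# Assumed example (sympy), not taken from the paper: the criterion of Proposition 4.1
# applied to the cuspidal cubic f = T1^3 - T0*T2^2 at eta = [1 : 0 : 0].
import sympy as sp
from itertools import combinations_with_replacement

T0, T1, T2 = sp.symbols('T0 T1 T2')
f = T1**3 - T0 * T2**2
eta = {T0: 1, T1: 0, T2: 0}

def order_alpha_derivatives(poly, variables, alpha):
    """A spanning set of T_alpha(f): all partial derivatives of order alpha."""
    return [sp.diff(poly, *combo) for combo in combinations_with_replacement(variables, alpha)]

for alpha in (1, 2):
    values = [g.subs(eta) for g in order_alpha_derivatives(f, (T0, T1, T2), alpha)]
    print(f"order {alpha} derivatives at eta: {values}")
# order 1: all values vanish, so eta lies on V(g) for every g in T_1(f);
# order 2: some value is non-zero, hence mu_eta(X) = 2.
```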
4.2. Construction of intersection trees. — By virtue of Proposition 4.1, we can construct a family of intersection trees to solve the multiplicity-counting problem. First, we introduce the following proposition (Proposition 4.2) to construct the roots of these intersection trees: there exist first-order partial derivatives g_1, . . . , g_{n−s−1} ∈ T_1(f) of f such that the equality dim(V(f) ∩ V(g_1) ∩ · · · ∩ V(g_{n−s−1})) = s is verified; in other words, V(f) ∩ V(g_1) ∩ · · · ∩ V(g_{n−s−1}) is a complete intersection.
Let K and f be the same as in Proposition 4.2, and let X = V(f) in the following argumentation. We denote by X_reg the regular locus of X and by X_sing the singular locus of X. Following the notation and conditions of Proposition 4.2, we write X_i for the hypersurface V(g_i), where i = 1, . . . , n − s − 1. By the Jacobian criterion (cf. [18, Theorem 4.2.19]), we have X_sing ⊆ X ∩ X_1 ∩ · · · ∩ X_{n−s−1}.
For every integral closed subscheme M of X, we denote by M^(a) the locus of points ξ in M whose multiplicities µ_ξ(X) are equal to µ_M(X), and by M^(b) the locus of points ξ in M whose multiplicities µ_ξ(X) are greater than or equal to µ_M(X) + 1. In addition, for an extension of fields L/K, we denote by M^(a)(L) the set of L-rational points of M^(a). Next, we construct a family of intersection trees {T_C}, where C ∈ C(X · X_1 · . . . · X_{n−s−1}). The root of the intersection tree T_C is C.
In order to construct the vertices whose depth is equal to or larger than 1, let M be a vertex already constructed in these intersection trees {T_C}, regarded as an integral closed subscheme of X, and consider the set M(K). For the construction that follows, all the mentioned labels are of dimension n − 1; hence all the vertices in C_w are of dimension s − w, where 1 ≤ w ≤ s is an integer. The construction terminates in finitely many steps.
The following lemma is a property of the set Z_* (see Definition 2.4), which will be useful in the proof of Theorem 4.5. Lemma 4.3. — With all the notation and construction above, for every ξ ∈ X_sing(K) there exists at least one Z ∈ Z_* such that ξ ∈ Z^(a)(K).
Theorem 4.5 asserts that the corresponding inequality is verified for all reduced hypersurfaces X of P^n_K of degree δ whose singular locus has dimension s. In this inequality, S(X; D, B) is defined in (3.1), N(X; D, B) is defined in (3.3), and Z_t is defined in Definition 2.4, following the construction in §4.2.
If Z_t = ∅ for some 0 ≤ t ≤ s, we define the corresponding maximum to be zero. Proof. — Suppose that a family of intersection trees {T_C}, whose roots are the elements of C(X · X_1 · . . . · X_{n−s−1}), has already been constructed via the procedure introduced in §4.2. First, we have an initial inequality, since for every ξ ∈ X_reg we always have µ_ξ(X) = 1. By Lemma 4.3, for each ξ ∈ X_sing(K) we can find a Z ∈ Z_* such that ξ ∈ Z^(a)(K). By Proposition 4.1, for every Z ∈ Z_* the corresponding inequality holds for all i = 1, . . . , n − s − 1, and so we obtain the inequality used in the next step.
By Proposition 2.6 and the inequality (4.3), we obtain the inequality (4.4) for each t = 0, . . . , s, since all the labels in C′_* are of degree at most δ − 1.
Combining the inequalities (4.2) and (4.4), we obtain a bound for the sum over t = 0, . . . , s, over Z ∈ Z_t and over ξ ∈ S(Z^(a); D, B). By the inequalities (4.1), (4.2) and (4.5), we prove the result. The resulting estimate is verified for all functions f satisfying the above conditions, where D ∈ N+ and B ≥ 1.
If we do not care about the constant depending on the above function f(x), then for this kind of multiplicity-counting problem it is enough to consider only the counting function f(x) = x(x − 1)^{n−s−1}, which is the one treated in Theorem 4.5.
Now consider another increasing counting function g : N+ → N which is asymptotic to a polynomial of degree smaller than n − s − 1, and no longer suppose that g(1) = 0. Then the sum of g(µ_ξ(X)) over ξ ∈ S(X; D, B) can be decomposed into the term g(1)N(X; D, B) and the sum of the differences g(µ_ξ(X)) − g(1).
We treat the sum of g(µ_ξ(X)) − g(1) over ξ ∈ S(X; D, B) by the above discussion, and the term g(1)N(X; D, B) as the classical problem of counting algebraic points, or rational points when D = 1. Since X_reg and X are birationally equivalent, this estimate is appropriate.
The resulting estimate holds uniformly for reduced hypersurfaces X of P^n_Q of degree δ whose singular locus is of dimension s, where S(X; B) is defined in (3.2).
Proof. — By the arguments of Theorem 3.3, we obtain an estimate that holds uniformly for Z ∈ Z_t, where t = 0, . . . , s, following the construction in §4.2.
Combining the above inequality with the estimate in Theorem 4.5 and the facts that dim(Z) < n and s < n, we obtain the result.
Example. — Let X′ ↪ P^2_Q be a reduced plane curve of degree δ, defined by the homogeneous equation f(T_0, T_1, T_2) = 0, and suppose that X′ has a Q-rational point of multiplicity δ. We consider f as a homogeneous polynomial of degree δ in Q[T_0, . . . , T_n] for an integer n ≥ 3; then f defines a hypersurface in P^n_Q, denoted by X. Without loss of generality, we suppose that [1 : 0 : 0] is the projective coordinate of this singular point of X′. Then all singular points of X are of multiplicity δ, where X_sing is considered as a reduced closed subscheme of P^n_Q, and by the equality (3.6) we obtain an asymptotic estimate for each hypersurface X satisfying the above conditions. From this example, the order of δ in Corollary 4.7 is optimal when dim(X_sing) = n − 2 and n ≥ 3. More generally, if X_sing contains a linear locus of multiplicity δ in X, we obtain the maximal order of δ and of max{B, δ − 1} in the estimate of Corollary 4.7 when B ≥ δ − 1.
Let K be a number field, and let X ↪ P^n_K be a fixed hypersurface of degree δ. To attack this kind of multiplicity-counting problem by applying Theorem 4.5, the key point is the uniform estimate of the terms appearing there. Consider the hypersurface Z defined by F(X, Y, T_0, . . . , T_n) = Y^δ + X f(T_0, T_1, . . . , T_n), where f(T_0, . . . , T_n) is an irreducible homogeneous polynomial of degree δ − 1 which defines a smooth hypersurface in P^n_K, denoted by Z′. The polynomial F(X, Y, T_0, . . . , T_n) is irreducible, and by [18, Exercise 2.4.1] the hypersurface Z is integral. By [3, Theorem 1, Corollary], for all integers δ ≥ 3 and n ≥ 2 and for any ε > 0, the estimate N(Z′; B) ≪_{n,δ,ε} B^{n−1+ε}, B ≥ 1, holds uniformly for every smooth hypersurface Z′.
We follow the construction in §4.2, where the proper intersection of Z, V(∂F/∂X) and V(∂F/∂Y) generates the only root of the intersection tree, and this root has no descendant. Applying Theorem 4.5 directly to this case, we obtain an inequality that holds uniformly for all hypersurfaces defined by the method in (4.6), (4.7) and for all ε > 0. In this case, the above estimate gives a better dependence on B than the one given in Corollary 4.7. However, in this estimate we have no description of the order of δ, since we cannot control the order of δ in the above estimate with our current knowledge.
Similarly to [17, Conjecture 5.13], we propose the following conjecture: the estimate Σ_{ξ∈S(X;B)} µ_ξ(X)(µ_ξ(X) − 1)^{d−s} ≪_{n,K} δ^{d−s+1} B^{s+1} holds uniformly for all reduced pure dimensional closed subschemes X of P^n_K of dimension d and degree δ whose singular locus is of dimension s, where S(X; B) is defined in (3.2).
|
2017-12-12T15:11:33.000Z
|
2017-07-22T00:00:00.000
|
{
"year": 2017,
"sha1": "566d3e99e16039c0ae94d9349c91cb6f33c5fee2",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1707.07183",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "566d3e99e16039c0ae94d9349c91cb6f33c5fee2",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
}
|
123227339
|
pes2o/s2orc
|
v3-fos-license
|
Fault detection in non-linear systems based on GP-EKF and GP-UKF algorithms
In this paper, two new fault detection methods are proposed for non-linear systems. The proposed methods are based on combining an extended Kalman filter (EKF) and an unscented Kalman filter (UKF) with Gaussian processes (GPs). One of the major advantages of these algorithms is that they do not need the system's model, while still providing accurate and fast fault detection. In order to show the promising performance of the proposed algorithms, they are applied to an aeroplane tracking system with highly non-linear dynamics. The superiority of the GP-UKF over the GP-EKF in fault detection is also shown based on the simulation results.
Introduction
Advances in science and technology have resulted in larger and more complex systems in which fault occurrence is inevitable. Since proper and faultless operation of these systems is the main concern of industrial system designers, a fault detection and diagnosis unit is an important part of these systems. Nowadays, detection of faults in the shortest time and with the least cost is vital in industrial systems. After detecting a fault, a proper control operation can be done for fault diagnosis.
The methods of fault detection in dynamic systems can be divided into two general categories. First, methods in which a system model is not required and the process of fault detection is carried out just using the measured data. These methods need a large data set for proper operation, which is their main weak point. Statistical and expert methods belong to this category (Isermann, 2006;Safarinejadian, Ghane, & Monirvaghefi, 2013). Second, methods in which a mathematical model of the system is required for fault detection (Isermann, 2006). In these methods, the output of the system is compared with the output of a reference faultless system and the resulting residual signal is used for fault detection. Since estimation algorithms are usually used in these methods to estimate the states of the system, having a proper model of the system and its states is required for fault detection.
Various methods of state estimation have been proposed for dynamic systems up to now. In 1960, the Kalman filter (KF) was introduced as an optimal solution for state estimation in linear systems (Kalman, 1960; Safarinejadian & Mozaffari, 2013). Due to the high computational demand of the KF in some situations, the information filter (IF) was proposed afterwards (Anderson & Moore, 1979). The main difference between the KF and the IF is that the KF propagates the state vector and its corresponding covariance matrix using the system dynamics, whereas the IF propagates the information vector, which is based on the inverse of the covariance matrix. The computational demand of the IF is less than that of the KF when the number of outputs is larger than the order of the system (Simon, 2006). However, these algorithms are limited to linear systems, although most practical systems are non-linear in nature. Since the KF cannot estimate the states of a non-linear system, suboptimal estimation approaches should be used instead. The extended Kalman filter (EKF) is a well-known suboptimal method in which the process and measurement equations are linearized (Ljung, 1979). In this filter, the probability density function (pdf) of the states is assumed to be Gaussian. Linearization errors of the EKF may cause the estimation accuracy to become low or may even result in filter divergence. Furthermore, due to the Jacobian matrix computation required for the linearization process, the computational load of the EKF is relatively high. The extended IF (EIF) was proposed to solve this problem of the EKF (Mutambara, 1998). However, large linearization errors of the EKF and EIF, especially in highly non-linear systems, have limited these filters. To solve these problems, another suboptimal approach named the unscented KF (UKF) was proposed, in which a collection of sample points (sigma points) is used for mean and covariance propagation (Julier, Uhlmann, & Durrant-Whyte, 2000). Using these sigma points decreases the computational volume (Julier & Uhlmann, 2004).
The aforementioned filters use a parametric model of the system for state estimation. Even though the performance of these filters is acceptable, reaching an accurate parametric model of the system is difficult. Furthermore, since considering all aspects of modelling is difficult in practice, a simple parsimonious model is usually used. Therefore, the prediction capability of these models will also be limited. In order to overcome the aforementioned limitations of the parametric state estimation methods, Gaussian process (GP) regression models were proposed for process and measurement equations learning (Rasmussen, 2006). GP was used for dynamic system identification and prediction in Rasmussen (2006). Recently, by combining parametric and non-parametric methods, new algorithms are proposed for state estimation in non-linear systems (Deisenroth, Huber, & Hanebeck, 2009;Ko & Fox, 2011;Reece & Roberts, 2010).
GPs have also been used for fault detection in dynamic systems (Juricic & Kocijan, 2006;Juricic, Ettler, & Kocijan, 2011;Serradilla, Shi, & Morris, 2011). Due to the aforementioned problems in the modelling of non-linear systems and the need for accurate state estimates for fault detection in non-linear systems, a residue is required that is very sensitive to fault presence and instantaneously detects the fault occurrence.
In this paper, two new methods of fault detection will be proposed for non-linear systems. The proposed methods are based on combining EKF and UKF with GPs. The main advantage of combining the GPs and the existing filters such as EKF and UKF is that EKF and UKF need the exact system's model, while the proposed approaches do not need any model of the system. In other words, the existing filters such as EKF and UKF are model-based methods, whereas the two proposed methods (GP-EKF and GP-UKF) are non-model-based approaches that do not require an accurate model of the system while their generated residue results in an accurate fault detection performance.
It should be noticed that the algorithms proposed in this paper provide a new non-parametric approach for fault detection. Artificial neural network (ANN) can be considered as the main class of traditional non-parametric fault detection methods in which an input-output data set is the only information required about the dynamic system (Isermann, 2006).
ANN is a black-box system identification method in which the unknown system is approximated by an ANN. In this method, the output of the trained ANN is only a deterministic value corresponding to its input data (Isermann, 2006). However, using the proposed GP-EKF and GP-UKF, the model approximated for the unknown function provides a Gaussian pdf as an output in which the mean and covariance are determined by the GP regression method.
Furthermore, using ANN for fault detection is rather difficult in practice. Selecting the structure of the ANN, the number of its neurons, the activation function, the learning rate, etc., make using ANN difficult in practice. Moreover, there is not any systematic method for selecting these items. However, using the GP regression method, a specified Gaussian distribution is chosen for the unknown function in which the mean and covariance are determined by the trained data set (Rasmussen, 2006).
The objectives of this paper include (1) proposing two novel non-parametric methods in fault detection that provide accurate residue signal and (2) applying these methods for fault detection in an aircraft tracking system as a strictly non-linear system.
The rest of the paper is organized as follows. In Section 2, problem formulation is proposed. EKF, UKF and GPs are introduced briefly in this section. In Section 3, the proposed algorithms are given and how they can be used in fault detection is discussed. Section 4 is devoted to the simulation results. Finally, Section 5 concludes the paper.
Problem formulation
In this section, GP regression, EKF and UKF are introduced briefly.
Gaussian regression model
GPs are powerful non-parametric tools for learning unknown functions from a training data set. This data set includes input-output pairs, and the GP gives a mapping between inputs and outputs. The main characteristic of GPs is their flexibility in modelling, since they can model the system's behaviour in the presence of uncertainties. Furthermore, GPs can estimate the input noise and smooth the parameters using the training data set (Boyle, 2007; Rasmussen, 2006; Shi & Choi, 2011).
Definition: A GP is a collection of random variables, any finite number of which has a joint Gaussian distribution (Rasmussen, 2006).
In the following, the GP will be described briefly. Consider a training data set D_f = ⟨X, y⟩ in which X = {x_1, x_2, . . . , x_n} contains d-dimensional inputs x_i and y = {y_1, y_2, . . . , y_n} contains the observations or targets y_i. It is assumed that the training data set D_f is obtained from y_i = f(x_i) + ε, where ε is a zero-mean Gaussian noise with variance σ_n^2, that is, ε ∼ N(0, σ_n^2). GP regression tries to approximate the function f using the training data set D_f. Just as a random variable with a Gaussian distribution can be characterized solely by its mean and covariance, it is sufficient to find a proper mean and covariance function for modelling an unknown function with a GP.
The mean function of the GP at a test input x_* is given as GP_µ(x_*, D_f) = k_*^T K^{−1} y (2), and the variance function of the GP at the same input can be defined as GP_Σ(x_*, D_f) = k(x_*, x_*) − k_*^T K^{−1} k_* (3), where K is an n × n matrix of kernel function values between the training inputs. The type of kernel function is a designer's choice, but it usually has a squared exponential form with an added noise term, k(x, x′) = σ_f^2 exp(−(1/2)(x − x′)^T W(x − x′)) + σ_n^2 δ(x, x′) (4), where σ_f^2 is the signal variance that controls the prediction uncertainty in regions where the density of the training data is low, W is a diagonal matrix containing the length scales of the process such that W = diag[1/l_1^2, 1/l_2^2, . . . , 1/l_d^2], δ denotes the Kronecker delta function, and σ_n^2 controls the process noise. In Equation (3), k_* is the vector of kernel values between x_* and the training inputs X, given by k_* = [k(x_*, x_1), . . . , k(x_*, x_n)]^T (5).
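The following is a minimal numpy sketch of the prediction step described above; it is an assumed illustration (not the paper's implementation), and the toy function, data set and helper names are chosen only for demonstration.

```python
# Minimal numpy sketch (assumed, not the paper's code) of GP prediction with the
# squared exponential kernel of Eq. (4) and the posterior mean/variance of Eqs. (2)-(3).
import numpy as np

def sq_exp_kernel(A, B, lengthscales, sf):
    W = np.diag(1.0 / lengthscales**2)
    d2 = np.array([[(a - b) @ W @ (a - b) for b in B] for a in A])
    return sf**2 * np.exp(-0.5 * d2)

def gp_predict(X, y, x_star, lengthscales, sf, sn):
    """Posterior mean and variance of the latent function at x_star."""
    K = sq_exp_kernel(X, X, lengthscales, sf) + sn**2 * np.eye(len(X))  # noise on the diagonal
    k_star = sq_exp_kernel(X, x_star[None, :], lengthscales, sf).ravel()
    alpha = np.linalg.solve(K, y)
    mean = k_star @ alpha
    var = sf**2 - k_star @ np.linalg.solve(K, k_star)   # add sn**2 for a noisy observation
    return mean, var

# Tiny example: learn y = sin(x1) + x2 from 30 noisy samples.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(30, 2))
y = np.sin(X[:, 0]) + X[:, 1] + 0.05 * rng.standard_normal(30)
m, v = gp_predict(X, y, np.array([0.5, -1.0]), lengthscales=np.ones(2), sf=1.0, sn=0.1)
print(f"predicted mean {m:.3f}, variance {v:.3f}")
```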
Learning of the hyperparameters
In order to obtain a GP model that optimally approximates the unknown function f, the parameters of the GP, i.e. the parameters of the kernel function, should be chosen optimally. For this purpose, a hyperparameter vector θ = [W, σ_f, σ_n] is considered, which is determined by maximizing the log marginal likelihood of the training outputs given the inputs, θ_max = argmax_θ {log p(y | X, θ)} (6) (Quinonero-Candela, Rasmussen, & Williams, 2007). The logarithmic term of Equation (6) can be written as log p(y | X, θ) = −(1/2) y^T K^{−1} y − (1/2) log |K| − (n/2) log(2π) (7). This optimization problem can be solved using different methods, such as conjugate gradients, and the partial derivatives of (7) with respect to the hyperparameters are required for the optimization (8).
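A hedged sketch of this hyperparameter fit is shown below; the paper mentions conjugate gradients, whereas this assumed example simply hands the negative log marginal likelihood of Eq. (7) to scipy's L-BFGS-B optimiser with numerical gradients.

```python
# Hedged sketch of hyperparameter fitting by maximising the log marginal likelihood of
# Eq. (7); the paper mentions conjugate gradients, here scipy's L-BFGS-B is used instead.
import numpy as np
from scipy.optimize import minimize

def neg_log_marginal_likelihood(log_theta, X, y):
    # theta = (lengthscales, sigma_f, sigma_n); optimised in log space to stay positive.
    d = X.shape[1]
    ell, sf, sn = np.exp(log_theta[:d]), np.exp(log_theta[d]), np.exp(log_theta[d + 1])
    W = np.diag(1.0 / ell**2)
    diff = X[:, None, :] - X[None, :, :]
    K = sf**2 * np.exp(-0.5 * np.einsum('ijk,kl,ijl->ij', diff, W, diff)) + sn**2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.sum(np.log(np.diag(L))) + 0.5 * len(y) * np.log(2 * np.pi)

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(30, 2))
y = np.sin(X[:, 0]) + X[:, 1] + 0.05 * rng.standard_normal(30)
res = minimize(neg_log_marginal_likelihood, np.zeros(X.shape[1] + 2), args=(X, y), method='L-BFGS-B')
print("fitted hyperparameters (l1, l2, sigma_f, sigma_n):", np.exp(res.x))
```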
Extended Kalman filter and unscented Kalman filter
In the procedure of non-linear system state estimation, there are non-linear integrals that have no closed-form solution. Therefore, researchers have tried to use suboptimal methods for this problem (Anderson & Moore, 1979;Ljung, 1979). Linearizing the non-linear systems and numerical integration are two suboptimal methods that have been considered for this problem and have resulted in the proposal of EKF and UKF, respectively. These two filters will be discussed in Sections 2.2.1 and 2.2.2 in brief.
Extended Kalman filter
In the case that the state and measurement equations are non-linear, an approximate solution is to linearize these equations using a Taylor series expansion around the mean of the Gaussian random variable (GRV); the standard KF is then applied to the linearized model. The resulting filter is called the extended KF (EKF). The EKF is not an optimal filter and its performance depends on the linearization accuracy; the process and measurement noises are assumed to be Gaussian and the accuracy of this filter is of first order. In the EKF, the process and measurement equations are of the general form x_k = f(x_{k−1}, u_{k−1}) + v_{k−1} (10) and z_k = h(x_k) + w_k (11), where f and h are non-linear functions, x_k is the state vector, and v_{k−1} and w_k are the process and measurement noises, respectively. Partial differentiation is used for the linearization, giving the Jacobians F_k = ∂f/∂x and H_k = ∂h/∂x evaluated at the current estimate. The EKF equations then consist of the standard prediction and update steps, where the Kalman gain is defined as K_k = P_k^− H_k^T (H_k P_k^− H_k^T + R_k)^{−1}.
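A generic sketch of one EKF predict/update step for the model form of Eqs. (10)-(11) is given below; the finite-difference Jacobian and the toy two-state model are assumptions chosen for illustration, not the paper's aircraft model.

```python
# Generic numpy sketch of one EKF predict/update step for the model form of
# Eqs. (10)-(11); the finite-difference Jacobians and the toy model are assumptions.
import numpy as np

def jacobian(fun, x, eps=1e-6):
    fx = np.atleast_1d(fun(x))
    J = np.zeros((len(fx), len(x)))
    for i in range(len(x)):
        dx = np.zeros(len(x)); dx[i] = eps
        J[:, i] = (np.atleast_1d(fun(x + dx)) - fx) / eps
    return J

def ekf_step(x, P, z, f, h, Q, R):
    F = jacobian(f, x)                 # linearisation of the process model
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    H = jacobian(h, x_pred)            # linearisation of the measurement model
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x_new = x_pred + K @ (np.atleast_1d(z) - np.atleast_1d(h(x_pred)))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy two-state model with one non-linear measurement.
f = lambda x: np.array([x[0] + 0.1 * x[1], 0.95 * x[1]])
h = lambda x: np.array([np.sqrt(x[0]**2 + 1.0)])
x, P = np.zeros(2), np.eye(2)
x, P = ekf_step(x, P, z=1.2, f=f, h=h, Q=0.01 * np.eye(2), R=np.array([[0.1]]))
print(x)
```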
Unscented Kalman filter
One of the suboptimal methods in state estimation is based on numerical integration. In this method, a minimal set of weighted sigma points is selected and the integral is approximated using these sigma points. The sigma points are generated based on the a priori mean and covariance of the random variables; as these points pass through the non-linear system function, the a posteriori mean and covariance of the random variables are obtained. In this filter, the positions of the sigma points and their weights are calculated so that the selected sigma points represent the main statistical characteristics of the a priori random variable (Julier & Uhlmann, 2004). In order to reach this goal, the UKF utilizes the unscented transform (Julier et al., 2000). The number of sigma points is r = 2n_x + 1, where n_x represents the number of state variables (the dimension of the state vector) (Van Der Merwe, 2004; Julier & Uhlmann, 2004). The UKF algorithm is given briefly in Figure 2.
Considering the non-linear system represented by Equations (10) and (11), the sigma points are χ_0 = x̄ and χ_i = x̄ ± (√((n_x + λ)P_x))_i, with mean weights W_0 = λ/(n_x + λ) and W_i = 1/(2(n_x + λ)) for i = 1, . . . , 2n_x, and covariance weight W_0^(c) = λ/(n_x + λ) + (1 − α^2 + β), where λ = α^2(n_x + k) − n_x is the scaling parameter. α has a small positive value (usually 10^{−3} ≤ α ≤ 1) and determines the spread of the sigma points around x̄; k is the second calibration parameter and usually has a value of 0 or 3 − n_x; β is a scalar parameter used to incorporate any extra prior knowledge of the distribution of x (for a normal distribution, β = 2 is optimal; Wan, Van Der Merwe, & Nelson, 1999). (√((n_x + λ)P_x))_i is the ith weighted column of the square root of the covariance matrix P_x.
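The sketch below illustrates sigma-point generation and the unscented transform with the standard formulas; the helper names and the toy example are assumptions introduced only for demonstration.

```python
# Sketch of sigma-point generation and the unscented transform used by the UKF;
# helper names and the toy example are assumptions, the formulas are the standard ones.
import numpy as np

def sigma_points(x_mean, P, alpha=1e-3, beta=2.0, kappa=0.0):
    n = len(x_mean)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)                    # matrix square root
    pts = np.vstack([x_mean, x_mean + S.T, x_mean - S.T])    # 2n + 1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    return pts, wm, wc

def unscented_transform(fun, pts, wm, wc):
    Y = np.array([fun(p) for p in pts])     # propagate every sigma point
    mean = wm @ Y
    diff = Y - mean
    cov = (wc[:, None] * diff).T @ diff
    return mean, cov

x_mean, P = np.array([1.0, 0.5]), np.diag([0.2, 0.1])
pts, wm, wc = sigma_points(x_mean, P, alpha=1.0, kappa=1.0)
m, C = unscented_transform(lambda x: np.array([np.sin(x[0]), x[0] * x[1]]), pts, wm, wc)
print(m, C)
```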
The proposed algorithms for fault detection
As discussed earlier, the main disadvantage of the Bayesian filters is that they require the state and measurement equations to be known. To solve this problem, two algorithms will be proposed by combining EKF and UKF with GPs. Afterwards, these algorithms will be used for fault detection.
GP incorporated with EKF
A new GP-EKF algorithm is proposed here for fault detection in non-linear systems. In this method, the process and measurement equations f and h are approximated separately using GPs, and the GP models obtained for these functions are then incorporated into the EKF algorithm; eventually, a proper estimate of the system states is obtained. The covariance matrices of the process and measurement noises, Q and R, are obtained from the training data set. The process model maps the state and input variables (x_{k−1}, u_k) into a state transition Δx_k = x_{k+1} − x_k, and the measurement model maps the state x_k into a measurement z_k. In order to obtain these two models, we need two separate training data sets, D_f and D_h, where X is a matrix of states, ΔX = [Δx_1, Δx_2, . . . , Δx_n] is the matrix containing the state transitions, and Z is a matrix of the observations. The GP approximations of the f and h functions are denoted by GP_f and GP_h, respectively. In Bayesian filters, the aim of the prediction step is to determine the pdf of x_k, that is, p(x_k | z_{1:k−1}), which is done by means of p(x_k | z_{1:k−1}) = ∫ p(x_k | x_{k−1}, u_{k−1}) p(x_{k−1} | z_{1:k−1}) dx_{k−1} (26), where p(x_{k−1} | z_{1:k−1}) is the normal pdf computed at the previous iteration of the filter, and p(x_k | x_{k−1}, u_{k−1}) can be obtained using Equation (10). It is assumed in this paper that f_{k−1} can be approximated by a GP whose mean and covariance are represented by Equations (2) and (3), respectively (27). Thus p(x_k | z_{1:k−1}), computed by Equation (26), can be approximated by a normal pdf (28). In the update step of the Bayesian filter, the aim is to determine p(x_k | z_{1:k}), which can be computed using the Bayes rule as p(x_k | z_{1:k}) = p(z_k | x_k) p(x_k | z_{1:k−1}) / p(z_k | z_{1:k−1}) (29), where p(z_k | z_{1:k−1}) is a normalizing constant. The prior distribution p(x_k | z_{1:k−1}) is computed at the previous prediction step.
Using GP regression, the function h_k can also be approximated by a normal pdf with mean and covariance represented by Equations (2) and (3), respectively; the resulting distribution of h_k is described in Equations (30) and (31). Generally, because GPs are used, the integral of Equation (30) approximately produces a Gaussian pdf, and therefore the Bayes rule of Equation (29) also produces a Gaussian distribution.
Finally, merging Equations (28) and (31) results in Equation (32), where GP^µ_f([x_{k−1}, u_{k−1}], D_f) is the GP mean function of f that maps [x_{k−1}, u_{k−1}] to x_k using the training data set D_f, and GP^µ_h(x_k, D_h) is the GP mean function of h that maps x_k to z_k using the training data set D_h. Furthermore, GP^Σ_f([x_{k−1}, u_{k−1}], D_f) and GP^Σ_h(x_k, D_h) are the GP covariance functions of f and h, respectively. This GP regression model is substituted for the process and measurement functions in the EKF; the resulting GP-EKF is summarized in Figure 1.
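The following is a minimal toy sketch of this idea (not the authors' implementation): the learned GP means stand in for f and h, their predictive variances inflate the process and measurement noise, and the Jacobians are taken numerically. The one-dimensional system, the training data and all names below are assumptions for illustration only.

```python
# Minimal toy sketch (not the authors' implementation) of the GP-EKF idea: learned GP
# means stand in for f and h, their predictive variances inflate the noise terms, and
# the derivatives are taken numerically. The 1-D system and training data are assumptions.
import numpy as np

def gp_fit(X, y, ell=1.0, sf=1.0, sn=0.1):
    """Return a predictor (mean, variance) for a scalar GP with squared-exponential kernel."""
    K = sf**2 * np.exp(-0.5 * ((X[:, None] - X[None, :]) / ell) ** 2) + sn**2 * np.eye(len(X))
    Kinv = np.linalg.inv(K)
    def predict(x):
        ks = sf**2 * np.exp(-0.5 * ((x - X) / ell) ** 2)
        return ks @ Kinv @ y, sf**2 + sn**2 - ks @ Kinv @ ks
    return predict

def num_deriv(fun, x, eps=1e-5):
    return (fun(x + eps) - fun(x - eps)) / (2 * eps)

# Toy system: x_k = sin(x_{k-1}) + noise, z_k = x_k**2 + noise, learned from data only.
rng = np.random.default_rng(0)
X_train = rng.uniform(-2, 2, 40)
gp_f = gp_fit(X_train, np.sin(X_train) + 0.05 * rng.standard_normal(40))
gp_h = gp_fit(X_train, X_train**2 + 0.05 * rng.standard_normal(40))

x_est, P = 0.5, 1.0
for z in (0.3, 0.5, 0.7):                          # a few hypothetical measurements
    f_mean, f_var = gp_f(x_est)                    # predict: GP mean acts as f,
    F = num_deriv(lambda s: gp_f(s)[0], x_est)     # GP variance acts as process noise
    x_pred, P_pred = f_mean, F * P * F + f_var
    h_mean, h_var = gp_h(x_pred)                   # update: GP mean acts as h,
    H = num_deriv(lambda s: gp_h(s)[0], x_pred)    # GP variance acts as measurement noise
    K_gain = P_pred * H / (H * P_pred * H + h_var)
    x_est = x_pred + K_gain * (z - h_mean)
    P = (1 - K_gain * H) * P_pred
    print(f"z={z:.2f}  x_est={x_est:.3f}  P={P:.3f}")
```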
Gaussian process incorporated with UKF
In this subsection, a new GP-UKF algorithm is proposed in which, instead of using the exact process and measurement models, their equivalent GP regression is used. Therefore, combining Equation (32) with the UKF is the basis for the GP-UKF algorithm, as shown in Figure 2.
The flowchart of GP-EKF and GP-UKF algorithms in fault detection is shown in Figure 3.
Fault detection using GP-EKF and GP-UKF algorithms
Assume that the occurrence of a fault in the system's equations can be modelled by an additive term L_k f_k (33), where f_k denotes the fault occurring at time k and L_k is a known fault matrix; in other words, an additive fault in the sensors was considered. One of the well-known methods in dynamic system fault detection is to compare the system's output with that of a faultless counterpart; the resulting (residue) signal is then used for fault detection. In general, the procedure for fault detection can be expressed in two steps: (1) residue generation and (2) residue evaluation.
Residue generation
The first step in fault detection is to generate a signal that is sensitive to the presence of a fault and indicates its occurrence as soon as possible. A simple residue signal can be defined as r_k = z_k − ẑ_k (34), where r_k is the generated residue, z_k is the system's output in the faultless condition and ẑ_k is the estimated output.
Residue evaluation
The second step in fault detection is to define proper functions for evaluating the generated residue so that a fault occurring in the system can be detected correctly. In the ideal case, if no fault occurs the residue will be zero, and otherwise it will be non-zero. In this case, residue evaluation can be defined as r_k = 0: fault free, r_k ≠ 0: faulty (35).
But in many systems the residue might be non-zero even though no fault has occurred; therefore, the evaluation function of Equation (35) will not be proper. For this purpose, a statistical evaluation function can be defined as follows (36): faulty if r_k < r̄_k − λσ_{r_k}; fault free if r̄_k − λσ_{r_k} ≤ r_k ≤ r̄_k + λσ_{r_k}; faulty if r_k > r̄_k + λσ_{r_k}. The parameter λ should be chosen based on a trade-off between maximizing the probability of fault detection and minimizing the probability of false alarms. Assuming that r is a GRV with mean r̄_k and covariance σ_{r_k}, choosing λ = 1 gives p(r̄_k − σ_{r_k} ≤ r ≤ r̄_k + σ_{r_k}) = 0.683 from probability theory; in other words, if r̄_k − σ_{r_k} ≤ r ≤ r̄_k + σ_{r_k}, one is 68.3% sure that no fault has occurred. Furthermore, for λ = 2 and λ = 3, the corresponding probabilities are p(r̄_k − 2σ_{r_k} ≤ r ≤ r̄_k + 2σ_{r_k}) = 95.8% and p(r̄_k − 3σ_{r_k} ≤ r ≤ r̄_k + 3σ_{r_k}) = 98.4%.
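A short sketch of this statistical threshold test (the residue values below are made up) is:

```python
# Sketch of the statistical residue test of Eq. (36): a sample is flagged as faulty
# whenever the residue leaves the band mean +/- lambda * std.
import numpy as np

def evaluate_residue(r, r_mean, r_std, lam=3.0):
    lower, upper = r_mean - lam * r_std, r_mean + lam * r_std
    return (r < lower) | (r > upper)          # True -> faulty sample

residue = np.array([0.01, -0.02, 0.00, 0.35, 0.40, 0.02])
print(evaluate_residue(residue, r_mean=0.0, r_std=0.05))   # the two large samples are flagged
```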
It should be noticed that the computational burden of the proposed methods is higher than that of the EKF and UKF because of the GP regression used in GP-EKF and GP-UKF; in other words, since the system's model is replaced with GPs, the additional computational complexity comes from evaluating the GPs. In addition, due to the Jacobian matrix computation in the GP-EKF algorithm, the computational burden of the GP-EKF is higher than that of the GP-UKF (Ko & Fox, 2009).
Simulation results
To show the performance of the two proposed algorithms in fault detection, they were applied to an air-traffic tracking and control system. The aeroplane movement dynamics is defined by a non-linear equation in which T denotes the sampling period and x is the state vector containing the position and velocity in the x- and y-axis directions as well as the aeroplane's angular speed (x_5 = ω).
The radar system used to provide the measurements can also be modelled by a non-linear measurement equation. In these simulations, 50 random points were chosen to create the training data set, and the Monte Carlo method (with 50 runs) was used in order to provide reliable results.
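The paper's exact dynamics and radar equations are not reproduced in this excerpt; purely as an assumed illustration, the sketch below uses a standard constant-turn-rate model with state [px, vx, py, vy, ω] and a range/bearing radar of the kind typically used in such aircraft-tracking studies.

```python
# Assumed illustration only: a standard constant-turn-rate model and a range/bearing
# radar; these are not necessarily the exact equations used in the paper.
import numpy as np

def turn_model(x, T=1.0):
    px, vx, py, vy, w = x
    if abs(w) < 1e-9:                        # straight-line motion when omega ~ 0
        return np.array([px + T * vx, vx, py + T * vy, vy, w])
    s, c = np.sin(w * T), np.cos(w * T)
    return np.array([px + s / w * vx - (1 - c) / w * vy,
                     c * vx - s * vy,
                     py + (1 - c) / w * vx + s / w * vy,
                     s * vx + c * vy,
                     w])

def radar(x):
    px, py = x[0], x[2]
    return np.array([np.hypot(px, py), np.arctan2(py, px)])   # range and bearing

x = np.array([1000.0, 30.0, 500.0, 0.0, np.deg2rad(3.0)])
for _ in range(3):
    x = turn_model(x)
    print(radar(x))
```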
To illustrate the performance of the GP-UKF, the mean and the variance of the first estimated state have been shown in Figures 4 and 5, respectively. Since other estimated states have similar results, only the first state has been shown. In Figure 4, a comparison has been made between the mean of the first estimated state and the mean of the actual state. Figure 5 shows the mean and uncertainty of the first state at each sample time.
The occurrence of three types of faults (abrupt, incipient and intermittent) was considered in the system's output; the faults are assumed to be additive. In these simulations, L_k = [1 1] and f_k was considered to be of three different types. Here, the GP-EKF and GP-UKF algorithms are used for fault detection in the air-traffic tracking system. As can be seen in Figures 6-8, the fault occurrence is clear in the residue signal. The ideal residue evaluation function of Equation (35) is used here; as shown in these figures, as soon as the fault occurs it affects the residual signal. However, the main problem of using the EKF is that it has a high computational demand and can be used only in systems with a small number of state variables. Figures 6-8 show the residue signals generated by the GP-UKF and GP-EKF algorithms for the three fault types: abrupt, incipient and intermittent. As can be seen, the residue signals have a non-zero value when a fault occurs and are zero in the faultless moments. The residue signal of the GP-UKF algorithm has a better quality than that of the GP-EKF algorithm, since the faultless moments are clearly distinguishable from the faulty moments. In general, the value of the GP-UKF residue signal is larger than its GP-EKF counterpart when a fault occurs, which is a major criterion for fault detection. Furthermore, the computational complexity of the GP-UKF is less than that of the GP-EKF, which makes it the proper choice for use in high-dimensional systems. The major advantage of both algorithms is that they do not require any model of the system, and the fault can be detected using only the input-output data set, while the Bayesian methods used so far for fault detection require such a model.
In another simulation, the evaluation function described in Equation (36) is used for residue evaluation in Figures 9-14, with λ = 3. Comparing Figures 9-11 with Figures 12-14 shows the superiority of the GP-UKF algorithm. In these figures, the probability of fault occurrence equals 98%, when the residual signal is out of the given interval.
Conclusion
In this paper, two new methods of fault detection (GP-EKF and GP-UKF) were proposed for non-linear systems. Each of these methods was considered separately and their performance was studied using a practical, highly non-linear system. The main advantage of the proposed methods in comparison with previous methods is that they do not need an accurate model of the system while still generating an accurate residue signal. Finally, a comparison was made between the two methods, and the superiority of the GP-UKF was shown, since it generated more accurate residue signals and also had a lower computational cost.
|
2019-04-20T13:12:58.525Z
|
2014-08-26T00:00:00.000
|
{
"year": 2014,
"sha1": "085f0829e0b500325bef3a826d165dfb4f60d076",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/21642583.2014.956843?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "5ad01fcd7de0fe9ea49d5271a58a20c6c586b52f",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
}
|
253882335
|
pes2o/s2orc
|
v3-fos-license
|
Albuminuria testing and nephrology care among insured US adults with chronic kidney disease: a missed opportunity
Background In chronic kidney disease (CKD), assessment of both estimated glomerular filtration rate (eGFR) and albuminuria are necessary for stratifying risk and determining the need for nephrology referral. The Kidney Disease: Improving Global Outcomes clinical practice guidelines for CKD recommend nephrology referral for eGFR < 30 ml/min/1.73m2 or for urinary albumin/creatinine ratio ≥ 300 mg/g. Methods Using a national claims database of US patients covered by commercial insurance or Medicare Advantage, we identified patients with CKD who were actively followed in primary care. We examined receipt of nephrology care within 1 year among these patients according to their stage of CKD, classified using eGFR and albuminuria categories. Multivariable logistic regression was used to examine odds of receiving nephrology care by CKD category, adjusting for age, sex, race/ethnicity, diabetes, heart failure, and coronary artery disease. Results Among 291,155 patients with CKD, 55% who met guideline-recommended referral criteria had seen a nephrologist. Receipt of guideline-recommended nephrology care was higher among those with eGFR < 30 (64%; 11,330/17738) compared with UACR ≥300 mg/g (51%; 8789/17290). 59% did not have albuminuria testing. Those patients without albuminuria testing had substantially lower adjusted odds of recommended nephrology care (aOR 0.47 [0.43, 0.52] for eGFR < 30 ml/min/1.73m2). Similar patterns were observed in analyses stratified by diabetes status. Conclusions Only half of patients meeting laboratory criteria for nephrology referral were seen by a nephrologist. Underutilization of albuminuria testing may be a barrier to identifying primary care patients at elevated kidney failure risk who may warrant nephrology referral. Supplementary Information The online version contains supplementary material available at 10.1186/s12875-022-01910-9.
Despite its critical role in risk stratification and management, albuminuria testing remains widely underutilized [11][12][13]. Importantly, the underutilization of albuminuria testing may hamper identification of highrisk persons with CKD who may benefit from nephrology care. Timely referral to nephrology care may allow for more aggressive management to prevent CKD progression and is associated with several clinical benefits, including improved vascular access planning, reduced hospitalizations, and greater likelihood of initiating home dialysis [14,15]. This study aimed to examine receipt of nephrologist care by eGFR and albuminuria categories in a large population of US adults with CKD actively followed in primary care, with a focus on the association between albuminuria testing and likelihood of receiving nephrology care.
Methods
We performed a cross-sectional analysis using the Optum Labs Data Warehouse, which includes de-identified claims and laboratory results from commercially insured and Medicare Advantage enrollees throughout the US. We assembled a study population of adults aged ≥ 18 years who had at least two primary care visits from January 1, 2015 to December 31, 2019 with laboratory evidence of CKD. CKD was defined by two outpatient eGFR values < 60 ml/min/1.73 m2 separated by ≥ 90 days or two outpatient UACR values ≥ 30 mg/g separated by ≥ 90 days [5]. We applied the 2021 CKD-Epidemiology Collaboration equation to calculate eGFR because it is now recommended by the joint task force of the American Society of Nephrology and the National Kidney Foundation, and, although not contemporary to the study period, its use establishes a baseline pattern of health care use for future comparison [16]. Because the urine protein/creatinine ratio (UPCR) is frequently obtained as an alternative to UACR, we estimated additional UACR results using a validated conversion from UPCR [17]. The date of the second qualifying eGFR or UACR defined the index date for each patient. We excluded patients who had previously received dialysis or kidney transplantation.
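For orientation, a hedged sketch of the race-free 2021 CKD-EPI creatinine equation as it is commonly published is given below; the coefficients are reproduced from general knowledge rather than from this paper and should be verified against reference [16] before any real use.

```python
# Hedged sketch of the race-free 2021 CKD-EPI creatinine equation as commonly published;
# coefficients are from general knowledge (not this paper) and should be verified
# against reference [16].
def egfr_ckd_epi_2021(scr_mg_dl: float, age_years: float, female: bool) -> float:
    kappa = 0.7 if female else 0.9
    alpha = -0.241 if female else -0.302
    ratio = scr_mg_dl / kappa
    egfr = 142.0 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.200 * 0.9938 ** age_years
    return egfr * 1.012 if female else egfr

# Two outpatient values < 60 ml/min/1.73 m^2 at least 90 days apart would meet the CKD definition.
print(round(egfr_ckd_epi_2021(1.4, 72, female=True), 1))
```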
We determined the proportion of patients receiving nephrology care, defined as having at least one outpatient nephrology encounter within 12 months following the index date, according to KDIGO-based CKD categories. We used multivariable logistic regression to examine associations between albuminuria category and nephrology care, stratified by eGFR category, adjusting for age, sex, race/ethnicity, diabetes, heart failure, and coronary artery disease.
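An illustrative sketch of the multivariable logistic model described above is shown below; the data are simulated, the variable names are assumptions, and race/ethnicity is omitted for brevity.

```python
# Illustrative sketch (synthetic data, statsmodels) of the multivariable logistic model
# described above; all values are simulated and race/ethnicity is omitted for brevity.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "nephrology_care": rng.integers(0, 2, n),
    "albuminuria_cat": rng.choice(["A1", "A2", "A3", "missing"], n),
    "age": rng.normal(72, 10, n),
    "female": rng.integers(0, 2, n),
    "diabetes": rng.integers(0, 2, n),
    "heart_failure": rng.integers(0, 2, n),
    "cad": rng.integers(0, 2, n),
})
model = smf.logit(
    "nephrology_care ~ C(albuminuria_cat, Treatment('A1')) + age + female + diabetes + heart_failure + cad",
    data=df,
).fit(disp=0)
print(np.exp(model.params))      # adjusted odds ratios relative to the A1 reference category
```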
Results
Our study population included 291,155 patients (mean age 72 ± 10 years; 58% female) with CKD. Table 1 describes characteristics of the study population.
When stratified by diabetes status, we found that 25% (n = 42,185/166,608) of patients without diabetes had albuminuria testing, compared with 63% (n = 77,984/124,547) among patients with diabetes. The proportion of patients receiving nephrology care was higher in more severe eGFR and albuminuria categories for both patients with and without diabetes (Fig. 2). Prevalence of guideline-recommended nephrology referral was 61% (7472/12,292) among patients without diabetes, compared with 51% (9825/19,398) among patients with diabetes.
With respect to CVD, the proportion of patients with available albuminuria testing was similar in patients without CVD (42%) and patients with CVD (40%). The proportion of patients receiving nephrology care by CVD status is shown in Fig. S1. Guideline-recommended nephrology care was 51% (9722/18,921) among patients without CVD, compared with 61% (7575/12,369) among patients with CVD.
In multivariable-adjusted models, more severe albuminuria was consistently associated with higher odds of nephrology care within each given category of eGFR (Fig. 3). Missing albuminuria was consistently associated with lower odds of nephrology care.
Discussion
In a national cohort of adults with CKD, only half of patients meeting guideline-recommended referral criteria based on eGFR and albuminuria were seen by a nephrologist. More severe albuminuria was associated with greater likelihood of receiving nephrology care. However, over half of patients were missing albuminuria measures; these patients were substantially less likely to receive nephrology care for any given eGFR category. Because this study was limited to a population with consistent access to care based on continuous enrollment in insurance with primary care visits, rates of recommended nephrology care may be even lower in other settings.
Our results showing low UACR testing among patients with laboratory evidence of CKD complement prior work by Alfego et al. finding widespread UACR underutilization among patients at risk for CKD, i.e., those with hypertension or diabetes [11]. In addition, Alfego et al. found low rates of CKD diagnosis, even among patients whose testing confirmed CKD in a high-risk KDIGO category. The present study identified care gaps extending beyond underdiagnosis, as we found many patients with high-risk CKD did not receive guideline-recommended nephrology care. Together, these findings underscore the need for increased awareness of the indications for UACR testing as well as identification of CKD and appropriate referral based on the test results.
Reasons for low UACR testing are likely multifactorial. Higher UACR testing rates among patients with diabetes compared to those without diabetes has been consistently documented [11,12], and may relate to national quality metrics and clinical practice guidelines from the American Diabetes Association which recommend annual UACR testing for patients with diabetes [18,19]. In contrast, recommendations for UACR testing among patients with hypertension have been less consistent. The 2017 American College of Cardiology/ American Heart Association hypertension guidelines include UACR in a list of "optional" testing; however, in the same guideline, the choice of antihypertensive therapy depends on presence/absence of albuminuria [20]. Since these guidelines were published, the availability of therapies, such as SGLT2 inhibitors that have shown overwhelming kidney and cardiovascular benefits in albuminuric CKD, has made UACR testing even more imperative irrespective of diabetes status [9]. Detection of albuminuria by UACR testing affords early detection of CKD and thus early initiation of these therapies, when their preventive benefit can be maximized. Of note, the majority of patients with CKD solely defined by albuminuria category are managed in the primary care setting with the dual goal of optimizing therapy to prevent CVD and CKD progression. Consequently, increasing primary care awareness of the prognostic and therapeutic implications of UACR testing is essential for optimal CKD care and preventing adverse cardiorenal outcomes. Efforts to improve awareness and evidence-based care delivery for CKD are underway. In the US, the Advancing American Kidney Health Executive Order outlined goals for prevention, detection, and treatment of CKD in addition to a CKD awareness campaign to improve public knowledge of CKD and its risk factors [21]. There is also increasing recognition of a role for well-designed quality metrics relevant to CKD care, as most existing metrics for nephrology relate to dialysis care [22,23]. Updated clinical practice guidelines may also increase awareness of the need for UACR testing. For example, the 2021 National Institute for Health and Care Excellence (NICE) CKD guideline recommends risk-based nephrology referral using the Kidney Failure Risk Equation (KFRE), a prediction model that requires both eGFR and UACR as input variables [1,24,25]. A study of current practice in the US examining KFREpredicted risk and nephrology care found nearly half of patients with identifiably high kidney failure risk had not been seen by a nephrologist [26]. However, in that study, the KFRE could not be calculated in nearly 75% of patients with CKD due to missing UACR. Thus, strategies to improve UACR testing among at-risk patients are also needed to facilitate health services research and care delivery surveillance efforts.
Strengths of our study include the large, multi-year population of patients with CKD in primary care from across the US. The use of claims rather than electronic health record data allows capture of nephrology encounters across different health systems. Limitations include our inability to identify referrals to nephrology that were requested but had not yet occurred. Generalizability of commercial and Medicare Advantage data to other populations may be limited. We used the 2021 CKD-EPI equation for eGFR, which does not necessarily reflect eGFR values available to clinicians during the study period, when both the 2009 CKD-EPI and Modification of Diet in Renal Disease equations were in widespread use by different laboratories [27]. Causal relationships between albuminuria and nephrology care cannot be ascertained due to the observational design.
Conclusions
In a large population of primary care patients with CKD, only half of patients meeting laboratory criteria for nephrology referral were seen by a nephrologist. Underutilization of albuminuria testing may be a barrier to identifying primary care patients at elevated kidney failure risk who may warrant nephrology referral.
Fig. 3 Odds Ratios and 95% Confidence Intervals for Nephrology Care by Albuminuria Category Stratified by eGFR Category. Odds ratios are adjusted for age, sex, race/ethnicity, diabetes, heart failure, and coronary artery disease. Abbreviations: CI = confidence interval; eGFR = estimated glomerular filtration rate; OR = odds ratio; UACR = urine albumin/creatinine ratio
|
2022-11-26T14:44:44.730Z
|
2022-11-24T00:00:00.000
|
{
"year": 2022,
"sha1": "7d5d127d04b512a2738f97cfac984fb3ec717457",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "7d5d127d04b512a2738f97cfac984fb3ec717457",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
254342680
|
pes2o/s2orc
|
v3-fos-license
|
Transcriptomic and Metabolomic Analyses Reveal That Fullerol Improves Drought Tolerance in Brassica napus L.
Carbon nanoparticles have potential threats to plant growth and stress tolerance. The polyhydroxy fullerene—fullerol (one of the carbon nanoparticles) could increase biomass accumulation in several plants subjected to drought; however, the underlying molecular and metabolic mechanisms governed by fullerol in improving drought tolerance in Brassica napus remain unclear. In the present study, exogenous fullerol was applied to the leaves of B. napus seedlings under drought conditions. The results of transcriptomic and metabolomic analyses revealed changes in the molecular and metabolic profiles of B. napus. The differentially expressed genes and the differentially accumulated metabolites, induced by drought or fullerol treatment, were mainly enriched in the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways related to carbohydrate metabolism (e.g., “carbon metabolism” and “galactose metabolism”), amino acid metabolism (e.g., “biosynthesis of amino acids” and “arginine and proline metabolism”), and secondary metabolite metabolism (e.g., “biosynthesis of secondary metabolites”). For carbohydrate metabolism, the accumulation of oligosaccharides (e.g., sucrose) was decreased, whereas that of monosaccharides (e.g., mannose and myo-inositol) was increased by drought. With regard to amino acid metabolism, under drought stress, the accumulation of amino acids such as phenylalanine and tryptophan decreased, whereas that of glutamate and proline increased. Further, for secondary metabolite metabolism, B. napus subjected to soil drying showed a reduction in phenolics and flavonoids, such as hyperoside and trans-3-coumaric acid. However, the accumulation of carbohydrates was almost unchanged in fullerol-treated B. napus subjected to drought. When exposed to water shortage, the accumulation of amino acids, such as proline, was decreased upon fullerol treatment. However, that of phenolics and flavonoids, such as luteolin and trans-3-coumaric acid, was enhanced. Our findings suggest that fullerol can alleviate the inhibitory effects of drought on phenolics and flavonoids to enhance drought tolerance in B. napus.
Introduction
Carbon-based nanomaterials such as fullerene, graphene, single-walled carbon nanotubes, and multi-walled carbon nanotubes are the most commonly used nanomaterials [1]. The unique physical, chemical, and mechanical properties of carbon nanotubes can provide solutions to various biological problems, particularly in the fields of biotechnology, medicine, pharmaceuticals, and agriculture [1,2]. The extensive production and application of carbon-based nanomaterials increases the chances of their release into biological cycles. Plants are a prominent part of the ecosystem and may act as a potential path for the uptake, translocation, and accumulation of nanoparticles into food chains; the environment is considered to comprise a large biomass that encounters released engineered nanomaterials [3]. Therefore, understanding plant responses to carbon nanomaterial exposure could open up new frontiers in agriculture, where continuous innovation is highly needed to guarantee global food security and address environmental challenges. Drought is considered the most important environmental factor that limits crop growth and productivity worldwide [4]. It is important to raise environmental awareness and improve plant drought tolerance to sustainably enhance crop quality. A number of carbon nanomaterials are being investigated for use in agriculture to increase crop productivity and protect crops from drought stress; one of the most investigated carbon nanomaterials is fullerene [5]. Some studies have reported positive effects of fullerene application on plant growth in crop plants under osmotic stress [5][6][7]. Fullerol (one of the water-soluble derivatives of fullerene) treatment at a concentration of 14 mg L−1 enhanced root growth in barley under 75 mM NaCl [6]. Fullerol treatment increased the leaf and root fresh weight in drought-treated sugar beets [8]. Exogenous fullerol administration by seed priming or foliar application stimulated growth in water-stressed Brassica napus [9].
B. napus is an important oilseed crop worldwide, and drought can impair its growth and grain yield [10]. Exploring the use of chemicals to increase drought tolerance is vital for the production of B. napus. Our previous work found that fullerol could promote drought tolerance in B. napus at the physiological level [9]. However, the effects of fullerol on drought resistance in B. napus at the molecular and metabolic levels are still unknown. RNA sequencing (RNA-seq) is a critical and suitable tool for gene expression analysis, using deep-sequencing technologies with high accuracy and sensitivity [11]. It is broadly applied to track transcriptomic variation in plants, in response to abiotic and biotic stresses. Moreover, metabolomic analysis provides valuable information on system-wide changes in plant metabolism, and allows for the identification of compounds with key roles in plant stress tolerance [12][13][14].
In this study, fullerol was applied to the leaves of seedlings subjected to drought stress in B. napus. We combined transcriptomic and metabolomic analyses to identify differences in gene transcript levels and metabolites, between non-fullerol-treated and fullerol-treated groups under water deficit conditions. We hypothesized that substantial differential gene expression and accumulation of differential metabolites existed in the fullerol-treated group, in comparison with the control group, under drought conditions. The aim of the present study was to determine whether fullerol affected drought tolerance at the molecular and metabolomic levels in B. napus.
Aboveground Biomass and Leaf Relative Water Content
Our previous work showed that water shortage significantly decreased the aboveground dry weight, as well as leaf relative water content (RWC) [9]. The drought-triggered decrease in aboveground biomass and leaf RWC were dramatically reversed by foliar application of fullerol with different concentrations (1, 10, and 100 mg L −1 ) [9]. Of these, the most effective concentration of fullerol was 100 mg L −1 [9]. Compared with leaves subjected to drought alone, those subjected to drought supplement with 100 mg L −1 fullerol treatment showed 35% and 25% increase in the aboveground dry weight and leaf RWC, respectively [9] (Figure 1). Because the most effective impact of fullerol on B. napus seedling subjected to soil drying was at the concentration of 100 mg L −1 , we chose the leaves of B. napus treated with 100 mg L −1 fullerol to conduct transcriptomic and metabolic analyses.
Transcriptomic Analysis
2.2.1. Analysis of Differentially Expressed Genes (DEGs)
Leaf tissues from B. napus under check (CK, sufficient water condition), drought (D), and drought with fullerol (D + F) treatments were obtained to construct three libraries for sequencing. From each of the three libraries, 54 to 64 million raw reads and 52 to 62 million clean reads were produced (Table S1). Approximately 84% of high-quality reads for each sample were mapped to a reference genome. Moreover, more than 46,000 transcripts with FPKM > 1 were identified in each library.
As shown in Figure 2, the comparison of different treatments identified 11,920, 7031, and 1222 DEGs in the pairs D vs. CK, D + F vs. CK, and D + F vs. D, respectively. Among them, 5917, 3494, and 529 genes were down-regulated and 6003, 3537, and 693 genes were up-regulated, respectively. In addition, 5968 DEGs were commonly regulated in D vs. CK and D + F vs. CK. A heat map generated from the hierarchical clustering of DEGs is shown in Figure 3. The up-regulated and down-regulated genes between water and/or fullerol treatments are indicated by the hierarchical clustering analysis. The expression pattern of DEGs in the D + F group was very similar to that of the D group, especially in the middle region of the heat map. In contrast, the expression pattern of DEGs in the D + F group was similar to that of the CK group at the end region of the heat map, which indicated that fullerol treatment reversed the inhibitory effect of drought on B. napus at the transcript level.
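As a hedged illustration of how DEG tables such as those summarized in Figure 2 are typically derived from per-gene statistics, a small pandas sketch is given below; the thresholds and gene identifiers are assumptions, since the exact cut-offs and software are not stated in this excerpt.

```python
# Hedged sketch (pandas) of typical DEG calling from per-gene statistics; the thresholds
# below are illustrative assumptions, not the paper's stated cut-offs.
import pandas as pd

def call_degs(table: pd.DataFrame, lfc_cut: float = 1.0, fdr_cut: float = 0.05) -> pd.DataFrame:
    """Label genes as up- or down-regulated from log2 fold change and adjusted p-value."""
    degs = table[(table["padj"] < fdr_cut) & (table["log2fc"].abs() >= lfc_cut)].copy()
    degs["direction"] = degs["log2fc"].apply(lambda v: "up" if v > 0 else "down")
    return degs

# Toy comparison, e.g. D vs. CK (gene identifiers are made up).
toy = pd.DataFrame({
    "gene": ["BnaA01g001", "BnaA01g002", "BnaC03g010", "BnaC07g120"],
    "log2fc": [2.3, -1.8, 0.4, -0.2],
    "padj": [0.001, 0.01, 0.2, 0.6],
})
print(call_degs(toy)["direction"].value_counts())
```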
Figure 1. Aboveground biomass (a) and leaf relative water content (b) in leaves of B. napus treated with different fullerol treatments (0 and 100 mg L−1 F) and water gradients (CK: check, sufficient water condition; D: drought). Values are the means of three replicates ± standard error. The different letters in each subfigure indicate significant differences between water or fullerol treatments. Data were adapted from Xiong et al. [9].
Functional Analysis by Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG)
In drought vs. well-watered condition, 2687 GO terms were enriched in B. napus plants. Of these, the up-regulated genes induced by water deficit were significantly assigned to GO terms such as "peptide biosynthetic/metabolic process", "organic substance biosynthetic process", and "nitrogen compound metabolic process" (Figure 4a). The down-regulated genes induced by drought treatments were markedly assigned to GO terms such as "protein serine/threonine kinase activity", "protein kinase activity", and "transport" (Figure 4b).
In the drought with fullerol group vs. the well-watered group, DEGs were enriched in 2287 GO terms. The up-regulated genes triggered by drought with fullerol treatment were dramatically assigned to GO terms such as "ribosome", "peptide biosynthetic process", "structural molecule activity", and "macromolecule biosynthetic process" (Figure 4c). The down-regulated genes caused by the drought with fullerol treatment were significantly enriched in GO terms such as "phosphorylation", "protein serine/threonine kinase activity", "protein phosphorylation", and "phosphate-containing compound metabolic process" (Figure 4d).
The DEGs between the D + F group and the D group were analyzed and assigned to 1099 GO terms. Among them, the up-regulated genes caused by fullerol treatment under drought stress were mainly assigned to GO terms such as "organic cyclic compound catabolic process", "cellulose synthase activity", "glutamine biosynthetic process", and "phenylpropanoid metabolic process" (Figure 4e). The down-regulated genes were assigned to GO terms such as "amino acid kinase activity", "glutamate-5-semialdehyde dehydrogenase activity", "proline metabolic process", "carbohydrate metabolic process", and "single-organism metabolic process" (Figure 4f).
Compared with well-watered conditions, 119 KEGG pathways were enriched under drought conditions. The up-regulated genes were significantly enriched in KEGG pathways such as "biosynthesis of amino acids", "2-Oxocarboxylic acid metabolism", "carbon fixation in photosynthetic organisms", and "arginine and proline metabolism" (Figure S1a). The down-regulated genes were enriched in KEGG pathways such as "amino sugar and nucleotide sugar metabolism", "starch and sucrose metabolism", and "plant hormone signal transduction" (Figure S1b).
In the drought with fullerol treatment vs. sufficient water condition, 116 KEGG pathways were enriched in the leaves of B. napus seedlings. The up-regulated genes were significantly assigned to KEGG pathways such as "biosynthesis of amino acids", "galactose metabolism", "arginine and proline metabolism", and "tryptophan metabolism" (Figure S1c). The down-regulated genes were significantly enriched in KEGG pathways such as "phosphatidylinositol signaling system", "fatty acid biosynthesis", "arginine and proline metabolism", and "starch and sucrose metabolism" (Figure S1d).
The DEGs induced by fullerol under water stress were assigned to 93 KEGG pathways, compared to drought alone. Among them, the up-regulated genes were assigned to KEGG pathways such as "starch and sucrose metabolism", "biosynthesis of amino acids", "biosynthesis of secondary metabolites", "flavonoid biosynthesis", and "phenylalanine metabolism" (Figure S1e). The down-regulated genes were enriched in KEGG pathways such as "starch and sucrose biosynthesis", "biosynthesis of amino acids", "arginine and proline metabolism", and "biosynthesis of secondary metabolites" (Figure S1f).
Quantitative Real-Time (qRT)-PCR
We conducted qRT-PCR to validate the RNA-seq data and analyze gene expression changes of randomly selected genes. These selected genes that were orthologous to genes in Arabidopsis thaliana were mainly associated with carbohydrate metabolism (GAPC, PME3, NADP-ME2, PGL1, BXL5, SPS2, and GAE6) and drought response (GPX1, GSTF3, APX1, GLN1-1, GLN1-4, P5CS1, and P5CS2) (Figure 5). Although the expression levels of selected genes in D vs. CK, D + F vs. CK, or D + F vs. D were different between RNA-seq and qRT-PCR, the expression patterns of DEGs obtained from RNA-seq were similar to those of genes obtained from qRT-PCR (Figure 5).
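As a rough illustration of how agreement between the two platforms can be quantified, the sketch below correlates log2 fold changes from RNA-seq against those from qRT-PCR. The gene names and fold-change values are placeholders, not data from this study, and Pearson correlation is only one common way to make the comparison; the text reports similarity of expression patterns without specifying a statistic.

```python
# Illustrative check of RNA-seq vs. qRT-PCR agreement using Pearson correlation.
# Gene names and log2 fold-change values are hypothetical, not data from this study.
from scipy.stats import pearsonr

genes = ["GPX1", "GSTF3", "APX1", "P5CS1", "P5CS2", "SPS2", "GAE6"]
log2fc_rnaseq = [1.8, 0.9, 1.2, 2.5, 2.1, -1.4, -0.7]   # hypothetical D vs. CK values
log2fc_qpcr   = [1.5, 1.1, 0.9, 2.9, 1.8, -1.1, -0.5]   # hypothetical D vs. CK values

r, p = pearsonr(log2fc_rnaseq, log2fc_qpcr)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")  # a high r would indicate consistent expression patterns
```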
Data Integration/Comprehensive Networks of Transcripts and Metabolites
With the aim of characterizing the progression of fullerol in B. napus in response to drought, we performed transcriptomic and metabolic data integration based on common KEGG pathways. Data integration detected the most important biological processes on the basis of KEGG pathway under water stress or fullerol treatments in B. napus.
Using combined transcriptomic and metabolomic data, we found that DEGs and differentially accumulated metabolites were commonly enriched in 48, 43, and 36 KEGG pathways in D vs. CK, D + F vs. CK, and D + F vs. D, respectively (Supplementary Materials Files S7-S9). Among them, KEGG pathways such as "biosynthesis of secondary metabolites", "biosynthesis of amino acids", "carbon metabolism", and "galactose metabolism" were identified to have the most genes and metabolites (Supplementary Materials Files S7-S9) in D vs. CK, D + F vs. CK, and D + F vs. D groups. We also aimed to investigate flavonoid metabolism associated with antioxidant ability, in response to drought. Therefore, modifications at the transcriptomic and metabolomic levels were analyzed in detail for the following biochemical processes: carbohydrate metabolism, amino acid metabolism, and secondary metabolite metabolism. Partial genes related to these biochemical processes such as GLN1-1, GLN1-4, P5CS1, and P5CS2 were validated by qRT-PCR ( Figure 5).
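The integration step described above reduces, at its core, to intersecting the sets of enriched KEGG pathways obtained from the transcriptome and the metabolome for each comparison. A minimal sketch is shown below; the pathway names are examples taken from the text, and the real analysis would use the full enrichment tables (Supplementary Materials Files S7-S9).

```python
# Minimal sketch of transcript-metabolite integration: intersect KEGG pathways
# enriched from DEGs with those enriched from differential metabolites.
# Pathway names are examples only; real input would be full enrichment tables.
deg_pathways = {
    "Biosynthesis of secondary metabolites",
    "Biosynthesis of amino acids",
    "Carbon metabolism",
    "Starch and sucrose metabolism",
    "Flavonoid biosynthesis",
}
metabolite_pathways = {
    "Biosynthesis of secondary metabolites",
    "Biosynthesis of amino acids",
    "Carbon metabolism",
    "Galactose metabolism",
    "ABC transporters",
}

common = sorted(deg_pathways & metabolite_pathways)  # pathways shared by both data types
print(f"{len(common)} common KEGG pathways:")
for pathway in common:
    print(" -", pathway)
```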
Metabolic Analysis
A separation trend was observed among the sufficient water condition (CK), drought (D), and drought combined with fullerol (D + F) treatments using principal component analysis (PCA), indicating that drought and fullerol had an impact on the B. napus metabolism. The CK group and D group (Figure S2a), the CK group and D + F group (Figure S2b), and the D group and D + F group (Figure S2c) were distinguished. In addition, 77, 74, and 62 metabolites were significantly identified to be responsible for the separation in D vs. CK, D + F vs. CK, and D + F vs. D, respectively (Supplementary Materials Files S1-S3). Linking tentatively identified metabolites to biochemical pathways can aid in targeting key changes, as the constituents of a given pathway are likely to be co-regulated. Metabolites were mainly enriched in KEGG pathways such as "biosynthesis of secondary metabolites", "biosynthesis of antibiotics", "ABC transporters", "biosynthesis of amino acids", and "carbon metabolism" in all three groups (D + F vs. D, D + F vs. CK, and D vs. CK) (Supplementary Materials Files S4-S6).
Carbohydrate Metabolism
Under drought stress, the accumulation of oligosaccharides was inhibited, whereas that of monosaccharides was promoted. In the "starch and sucrose metabolism" pathway, the contents of oligosaccharides, including sucrose and maltose, were decreased in drought-treated plants compared to well-water-treated plants ( Figure 6). Genes encoding the sucrose biosynthetic enzymes, including probable sucrose-phosphate synthase (e.g., SPS1/SPS4) and sucrose synthase (e.g., SUS1/SUS5/SUS6), were partially down-regulated, while genes encoding the sucrose catabolic enzymes, including β-fructofuranosidase (e.g., CWINV5) and acid β-fructofuranosidase (e.g., BFRUCT4), were partially up-regulated by water deficit (Figure 6). The transcript abundances of enzymes involved in maltose biosynthesis, such as β-amylase (BAM2/BAM3) and 1,4-α-glucan-branching enzyme 2-1 (SBE2.1), were decreased by drought stress (Figure 6). In the "galactose metabolism" pathway, the galactinol and raffinose (oligosaccharide) contents were decreased by drought ( Figure 6). Among them, galactinol is the mediate product of oligosaccharides. Galactinol synthase (GOLS) can catalyze the conversion of UDP-galactose into galactinol, and the gene encoding galactinol synthase (e.g., GOLS3) was partially down-regulated in drought-treated plants ( Figure 6). The expression of genes encoding raffinose biosynthetic enzymes, including probable galactinol-sucrose galactosyltransferase (e.g., RFS5) and α-galactosidease 1 (AGAL1), was partially down-regulated by drought ( Figure 6). Here, both the "starch and sucrose metabolism" and "galactose metabolism" pathways belong to carbohydrate metabolism. Other oligosaccharides including maltotriose, lyxose, and fucose were also decreased in plants undergoing water stress. The genes BAM2 and BAM3 related to maltotriose biosynthesis, as well as the gene encoding GDP-mannose 4, 6 dehydratase (MUR1) associated with fucose biosynthesis, were down-regulated in drought-exposed plants ( Figure 6). In contrast, water deficit improved the accumulation of monosaccharides and their derivates in B. napus. Drought caused increases in the contents of mannose and glycerate (monosaccharide). The expression of genes encoding hexokinase (HXK1 and HXK3), related to mannose catabolism, were down-regulated by drought ( Figure 6). The expression of genes encoding glycerate dehydrogenase (e.g., HPR2) and D-3-phosphoglycerate dehydrogenase (e.g., PGDH2) associated with glycerate catabolism were partially down-regulated in B. napus under drought ( Figure 6). Water deficit led to an increment in the content of gluconate, which is a derivate of glucose (monosaccharide). Myo-inositol, a monosaccharide-like substance, is a kind of soluble sugar alcohol whose content was elevated by drought. The phosphatase IMPL1 can catalyze the conversion of myo-inositol phosphate into myo-inositol, and the expression of the gene IMPL1 was up-regulated under drought ( Figure 6).
Drought with fullerol treatment decreased the accumulation of oligosaccharides (e.g., sucrose, maltotriose, raffinose, fucose, lyxose, and galactinol) and increased the accumulation of monosaccharides (e.g., glucose, glycerate, mannose, and myo-inositol), compared to sufficient water conditions (Figure 6). At the transcript level, for oligosaccharides, drought addition with fullerol up-regulated the expression of the gene encoding alpha-glucosidase (GAA), associated with sucrose catabolism, and down-regulated the expression level of the gene (GOLS3) related to galactinol synthase, in comparison with the well-watered condition (Figure 6). For monosaccharides, the expression of genes (HXK1 and HXK3) related to mannose catabolism, and the expression of the gene encoding myo-inositol oxygenase 2 (MIOX2), which catalyzes myo-inositol catabolism, were down-regulated in the D + F vs. CK group (Figure 6).
In drought supplementation with the fullerol group vs. drought alone (D + F vs. D), the accumulation of monosaccharides was changed, while the accumulation of oligosaccharides showed almost no change ( Figure 6). The contents of the derivate of galactose, UDPgalactose, and the derivate of glucose, gluconate, were reduced in the fullerol treatment in B. napus under water deficit, where the galactose and glucose belong to monosaccharides ( Figure 6). Galactinol synthase 3 (GOLS3) can catalyze the conversion of UDP-galactose into galactinol, and drought with fullerol triggered a high expression of gene GOLS3, compared to drought alone ( Figure 6). In contrast, another galactose derivative, galactarate, was increased by fullerol in drought-treated B. napus. However, the glycerate content was lower, and the transcript level of the gene PGDH2 associated with glycerate catabolism was higher, in fullerol-treated plants than in non-fullerol-treated plants under drought treatment ( Figure 6). encoding hexokinase (HXK1 and HXK3), related to mannose catabolism, were down-regulated by drought ( Figure 6). The expression of genes encoding glycerate dehydrogenase (e.g., HPR2) and D-3-phosphoglycerate dehydrogenase (e.g., PGDH2) associated with glycerate catabolism were partially down-regulated in B. napus under drought ( Figure 6). Water deficit led to an increment in the content of gluconate, which is a derivate of glucose (monosaccharide). Myo-inositol, a monosaccharide-like substance, is a kind of soluble sugar alcohol whose content was elevated by drought. The phosphatase IMPL1 can catalyze the conversion of myo-inositol phosphate into myo-inositol, and the expression of the gene IMPL1 was up-regulated under drought ( Figure 6). In the tricarboxylic acid (TCA) cycle (involved in carbohydrate metabolism), water stress decreased the content of cis-aconitate and increased the content of malate, compared to well-watered conditions ( Figure 6). The expression of biosynthetic genes ACO1/ACO2/ACO3 (encoding aconitase) and the expression of catabolic gene CICDH (encoding isocitrate dehydrogenase) for cis-aconitate were inhibited by drought ( Figure 6). The expression of the gene encoding malate dehydrogenase 1 (MDH1) which catalyzes the conversion of oxaloacetate to malate, and the expression of gene encoding NADP-malic enzyme 1 (NADP-ME1) which can degrade malate, were up-regulated by water deficit (Figure 6). In addition, in comparison with sufficient water conditions, drought addition with fullerol treatment caused a reduction in the content of cis-aconitate and down-regulated its biosynthetic genes ACO2/ACO3. There was no significant difference in the malate content between well-watered treatment and drought with fullerol treatment (Figure 6).
Compared with the drought alone, in the TCA cycle, the citrate content was increased, and the malate content was decreased in the drought supplementation with fullerol treatment ( Figure 6). Fullerol application increased the expression of the gene (ACLB-2) encoding ATP-citrate synthase beta chain protein 2, related to citrate biosynthesis, under drought stress ( Figure 6). The expression of the genes MDH1 and NADP-ME1 associated with malate synthesis was repressed by fullerol treatment under drought ( Figure 6).
Amino Acid Metabolism
Under drought stress, several amino acids derived from the shikimate pathway (mainly in the "biosynthesis of phenylpropanoids" pathway), including phenylalanine and tryptophan, were decreased in the leaves of drought-treated B. napus seedlings in comparison with well-water-treated plants (Figure 6). The arogenate dehydratase genes (e.g., ADT4/ADT5) associated with phenylalanine biosynthesis were partially down-regulated under drought (Figure 6). For tryptophan biosynthesis, plants exposed to a water deficit partially down-regulated the expression of the tryptophan synthase gene (e.g., TSB2) (Figure 6).
The biosynthesis of amino acids from the 2-oxoglutarate pathway (mainly in the "arginine and proline metabolism" pathway) had different patterns in response to drought: the levels of glutamate and proline were increased, while the ornithine content was decreased ( Figure 6). For glutamate, the gene encoding glutamate synthase 1 (GLT1), associated with glutamate biosynthesis, was up-regulated, while the genes encoding glutamine synthetase cytosolic isozyme (GLN1-1/GLN1-3/GLN1-4), related to glutamate catabolism, were down-regulated by drought-when compared to sufficient water conditions ( Figure 6). For proline, two delta-1-pyrroline-5-carboxylate synthase genes (P5CS1/P5CS2), related to proline synthesis, were increased, while genes (PRODH1/PRODH2) encoding proline dehydrogenase, associated with proline catabolism, were suppressed by water stress (Figure 6). In terms of ornithine biosynthetic genes, one acetylornithine deacetylase gene (argE) and one arginine biosynthesis bifunctional protein gene (ArgJ) were suppressed, while one aminoacylase-1B gene (Acyb1) was increased by drought ( Figure 6). Water shortage up-regulated the expression of the ornithine catabolism gene encoding ornithine carbamoyltransferase (OTC) ( Figure 6).
In drought with fullerol treatment vs. sufficient water condition (D + F vs. CK), the accumulation of amino acids in the leaves of B. napus seedlings was changed. For amino acids derived from the shikimate pathway, drought addition with fullerol treatment did not change the content of phenylalanine, but showed a reduction in tryptophan content as well as partially repressed the expression of the tryptophan synthetic gene (e.g., TSB2), when compared to well-watered conditions ( Figure 6). For amino acids derived from the 2-oxoglutarate pathway, drought with fullerol treatment had almost no effects on the contents of glutamate and proline, but increased the ornithine content and up-regulated the expression of gene Acyb1, related to ornithine biosynthesis, in comparison with sufficient water conditions ( Figure 6). For amino acids derived from phosphoribosyl pyrophosphate, in the D + F vs. CK group, the content of histidine was decreased and the expression of the gene encoding histidinol dehydrogenase (HDH), which oxidizes histidinol to histidine, was down-regulated ( Figure 6).
Compared with drought alone, drought with fullerol treatment had no impacts on amino acids from the shikimate pathway, except for N-acetyl-phenylalanine ( Figure 6). The N-acetyl-phenylalanine level and its biosynthetic gene CORI3 were suppressed by fullerol under water stress ( Figure 6). The contents of amino acids (glutamate, proline, and arginine) derived from the 2-oxoglutarate pathway were decreased in fullerol-treated plants, in comparison with non-fullerol-treated plants, under drought. At the transcript levels, for glutamate, one catabolic gene encoding probable glutamate dehydrogenase 3 (GSH3) and two catabolic genes (GLN1-4/GLN1-1) were up-regulated in fullerol-treated plants under drought. With regard to proline, two biosynthetic genes P5CS1/P5CS2 were suppressed by fullerol in B. napus subjected to soil drying ( Figure 6). However, exogenous fullerol decreased the amino acids from phosphoribosyl pyrophosphate, including histidine and histamine, under drought ( Figure 6). The transcript level of the gene encoding ATP phosphoribosyltransferase 1 (HISN1A), related to histidine synthesis, was lower and the transcript level of gene encoding serine decarboxylase (SDC), associated with histidine catabolism, was higher, in fullerol-treated plants than non-fullerol-treated plants under water shortage conditions ( Figure 6).
Secondary Metabolite Metabolism
Under drought stress, plants can produce a variety of secondary metabolites; of these, phenolics and flavonoids are substances with antioxidant capacity. B. napus plants undergoing drought stress had a reduction in most detected phenolics and flavonoids, including hyperoside, peonidin 3-O-glucoside cation, quercetin 3′-methyl ether, 3-hydroxy-4-methoxycinnamic acid, and trans-3-coumaric acid, compared to sufficient water conditions (Supplementary Materials File S1 and Figure 6). Of these, trans-3-coumaric acid can be mapped to "phenylalanine metabolism" of the KEGG pathway (Supplementary Materials File S7 and Figure 6).
Compared with the well-watered conditions, drought with fullerol treatment also decreased the contents of phenolics and flavonoids, including peonidin 3-O-glucoside cation, quercetin 3′-methyl ether, 3-hydroxy-4-methoxycinnamic acid, and trans-3-coumaric acid (Supplementary Materials File S2 and Figure 6). The values of log2 fold change in most of the detected phenolics and flavonoids were higher in the D + F vs. CK group than in the D vs. CK group (Supplementary Materials Files S1 and S2).
Drought addition with fullerol led to an increase in the accumulation of most detected phenolics and flavonoids, including luteolin, rutin, chlorogenic acid, trans-3-coumaric acid, and 3-hydroxy-4-methoxycinnamic acid, compared to drought stress alone (Supplementary Materials File S2 and Figure 6). Among them, chlorogenic acid and luteolin can be mapped to "flavonoid biosynthesis" of the KEGG pathway (Supplementary Materials File S9 and Figure 6). The gene encoding flavonoid 3′-monooxygenase (CYP75B1), associated with luteolin biosynthesis, was up-regulated by fullerol in B. napus subjected to water deficit (Figure 6). Foliar application of fullerol elevated the expression of genes encoding phenylalanine ammonia-lyase (PAL1/PAL2) for trans-3-coumaric acid biosynthesis in drought-treated B. napus (Figure 6).
Discussion
B. napus is a critical oil crop grown worldwide, and water deficit poses a threat to its growth and yields. Fullerol is a small-sized carbon nanoparticle with high amounts of polyhydroxy fullerenes, exhibiting positive effects on B. napus under drought stress at the physiological level. However, the mechanisms of fullerol at the molecular and metabolic levels in B. napus in response to drought remain unclear. In this study, we used transcriptomic and metabolomic analyses to identify differentially expressed genes and differentially accumulated metabolites caused by drought or fullerol; consequently, the molecular and metabolic mechanisms of B. napus subjected to fullerol under drought were investigated.
When exposed to soil drying, plants can maintain basal metabolic activities through a series of molecular and biochemical adaptations. In this study, RNA-seq analysis showed that the expression profiles of a large number of DEGs in B. napus were altered by drought. Functional enrichment of these DEGs showed that drought triggered KEGG pathways such as "biosynthesis of amino acids", "carbon fixation in photosynthetic organisms", "arginine and proline metabolism", and "starch and sucrose metabolism". In addition, plants can dramatically accumulate metabolites under drought stress. Previous studies have shown that plants can accumulate several metabolites such as sugars, amino acids, organic acids, nucleotides and their derivatives, and phenolics and flavonoids to regulate intracellular osmotic pressure, and scavenge reactive oxygen species (ROS) in response to drought [15,16]. In the present study, metabolomic analysis revealed that drought stress induced a variety of metabolites such as sugars, amino acids, organic acids, and their derivatives, in B. napus subjected to drought. The KEGG pathway enrichment analysis revealed that the detected metabolites were mainly enriched in metabolic pathways related to "biosynthesis of amino acids", "biosynthesis of secondary metabolites", and "carbon metabolism". These results were consistent with those of previous studies conducted by Zhao et al. [15], Xiong et al. [17], and Vital et al. [18].
The foliar application of fullerol could induce genes and metabolites that were differentially expressed and differentially accumulated in B. napus under drought stress. The most enriched KEGG pathways from DEGs and metabolites induced by fullerol in B. napus under drought were similar to those of the drought control. These pathways were mainly concentrated in "starch and sucrose metabolism", "carbon metabolism", "galactose metabolism", "biosynthesis of amino acids", "arginine and proline metabolism", etc. Among them, "starch and sucrose metabolism", "carbon metabolism", and "galactose metabolism" were related to carbohydrate metabolism. KEGG pathways such as "biosynthesis of amino acids" and "arginine and proline metabolism" were associated with amino acid metabolism. Additionally, the antioxidant-related KEGG pathways such as "flavonoid biosynthesis" and "phenylalanine metabolism" were enriched. Therefore, by comparing the transcriptome and metabolome results, we concluded that fullerol mainly affected the KEGG pathways related to carbohydrate metabolism, amino acid metabolism, and secondary metabolite metabolism, at the molecular and metabolic levels in B. napus under drought. Below, we explore the mechanisms by which fullerol affects drought adaptation in B. napus in terms of these three biochemical processes.
Carbohydrate Metabolism
Our previous studies indicated that dry matter (carbohydrate) accumulation in B. napus was reduced by drought [9]. The present study further revealed that water deficit decreased the contents of oligosaccharides (related to dry matter accumulation), and down-regulated the expression of genes associated with oligosaccharide biosynthesis (or up-regulated the expression of genes involved in oligosaccharide catabolism). Metabolomic analysis showed that the contents of oligosaccharides including sucrose, fucose, raffinose, and maltose were decreased by drought. Transcriptome analysis supported the metabolomic results and indicated that water shortage depressed the expression of several genes associated with the biosynthesis of oligosaccharides, such as the sucrose synthase genes SPS1/SPS4 and the raffinose synthesis genes RFS5/AGAL1. Several studies were consistent with our results. For example, Rahman et al. [19] showed that the contents of sucrose and raffinose in wheat were decreased under post-anthesis drought stress. Drought reduced the sucrose concentration in soybean [20]. In contrast, water shortage improved the accumulation of monosaccharides, and up-regulated the expression of genes related to monosaccharide biosynthesis (or down-regulated genes involved in monosaccharide catabolism). The contents of the monosaccharides, including mannose and glycerate, as well as the content of the monosaccharide analogue myo-inositol, were increased in leaves of B. napus under drought stress. Related genes such as the myo-inositol synthesis gene, IMPL1, were also increased by water deficit. Previous studies agreed with these results, and Mutwakil et al. [21] reported that a sharp increase in myo-inositol was found in Calotropis procera subjected to salt and drought stress. The glycerate level was elevated in both Ulmus minor Mill. and Quercus ilex L. seedlings under drought [22]. These findings were in accordance with Rodríguez-Calcerrada et al. [23], who stated that simple sugars and sugar alcohols presented a significant increase, whereas compound sugars (e.g., sucrose) decreased or did not change under severe drought stress conditions. The oligosaccharides are energetic and structural substances that can serve as carbon sources for plant growth and development [24,25], while the monosaccharides can act as stress regulators for drought adaptation in plants [26,27]. For example, mannose can be involved in osmoregulation as a low molecular sugar in plants [26]. Myo-inositol can serve as an important stress regulator, both as a key metabolite to regulate osmotic balance and to scavenge ROS [27]. Under drought stress, oligosaccharides can be broken down into monosaccharides with lower molecular weights to increase the osmotic potential of cells [24,25]. Therefore, we can speculate that B. napus exposed to drought may decompose oligosaccharides into low-molecular sugars such as monosaccharides, which can enhance osmotic adjustment capacity, and even scavenge ROS for adaptation to drought.
Under drought stress, exogenous application of fullerol resulted in almost no changes in the oligosaccharide contents. For monosaccharides, fullerol exhibited inconsistent changes in a few monosaccharides, including glycerate and the derivatives of galactose, in drought-treated plants. As an example, the changes in the derivatives of galactose caused by fullerol under drought were different: decreasing the content of UDP-galactose, and increasing the content of galactarate. These results implied that fullerol may not induce the accumulation of monosaccharides to enhance osmotic adjustment capacity in B. napus under drought.
Amino Acid Metabolism
Amino acid metabolism is the main component of nitrogen metabolism, and in this study, drought mainly affected the accumulation of amino acids derived from the 2-oxoglutarate and shikimate pathways. For the 2-oxoglutarate pathway, our study found that water deficit induced glutamate and proline accumulation, but decreased ornithine formation, which agreed with the findings of Hatzig et al. [28]. The expression of genes encoding proline synthesis (P5CS1/P5CS2) was up-regulated, and the expression of genes encoding proline catabolism (PRODH1/PRODH2) was down-regulated by drought. Water stress also up-regulated the expression of the ornithine catabolism gene (OTC). Among them, proline is an important osmotic adjustment substance and ROS scavenger, and its accumulation can help to maintain water in plants under osmotic stress, and regulate the redox status of cells [29,30]. In this study, the increase in proline may help to maintain water potential and scavenge ROS in B. napus, in response to drought. In contrast, ornithine was found to be decreased under drought. This may be due to the fact that glutamate is a common precursor for proline and ornithine synthesis. In comparison with ornithine, proline synthesis from glutamate is predominant under stress conditions, and the high requirements for proline synthesis may limit ornithine synthesis [28].
For the shikimate pathway, drought decreased the contents of phenylalanine and tryptophan. The genes ADT4/ADT5, which are associated with the biosynthesis of phenylalanine, and the gene TSB2, which is related to tryptophan biosynthesis, were downregulated in B. napus subjected to drought. Although most studies pointed out that the accumulation of phenylalanine and tryptophan can be enhanced by water deficit [31,32], several studies showed opposite results, and agreed with our findings. Khan et al. [33] reported that a decrease in phenylalanine level was found in chickpea exposed to drought. Here, phenylalanine is a biosynthetic precursor of phenolics and flavonoids, which can play an antioxidant role in the plant defense system [34,35]. The reduction in phenylalanine under drought stress implies a deficiency of the biosynthetic precursor of phenolics and flavonoids, which may lead to a decrease in phenolics and flavonoids. For tryptophan, we found that drought decreased tryptophan content. Ghorbanpour et al. [36] supported this result, and pointed out that tryptophan was reduced in barley under moderate and severe drought stress. Osmotic stress also decreased tryptophan levels in Arabidopsis [37]. Tryptophan is an important precursor for the biosynthesis of auxin in plants [37], and its reduction implies insufficient auxin secretion and growth restriction, consistent with our previous findings that drought reduced biomass accumulation in B. napus [9].
When exposed to drought, fullerol treatment had no effects on the accumulation of phenylalanine and tryptophan, but reduced the contents of glutamate, proline, and arginine in B. napus seedlings. This metabolic result was consistent with the RNA-seq result. For example, drought with fullerol treatment up-regulated the expression of the glutamate catabolic genes GSH3/GLN1-4/GLN1-1, and down-regulated the expression of the proline biosynthetic genes P5CS1/P5CS2, in comparison with drought alone. The reduction in proline, caused by fullerol under drought, is probably due to the fact that fullerol treatment can increase the leaf RWC, which means that plants do not need to biosynthesize proline for osmotic adjustment; thus plants can invest more in the photosynthetic response or other drought tolerance pathways. In addition, the contents of glutamate and proline in the exogenous fullerol with drought treatment were similar to those observed in the well-watered treatment, suggesting that the application of fullerol may reduce the requirement for plants to synthesize proline in response to drought.
It is worth noting that our previous work reported that fullerol treatment enhanced the leaf RWC under drought stress [9]. However, in this study, we found that exogenous fullerol did not accumulate the monosaccharides and specific amino acids, such as proline, in response to drought. The improvement in leaf RWC by fullerol may be because fullerol is able to serve as an additional intercellular water supply, rather than because of the accumulation of monosaccharides and specific amino acids to maintain water potential in leaves of B. napus under soil drying [8,9].
Secondary Metabolite Metabolism
Drought stress induces oxidative stress in plants to produce ROS, leading to membrane lipid peroxidation, protein denaturation, and DNA damage. The plants can reduce free radical damage in cells by increasing antioxidants. The phenylpropane metabolic pathway is a key metabolic pathway for secondary metabolites in plants [38]. The phenolics and flavonoids produced by the phenylpropane metabolic pathway are typical natural antioxidants in plants that resist environmental stresses [39,40]. Some studies have shown that drought increased the accumulation of phenolics and flavonoids, which helped to reduce ROS in plants during drought [41,42]. However, in the present study, we found that water deficit decreased the contents of phenolics and flavonoids such as hyperoside, peonidin 3-O-glucoside cation, quercetin 3′-methyl ether, 3-hydroxy-4-methoxycinnamic acid, and trans-3-coumaric acid. Among them, trans-3-coumaric acid is involved in the phenylpropanoid metabolic pathway. Several studies supported this result, and Hernández et al. [43] showed that flavanols, including epicatechin gallate and epigallocatechin gallate, were reduced by drought stress in tea. A reduction in the concentration of rutin (flavonoid) was found in leaves of Bupleurum chinense DC exposed to water deficit [44]. Our results also showed that phenylalanine, one of the biosynthetic precursors of phenolics and flavonoids, was inhibited by drought, which may lead to a reduction in phenolics and flavonoids. In this study, the reduction in phenolics and flavonoids may be because these substances in the leaves are consumed to maintain primary metabolic functions, such as monosaccharides or proline, during drought stress [44].
Under drought stress, exogenous application of fullerol increased the contents of phenolics and flavonoids such as luteolin, trans-3-coumaric acid, chlorogenic acid, and 3-hydroxy-4-methoxycinnamic acid. Among them, chlorogenic acid and luteolin are involved in the "flavonoid biosynthesis" pathway. Additionally, fullerol elevated the expression of related biosynthetic genes under drought conditions. As an example, genes (PAL1/PAL2) encoding enzymes related to the synthesis of trans-3-coumaric acid were up-regulated by fullerol under drought. Furthermore, the levels of phenolics and flavonoids in the fullerol with drought treatment remained lower than in the well-watered treatment. These findings indicated that fullerol alleviated the inhibitory effects of drought on the accumulation of phenolics and flavonoids.
Plant Materials and Growth Conditions
This experiment was conducted in a controlled growth chamber with a 14 h photoperiod (07:00-21:00 h BST) and a day/night temperature of 25/18 °C, in the Experimental Station of the Oil Crops Research Institute, Chinese Academy of Agricultural Sciences, Wuhan, China. We selected uniform B. napus seeds of the Zhongshuang 11 genotype, surface-sterilized the seeds using 0.2% HgCl2 for 10 min, and washed them with distilled water. A mixture of a loamy clay soil and vermiculite (soil:vermiculite = 2:1, v/v) (1 kg) was used to fill each plastic container. We sowed eight seeds and thinned them to four seedlings in every pot. For the initial 20 days after sowing (DAS), all pots were watered daily by weight to maintain the soil water content (SWC) at 75-80% of field capacity (FC). Then, two water treatments were performed: (1) plants were maintained at 80% FC daily; and (2) pots were controlled at 80% FC during the initial 20 days, and then the SWC was reduced to 30% FC at 25 DAS. A small amount (5 mL) of distilled water (0 mg L−1 fullerol) or 100 mg L−1 fullerol (C60(OH)27, purity >99.9%) was applied to the leaves of seedlings in each pot every other day during 21 to 25 DAS. Fullerol synthesized from fullerene C60 using the O2/NaOH approach, according to the method of Li et al. [45], was purchased from Suzhou Dade Nanotechnology Co. Ltd., Suzhou, China. The treatment combinations were as follows: sufficient water condition + 0 mg L−1 fullerol (Check, CK), drought + 0 mg L−1 fullerol (D), and drought + 100 mg L−1 fullerol (D + F). The third leaves of seedlings in the three treatments were sampled for RWC, RNA-seq, and metabolomic analyses at 25 DAS. The aboveground tissues for all treatments were collected at 30 DAS for the measurement of biomass.
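A minimal sketch of the gravimetric watering step is given below: the target pot weight is computed for a desired fraction of field capacity and pots are watered back up to that weight. The dry substrate mass, pot tare, and water content at FC are hypothetical values used only for illustration; the study reports the FC fractions but not these bookkeeping details.

```python
# Hypothetical sketch of gravimetric watering: compute the target pot weight that
# corresponds to a given fraction of field capacity (FC). All numbers are illustrative.
def target_pot_weight(dry_soil_g, pot_tare_g, water_at_fc_g_per_g, fraction_of_fc):
    """Return the total pot weight (g) corresponding to the desired fraction of FC."""
    water_needed_g = dry_soil_g * water_at_fc_g_per_g * fraction_of_fc
    return pot_tare_g + dry_soil_g + water_needed_g

# Example: 1000 g dry substrate, 150 g pot, 0.35 g water per g substrate at FC
for frac in (0.80, 0.30):  # 80% FC (well watered) and 30% FC (drought)
    w = target_pot_weight(1000, 150, 0.35, frac)
    print(f"{int(frac * 100)}% FC -> water pots back up to {w:.0f} g")
```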
RNA Extraction and Quantification
Total RNA isolation from leaves was carried out using TRIzol reagent (Invitrogen, Burlington, ON, Canada). RNA degradation and contamination were determined on 1% agarose gels. The quantification and qualification of RNA were then checked using a Nano Photometer spectrophotometer (IMPLEN, Westlake Village, CA, USA), a Qubit RNA Assay Kit in a Qubit 2.0 Fluorometer (Life Technologies, Carlsbad, CA, USA), and an Agilent 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA, USA). The cDNA library construction and sequencing were conducted at a commercial service company (Novogene, Beijing, China; http://www.novogene.com, accessed on 20 September 2017). Three biological replicates were used in RNA-seq experiments.
Data Processing
Raw reads were cleaned by removing low-quality sequences and those containing adapter or poly-N sequences. The index of the reference genome was built using Bowtie 2.2.3. TopHat 2.0.12, a fast mapping tool based on generating a database of splice junctions, was used to align the clean paired-end reads to the reference genome. The clean reads were mapped to the Brassica napus genome (http://brassicadb.org/brad/datasets/pub/Genomes/Brassica_napus/, accessed on 8 October 2017) [46]. HTSeq 0.6.1 was used to summarize the read counts mapped to each gene. The gene expression levels were quantified as fragments per kilobase of transcript per million mapped reads (FPKM), which eliminates the influence of differences in gene length and sequencing depth.
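The FPKM normalisation mentioned above can be sketched as follows; the gene IDs, counts, and lengths are hypothetical, and in practice the inputs would be the HTSeq count table and the gene annotation.

```python
# Sketch of FPKM: fragments per kilobase of transcript per million mapped reads.
# FPKM = count * 1e9 / (total mapped fragments * gene length in bp).
def fpkm(counts, lengths_bp):
    total = sum(counts.values())  # total mapped fragments in the library
    return {
        gene: counts[gene] * 1e9 / (total * lengths_bp[gene])
        for gene in counts
    }

# Hypothetical gene IDs, counts and lengths for illustration only
counts = {"geneA": 480, "geneB": 1250, "geneC": 15}
lengths = {"geneA": 1500, "geneB": 3200, "geneC": 900}
for gene, value in fpkm(counts, lengths).items():
    print(gene, round(value, 2))
```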
The DESeq R package (1.18.0) was used to identify the DEGs between two groups with three replications. To control the false discovery rate, the resulting p values were adjusted using Benjamini and Hochberg's procedure. Genes with an adjusted p value less than 0.05, identified by DESeq, were regarded as differentially expressed [47,48]. The DEGs were then used for functional annotation, including GO and KEGG analyses. The GOseq R package was used to implement the GO functional enrichment analysis. KOBAS software (KOBAS, Surrey, UK) was used to conduct the KEGG pathway enrichment analysis, which calculates the total number of DEGs involved in specific pathways.
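A minimal sketch of the Benjamini-Hochberg adjustment used before calling DEGs (adjusted p < 0.05) is given below; the p values are illustrative only, and the actual analysis was performed with the DESeq R package.

```python
# Benjamini-Hochberg step-up procedure for controlling the false discovery rate.
def benjamini_hochberg(pvalues):
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])  # indices sorted by p value
    adjusted = [0.0] * m
    prev = 1.0
    for rank, i in reversed(list(enumerate(order, start=1))):
        prev = min(prev, pvalues[i] * m / rank)  # enforce monotonicity from the top
        adjusted[i] = prev
    return adjusted

pvals = [0.0001, 0.003, 0.02, 0.04, 0.2, 0.6]          # illustrative raw p values
padj = benjamini_hochberg(pvals)
degs = [i for i, p in enumerate(padj) if p < 0.05]      # indices called as DEGs
print(padj)
print("indices called as DEGs:", degs)
```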
qRT-PCR Analysis
To validate the RNA-seq results, we selected 22 genes and explored their expression patterns using qRT-PCR analysis. Three independent biological replicates were conducted, and each biological group was repeated three times. Total RNA samples were isolated from leaves using TRIzol reagent (Invitrogen, Burlington, ON, Canada). One µg of total RNA was reverse-transcribed with the RevertAid™ First Strand cDNA Synthesis Kit (Fermentas, Burlington, ON, Canada). We used the Bio-Rad Real-Time System (Bio-Rad, Hercules, CA, USA) to conduct qRT-PCR in a 50 µL reaction mixture containing: 25 µL of 2× SYBR® Premix Ex Taq™ II (Takara, Kusatsu, Japan), 2 µL of each PCR forward and reverse primer for the selected gene, 1 µL of 50× ROX reference dye, 4 µL of cDNA template, and 16 µL of ddH2O. The PCR amplification conditions were as follows: one cycle of 95 °C for 30 s, followed by 40 cycles of 95 °C for 5 s and 60 °C for 30 s. The transcript level of the β-actin gene was set as the control. The primer sequences for the selected genes are listed in Table S2.
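The relative quantification method is not stated in the text; assuming the widely used 2^-ΔΔCt calculation with β-actin as the internal control, the computation could look like the sketch below. The Ct values are hypothetical.

```python
# Hedged sketch of relative expression via the 2^-ddCt method, normalising a target
# gene to beta-actin and comparing a treatment to the check (CK). The use of 2^-ddCt
# and all Ct values below are assumptions for illustration; only the beta-actin
# control is stated in the study.
def relative_expression(ct_target_trt, ct_actin_trt, ct_target_ck, ct_actin_ck):
    d_ct_trt = ct_target_trt - ct_actin_trt
    d_ct_ck = ct_target_ck - ct_actin_ck
    dd_ct = d_ct_trt - d_ct_ck
    return 2 ** (-dd_ct)

# Hypothetical mean Ct values for one gene (e.g., P5CS1) under drought vs. CK
fold = relative_expression(ct_target_trt=24.1, ct_actin_trt=18.0,
                           ct_target_ck=26.3, ct_actin_ck=18.2)
print(f"relative expression (D vs. CK): {fold:.2f}-fold")
```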
Metabolite Extraction and LC-MS Conditions
Untargeted metabolomic profiling was conducted by the Novogene company (Beijing, China). The leaf tissues (50 mg) were extracted in 1 mL of a methanol:methylcyanide:water mixture (2:2:1, v/v/v) for 1 h at −20 °C, centrifuged at 13,000 rpm for 15 min at 4 °C to obtain the supernatant, freeze-dried, and stored at −80 °C. A total of 100 µL of a methylcyanide:water mixture (1:1, v/v) was added to the dried sample, and the solution was then vortexed for 30 s and centrifuged at 14,000 rpm for 15 min at 4 °C. The supernatant was transferred into an LC vial for nontargeted global ultra-performance liquid chromatography-quadrupole time-of-flight mass spectrometry (UPLC-Q-TOF-MS) analysis. For quality control, an equal mixture of all samples was taken. There were six biological replicates for each treatment in the UPLC-Q-TOF-MS analysis.
We used an Agilent 1290 Infinity LC (Agilent Technologies, Santa Clara, CA, USA) equipped with an Acquity UPLC HSS T3 column of 2.1 mm × 100 mm (Waters, Milford, MA, USA) to separate the compounds. The temperature of the column was set at 25 °C and the flow rate was 0.3 mL min−1. Mobile phase A was water/25 mM ammonium acetate/25 mM ammonia, and mobile phase B was acetonitrile. The gradient elution program consisted of 95% B at 0-0.5 min, 95-65% B at 0.5-7 min, 65-40% B at 7-8 min, 40% B at 8-9 min, 40-95% B at 9-9.1 min, and 95% B at 9.1-12 min. The separated components were detected in the positive and negative electrospray ionization modes using a TripleTOF 5600 mass spectrometer (AB SCIEX, Concord, ON, Canada). Ion source gas 1 and 2 and the curtain gas were set at 60, 60, and 30, respectively. The TOF MS scan m/z range was 60-1200 Da, and the MS/MS scan m/z range was 25-1200 Da. The accumulation time was 0.15 s/spectrum for TOF MS and 0.03 s/spectrum for MS/MS. The source temperature was 600 °C and the IonSpray voltage was set at ±5500 V.
Multivariate Data Processing
Multivariate methods including PCA, and partial least-squares discriminant analysis (PLS-DA) were used for normalized data analysis [49][50][51]. The inclusion/exclusion criteria of: (1) variable importance in projection (VIP) > 1.0; and (2) p-value < 0.05 were performed for the identification of the metabolites [52]. Metabolites that reached these criteria were marked as significantly differential metabolites. The significantly differential metabolites obtained from each comparison group underwent KEGG ID mapping, and were submitted to the KEGG website for relevant pathway analysis.
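The screening criteria above can be sketched as a simple filter over per-metabolite statistics; the VIP scores and p values below are placeholders, and computing VIP itself requires the fitted PLS-DA model, which is omitted here.

```python
# Sketch of the metabolite screening step: keep features with VIP > 1.0 (from the
# PLS-DA model) and p < 0.05. All names and numbers are placeholders for illustration.
metabolites = [
    {"name": "proline",      "vip": 2.4, "p": 0.001},
    {"name": "sucrose",      "vip": 1.6, "p": 0.010},
    {"name": "myo-inositol", "vip": 1.2, "p": 0.030},
    {"name": "citrate",      "vip": 0.8, "p": 0.040},  # fails VIP criterion
    {"name": "luteolin",     "vip": 1.9, "p": 0.210},  # fails p-value criterion
]

significant = [m for m in metabolites if m["vip"] > 1.0 and m["p"] < 0.05]
for m in significant:
    print(m["name"], m["vip"], m["p"])
```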
Statistical Analyses of Other Data
Statistical analyses for qRT-PCR data were carried out using one-way ANOVA, and Duncan's multiple range test was performed to compare the significant differences among treatments at p = 0.05 level.
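For illustration, a one-way ANOVA across the three treatments could be run as sketched below on hypothetical relative-expression replicates; Duncan's multiple range test, used in the study for mean separation, is not available in SciPy and is omitted from this sketch.

```python
# One-way ANOVA across CK, D and D + F using hypothetical qRT-PCR replicate values.
from scipy.stats import f_oneway

ck  = [1.00, 1.08, 0.95]   # three replicates per treatment (illustrative)
d   = [3.90, 4.20, 4.05]
d_f = [2.10, 1.95, 2.25]

f_stat, p_value = f_oneway(ck, d, d_f)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("significant difference among treatments (alpha = 0.05)")
```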
Conclusions
In this study, we investigated the molecular and metabolic mechanisms induced by fullerol in enhancing drought tolerance in B. napus seedlings using transcriptomic and metabolomic analyses. The results show a correspondence between profile changes in genes and profile changes in metabolites. The DEGs and differentially accumulated metabolites triggered by drought or fullerol were commonly enriched in KEGG pathways associated with carbohydrate metabolism, such as "carbon metabolism", amino acid metabolism, such as "biosynthesis of amino acids", and secondary metabolite metabolism, such as "biosynthesis of secondary metabolites". We analyzed the DEGs and differential metabolites in these KEGG pathways and found that B. napus seedlings subjected to soil drying exhibited a high accumulation of primary metabolites and an inhibited accumulation of secondary metabolites. The accumulated primary metabolites, including monosaccharides (e.g., mannose and myo-inositol) and specific amino acids (e.g., proline), can promote the osmotic adjustment ability in leaves of B. napus seedlings in response to drought. The results further showed that fullerol treatment could reverse the inhibitory effects of drought on the accumulation of secondary metabolites, such as phenolics and flavonoids (e.g., luteolin and trans-3-coumaric acid), but had no impact on the accumulation of osmotic adjustment substances (e.g., monosaccharides and specific amino acids) to enhance drought tolerance in B. napus.
Projected climate change threatens pollinators and crop production in Brazil
Animal pollination can impact food security since many crops depend on pollinators to produce fruits and seeds. However, the effects of projected climate change on crop pollinators and therefore on crop production are still unclear, especially for wild pollinators and aggregate community responses. Using species distributional modeling, we assessed the effects of climate change on the geographic distribution of 95 pollinator species of 13 Brazilian crops, and we estimated their relative impacts on crop production. We described these effects at the municipality level, and we assessed the crops that were grown, the gross production volume of these crops, the total crop production value, and the number of inhabitants. Overall, considering all crop species, we found that the projected climate change will reduce the probability of pollinator occurrence by almost 0.13 by 2050. Our models predict that almost 90% of the municipalities analyzed will face species loss. Decreases in the pollinator occurrence probability varied from 0.08 (persimmon) to 0.25 (tomato) and will potentially affect 9% (mandarin) to 100% (sunflower) of the municipalities that produce each crop. Municipalities in central and southern Brazil will potentially face relatively large impacts on crop production due to pollinator loss. In contrast, some municipalities in northern Brazil, particularly in the northwestern Amazon, could potentially benefit from climate change because pollinators of some crops may increase. The decline in the probability of pollinator occurrence is found in a large number of municipalities with the lowest GDP and will also likely affect some places where crop production is high (20% to 90% of the GDP) and where the number of inhabitants is also high (more than 6 million people). Our study highlights key municipalities where crops are economically important and where pollinators will potentially face the worst conditions due to climate change. However, pollinators may be able to find new suitable areas that have the potential to improve crop production. The results shown here could guide policy decisions for adapting to climate change and for preventing the loss of pollinator species and crop production.
Introduction
One of the key challenges addressed by the World Summit on Food Security is the necessity for countries to properly address the impact of climate change in order to achieve food security [1]. According to the FAO (Food and Agriculture Organization), food security exists when all people, at all times, have physical, social and economic access to address their dietary needs and food preferences for an active and healthy life [1]. Food security can be affected by climate change because it may change crop growth and production [2], impacting crop price and the food market and exacerbating hunger, land abandonment, migration and urbanization [3]. At the global scale, climate change is expected to lead to a 14% decline in per capita cereal production by 2030 [4], particularly affecting tropical areas [5]. In Africa and South Asia, 8% yield losses are expected across all crops by 2050 [6], with developing countries being more vulnerable [7], potentially enhancing the decline in crop productivity, particularly in countries that currently have a high prevalence of hunger [2]. Brazilian agricultural production is also expected to be affected by climate change. Between 2 and 5 billion US$ is the projected loss to be suffered by 2070, with coffee-growing areas showing a 30% decrease in the southeastern region [8].
An additional challenge to agriculture related to climate change is the loss of crop pollinators, with pollination being an ecosystem service that is important to maintain the production of the majority of crops [9]. Crops have different degrees of dependency on animal pollinators, and a global evaluation showed that 85% (91 of 107 crops) are pollinator dependent to some degree [10]. In Brazil, 60% of crops (85 of 141) are pollinator dependent, with another 39 crops that do not depend on animal pollination and 17 crops lacking data [11]. The area cultivated with pollinator-dependent crops has increased in recent decades [12], intensifying the need for pollinators and pollination. In addition, pollinator-dependent crops are important for human diet as a main source of micronutrients, such as vitamins A and C, calcium and folic acid [13], and a geographical equivalence was found between areas with a high vitamin A deficiency and pollinator-dependent crops that produce such vitamins [14]. These findings highlight the challenges for intensifying research on crop pollinator species and their interactions, emphasizing the urgent need for further research [15].
Climate change is ongoing, involving changes in precipitation and temperature regimes. According to the Intergovernmental Panel on Climate Change (IPCC), climate change will potentially lead to an average temperature increase of 2˚C to 4˚C by 2050 depending on the emission scenario [16]. Declines in pollinators as a result of climate change have already been suggested for honey bees [17] and bumblebees [18]. Other species have also shifted their distribution toward the poles [19,20] or to higher elevations [21] or have exhibited more complex responses [22], seeking milder habitat conditions. Climate change is also affecting the interaction between species [23], changing the structure of interaction networks [24], resulting in changes in phenological synchronization [25] and leading to mismatches in the geographic distribution of interacting species [26]. In addition, under climate change, the biota tends to show homogenization, with generalist species, which usually have broader abiotic requirements, becoming more prevalent [27], since species with narrow ecological niches or habitat preferences are likely to disappear [28].
The objective of this paper is to assess the impact of projected climate change on the geographic distribution of crop pollinators of 13 Brazilian crops and to estimate its relative impact on crop production. To this end, we analyzed pollinator shifts due to climate change for each crop using species distribution modeling. We evaluated the impact of climate change in each municipality where these crops are produced as well as the total crop production, the gross domestic product (GDP) per municipality and the number of inhabitants.
Materials and methods
The main pollinators of the 13 Brazilian crops analyzed here were defined in another study, totaling 95 species [29] (S1 Table). The total number of Brazilian crops is still unknown, with most regional fruits and vegetables being produced only by local farmers. A recent assessment evaluated the pollinator dependency of 141 crops, but this is not the total number of Brazilian crops [11]. Moreover, there is a lack of information about the pollinator dependency of crops as well as a lack of knowledge related to the main crop pollinators [11,29]. Additionally, the values of annual production per municipality are not available in the public data repository for all crops (Brazilian Institute of Geography and Statistics-IBGE), and such values are necessary for the estimations proposed here. Thus, this study analyzes 13 crops for which we have already determined the pollinator dependency and the main pollinators and for which we have the values of annual production per Brazilian municipality.
We assessed the impact of projected climate change on crop pollinators using species distribution modeling (SDM), a computational technique that determines potential areas of species occurrences and forecasts their future distribution [30]. We retrieved information about the occurrence points of each pollinator species from the speciesLink data portal (Centro de Referência em Informação Ambiental, CRIA) and from the Global Biodiversity Information Facility (GBIF) data portal. Both repositories contain biodiversity data deposited mainly in biological collections and museums (S2 Table).
In addition to the occurrence points, SDM uses environmental variables to determine suitable areas for species potential distribution. We used climatic variables obtained from Worldclim [31] with a resolution of 5 arc-minutes (approximately 10 x 10 km cell size at the Equator). From the 20 variables available under current climatic conditions, we calculated the nine least correlated variables [32]: altitude, mean diurnal range, isothermality, mean temperature of the driest quarter, annual precipitation, precipitation of the driest month, precipitation seasonality, precipitation of the warmest quarter, and precipitation of the coldest quarter. We used the same variables to forecast the future potential distribution projected by the Met Office Hadley Centre (HadGEM2-CC) and the University of Tokyo and collaborators (MIROC-ESM-CHEM) for the year 2050. We built the ensemble forecast of both projections, which consists of a weighted sum scheme for merging the final models obtained [33]. We used the representative concentration pathway (RCP) 8.5, which corresponds to a likely global mean surface temperature increase of 2.6˚C to 4.8˚C by the end of the 21st century [16]. This scenario was chosen because it projects the greatest increase in emissions and, consequently, the most pronounced changes. For conservation purposes, it is important to detect the areas that are most suitable, even in more extreme scenarios, minimizing costs and maximizing the chance of effectively protecting the species. This approach was used in previous studies [34,35,36].
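To make the variable-screening step concrete, the sketch below shows one common way to keep only weakly correlated predictor layers using a pairwise Pearson correlation matrix. This is a minimal illustration, not the exact procedure of [32]: the 0.7 threshold, the greedy ordering, and the toy data are assumptions introduced here.

```python
import numpy as np
import pandas as pd

def select_least_correlated(env: pd.DataFrame, threshold: float = 0.7) -> list:
    """Greedy screening: keep a candidate layer only if its absolute Pearson
    correlation with every layer already kept stays below `threshold`."""
    corr = env.corr().abs()
    kept = []
    for var in env.columns:
        if all(corr.loc[var, k] < threshold for k in kept):
            kept.append(var)
    return kept

# Toy illustration with random "layers" sampled at 500 points; in practice the
# columns would be altitude and the WorldClim bioclimatic variables.
rng = np.random.default_rng(1)
base = rng.normal(size=500)
env = pd.DataFrame({
    "alt": base,
    "bio2": base + 0.1 * rng.normal(size=500),   # deliberately correlated with "alt"
    "bio12": rng.normal(size=500),
    "bio15": rng.normal(size=500),
})
print(select_least_correlated(env))   # drops one layer of the correlated pair
```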
The Maximum Entropy algorithm (Maxent) [37] was used to perform the SDM. This algorithm is particularly useful because it can be applied to datasets using only presence records of species (as opposed to other algorithms that require absence data) [38]. We used the area under the curve (AUC) of the receiver-operator graph to estimate the accuracy of the modeling process in a test data set (20% of the total occurrence data) [39].
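The AUC evaluation itself is straightforward to reproduce. The sketch below computes the AUC of the receiver-operator curve on a withheld 20% test set of presence and background records; the Maxent fitting step is not shown, and the suitability scores here are synthetic placeholders, not results from the study.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic suitability scores predicted by a fitted SDM at withheld (20%) test
# records: presence points (label 1) and background points (label 0).
test_presence_scores = rng.uniform(0.4, 1.0, size=40)
test_background_scores = rng.uniform(0.0, 0.7, size=200)

labels = np.concatenate([np.ones_like(test_presence_scores),
                         np.zeros_like(test_background_scores)])
scores = np.concatenate([test_presence_scores, test_background_scores])

# AUC of the receiver-operator curve on the withheld records, as in the paper.
print("test AUC:", roc_auc_score(labels, scores))
```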
We analyzed the distribution of the pollinators of 13 Brazilian crops (S1 Table), with different dependencies for animal pollination: essential for pollination (acerola, annatto, passion fruit); great dependency (avocado, guava, sunflower, tomato); modest dependency (coconut, coffee, cotton); and little dependency (bean, mandarin, persimmon) (dependencies according to [29]). The values of annual production (tons) per crop were retrieved from the Brazilian Institute of Geography and Statistics (IBGE) website per municipality (year 2013, except for acerola, for which the figures correspond to 2006).
To evaluate the impact of projected climate change on pollinators in the Brazilian municipalities that produce a particular crop, we merged all models obtained for all pollinator species for each crop considering current conditions and did the same considering future forecasts (the ensemble of the Had and MIROC scenarios). This procedure resulted in one model for the potential distribution of all pollinators under current conditions and one model for the future conditions for each crop. In the subsequent step, we subtracted the values of the current potential distribution (represented as the occurrence probability) from the future one, also per crop. This final model represents the potential shift in pollinator occurrence per pixel and expresses an index that varies from -1 (100% decrease in pollinator occurrence, i.e., no pollinator will occur in that particular pixel) to +1 (100% increase in pollinator occurrence, i.e., all pollinators will occur in that particular pixel).
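A minimal sketch of that per-pixel shift calculation is given below, assuming the merged current and future suitability rasters have already been read into arrays. The 2 x 2 toy arrays are illustrative only; the real inputs would be the ensemble model rasters.

```python
import numpy as np

def pollinator_shift(current: np.ndarray, future: np.ndarray) -> np.ndarray:
    """Per-pixel shift in pollinator occurrence probability for one crop:
    future minus current merged suitability, bounded to [-1, +1], where -1
    means a complete loss of pollinator suitability and +1 a complete gain."""
    return np.clip(future - current, -1.0, 1.0)

# Toy 2 x 2 'rasters' of merged suitability (real ones would be read from the
# ensemble model files, e.g. with the R 'raster' package or a Python reader).
current = np.array([[0.8, 0.2], [0.5, 0.1]])
future  = np.array([[0.3, 0.4], [0.5, 0.0]])
print(pollinator_shift(current, future))
```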
In the first step, we aimed to evaluate the impact of projected climate change on pollinators considering all of the abovementioned crops. To this end, we calculated the average potential shift in pollinator occurrence for the whole country. Moreover, we calculated the number of municipalities where each crop is produced that will potentially face an increase or decrease in pollinator occurrence considering the following: i) the GDP (the monetary value of all of the finished goods and services produced within a country's borders in a specific time period) per municipality, ii) the percentage of that particular crop in the GDP per municipality, and iii) the population (number of inhabitants). The value of the GDP and the population per Brazilian municipality were also retrieved from the IBGE website (year 2013).
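The municipality-level tally described above can be expressed as a simple cross-tabulation. The sketch below is a hedged illustration only: the column names and the toy values are invented for demonstration and do not come from the IBGE data.

```python
import pandas as pd

# Toy stand-in for the per-municipality table (columns assumed, values invented
# purely for illustration): mean pollinator shift, GDP in R$, inhabitants.
muni = pd.DataFrame({
    "shift":      [-0.20, -0.05, 0.10, -0.30],
    "gdp":        [4.0e8, 2.0e9, 6.0e8, 9.0e8],
    "population": [12000, 800000, 30000, 45000],
})

muni["direction"] = muni["shift"].apply(lambda s: "increase" if s > 0 else "decrease")
gdp_class = pd.cut(muni["gdp"], bins=[0, 1e9, 250e9], labels=["< R$1bi", "R$1bi-250bi"])

# Municipalities per GDP class facing an increase vs. a decrease in pollinators.
print(muni.groupby([gdp_class, "direction"]).size().unstack(fill_value=0))
```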
In the next step, the potential shift in pollinator occurrence was plotted for the Brazilian municipalities where the particular crop is produced. For this step, we standardized the final scale to vary from -1 (representing the highest value of production and the highest decrease in pollinator occurrence probability) to +1 (representing the highest value of production and the highest increase in pollinator occurrence probability) to facilitate the interpretation of the results.
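One possible reading of that standardization is sketched below: production is scaled to [0, 1], the shift to [-1, +1], and the two are multiplied so the extremes coincide with the largest production under the strongest decrease or increase. This is an interpretation of the description in the text, not the authors' exact formula, and the input values are invented for illustration.

```python
import numpy as np

def production_weighted_index(shift: np.ndarray, production: np.ndarray) -> np.ndarray:
    """Combine per-municipality pollinator shift and crop production so that
    -1 marks the largest production coinciding with the strongest decrease and
    +1 the largest production with the strongest increase."""
    return (production / production.max()) * (shift / np.abs(shift).max())

# Toy per-municipality values (tons produced, mean pollinator shift).
production = np.array([120.0, 4000.0, 800.0])
shift = np.array([-0.05, -0.30, 0.10])
print(production_weighted_index(shift, production))
```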
All the procedures were performed using Postgres (The PostgreSQL Global Development Group) for database management, the 'raster' package [40] of R (the R Project for Statistical Computing), and QGIS (Open Source Geospatial Foundation Project).
Results
The resulting mean potential shift in the pollinator occurrence probability for all analyzed crops (13 crops) shows that projected climate change will likely affect pollinators differently in different regions of Brazil (Fig 1). A slight decline in the probability of pollinator occurrence could occur in most areas (Fig 1-yellow color). A larger decline could occur mostly in the southern areas (orange and red colors). However, some areas, especially in the northern region, will likely experience a small increase in the probability of pollinator occurrence (green color), indicating an increase in the suitability of these regions. Considering all 13 crops, an overall 0.13 (SD = 0.11) average decrease in the pollinator occurrence probability was found as well as a high percentage of municipalities (88%) that will potentially face a decrease in the pollinator occurrence probability, contrasting with a 12% increase (S3 Table).
Most municipalities will likely face a decline in the probability of pollinator occurrence and also have the lowest GDP (less than R$1bi) (Fig 2A). Few municipalities will potentially face an increase in probability, also presenting low values of GDP (Fig 2A). However, almost 400 municipalities with higher GDP values (between R$1bi and R$250bi) will face a decline in the probability of pollinator occurrence (Fig 2A). Moreover, for most of the municipalities that will likely experience a decline in pollinator occurrence, crop production represents 10% of the GDP; however, for others, crop production represents higher numbers (20% to 90% of the GDP) (Fig 2B). The percentage of crop production in the total GDP is also low (less than 10%) in the few municipalities that will likely face positive shifts (Fig 2B). A decline in the probability of pollinator occurrence also correlates with most of the municipalities with fewer than 50,000 inhabitants but also includes some municipalities with the highest number of inhabitants (6 million people) (Fig 2C). However, positive potential shifts were also found in small municipalities (fewer than 40,000 inhabitants) as well as some populous municipalities (2.4 million people).
The potential shift in pollinator occurrence probability related to projected climate change in each Brazilian municipality varies strongly, as does the pattern for the different crops (Fig 3). The decline in the occurrence probability of the pollinators for each crop varies from 2% (persimmon) to 25% (tomato) (Table 1A) and can potentially affect from 9% (mandarin) to 100% (sunflower) of the municipalities where each crop is produced (Table 1B). Most of the suitable future areas can be found in the northern areas, where almost all crops are produced (except sunflower and cotton). Some pollinators will also likely find potentially suitable areas in the southern areas of Brazil, where avocado, sunflower, tomato, bean, mandarin and persimmon are produced. However, most of the eastern region of Brazil, where many crops are currently produced, will likely experience a decrease in pollinator occurrence probability (except for persimmon). The areas where guava, tomato and coffee are produced will likely face the highest decrease in pollinators. Based on the shift in the pollinator occurrence probability and the crop production of each municipality, our results show that some municipalities with the highest value of production will potentially face important decreases in the pollinator occurrence probability (list of municipalities in S4 Table).

Fig 3. Values vary from -1 (decrease of 100% in pollinator occurrence probability; red to yellow) to +1 (increase of 100%; green to blue). Crops have different levels of dependence on animal pollination (according to Giannini et al. 2015b). The list of pollinators for each crop can be found in S1 Table.
Discussion
Considering the importance of pollination in crop production, we investigated the potential impact of climate change on the distribution of pollinator bees for some Brazilian crops. We found that the pollinator occurrence probability will decrease in most of Brazil by the year 2050. The highest decrease will potentially be found in the southern areas. In contrast, some northern areas will show a slight increase in the occurrence probability of crop pollinators. The predicted effect of climate change will potentially reduce the occurrence of pollinators in most of the municipalities with the lowest GDP. This finding has important implications since it is expected that the effects of climate change will also cause a decline in crop productivity independent of the pollination deficits [8,2] or, additionally, change the dependence of crops on pollinators due to heat wave increases, as already discussed [41]. These joint effects could bring additional reductions in agricultural income in already poor municipalities, reinforcing their socioeconomic vulnerability. Worryingly, the highest decline in the probability of pollinator occurrence is projected to occur in the majority of municipalities (4000 municipalities) in which crop production accounts for the lowest percentage of the total GDP (10%), allowing few possibilities for future agricultural expansion if those areas with currently low production face a future reduction in pollinators. These cities are mainly found in the central and southern areas of Brazil. Municipalities with the highest GDP will also face a reduction in the occurrence of pollinators. However, the socioeconomic impact will be lower since the GDP is not so highly dependent on crop production. Considering this scenario, it is important to delineate strategies aiming to reduce the deficit of crop pollination and, at the same time, enhance crop productivity, promoting a better income for crop producers and helping to minimize further losses of natural areas for agriculture [42].

Table 1. Shifts for each crop analyzed considering A) the decrease and B) the increase in the pollinator occurrence probability and the number of municipalities potentially affected (the scientific name of each crop can be found in S1 Table).
When analyzing each crop separately, we found that the pollinators of guava, tomato, coffee and mandarin will potentially be the most affected. Of these, guava and tomato are greatly dependent on pollinators and may be greatly affected. Specifically, considering the impact on the pollinators of tomato, our results are corroborated by a previous study that analyzed five pollinator species of tomato in Brazil, showing reductions of between 10 and 70% in their distributional range [43]. Although coffee is modestly dependent on pollinators, it is mostly produced in the southeastern region (the most affected region according to our scenario) with very high values of annual production and the second highest economic value of pollination in Brazil (almost US$ 2 billion/year) [11]. Mandarin has little dependence on pollinators but is also highly produced in the southern areas of Brazil. Therefore, pollinator loss for these crops may cause high economic and social impacts. Interestingly, new suitable areas for the production of the abovementioned crops may be found in the northern regions, and the feasibility of increasing their cultivation in these regions should be investigated. Particularly in western Amazonia (northern Brazil), there are continuous areas of natural habitats, and many crops are still collected in an extractive manner [44,45] or produced on small farms. In contrast, in the southern region of Brazil, natural areas are fragmented and croplands are usually more extensive and homogeneous. Additionally, new habitat losses will likely continue to occur all over the country, and there is a debate in the literature regarding whether agricultural areas will need to expand to compensate for the decrease in crop yield due to climate change [46], which could bring new challenges to pollinator protection. All of the abovementioned aspects are intricately associated and can affect agricultural production.
Future work should consider the impact of climate change on other aspects of Brazilian crop production, such as the reduction or deficiency in the availability of water for irrigation and possible phenology mismatches between flowering and pollinators. Moreover, the impact of climate change on the crops themselves should be taken into account, since production is highly dependent on precipitation and temperature regimes. In addition, new studies could consider the effect of land use changes on agricultural areas in addition to climate. Furthermore, climate change can potentially impact pollinator species in other ways, for example, changing the size of their populations, which could have additional impacts on Brazilian crop productivity. Another key factor is related to the quality of the data. More work is necessary to fill knowledge gaps about crops' effective pollinators and animal pollination dependence, as well as to improve data on annual crop production in Brazil. Notably, there are few data available about regional crops, whose production is often based on family farming and that are sources of income for the local economy.
Here, we propose a comprehensive methodology to analyze the impact of climate change on crop pollinators based on species distribution modeling that can be easily applied to other data, crops or regions. We show that climate change can affect pollinator species differently and that the relative impact on crop production needs to be considered when planning strategies that involve food production over medium- to long-term periods. Conservation strategies for crop pollinators are urgently needed and should deepen the discussion about climate refugia and pollinator-friendly agricultural practices. Biodiversity data involving crop pollinators and production need to be gathered and shared to improve the accuracy of analyses that can ultimately be translated into effective strategies to guarantee the ecosystem services delivered by pollinator species.
Supporting information

S1 Table. Shifts in pollinators' occurrence probability for all crops analyzed (13 crops), considering A) the overall average decrease in probability and B) the number and percentage of municipalities that will potentially face a decrease or increase in pollinators' occurrence probability (total number of municipalities analyzed: 4975). (DOCX)

S4 Table. Municipalities that will potentially face the highest negative shift in pollinators' occurrence probability and that present the highest percentage of gross domestic product (GDP) associated with the analyzed crops. We considered the 25% highest values of negative shift in pollinators and, from those, the 15 municipalities presenting the highest percentage of GDP associated with the analyzed crop. Acerola was not included due to the lack of data. (DOCX)

Formal analysis: Tereza Cristina Giannini, Wilian França Costa, Vera Lucia Imperatriz-Fonseca, Jacobus Biesmeijer, Lucas Alejandro Garibaldi.
Long-Term Persistence of the Pathophysiologic Response to Severe Burn Injury
Background Main contributors to adverse outcomes in severely burned pediatric patients are profound and complex metabolic changes in response to the initial injury. It is currently unknown how long these conditions persist beyond the acute phase post-injury. The aim of the present study was to examine the persistence of abnormalities of various clinical parameters commonly utilized to assess the degree of hypermetabolic and inflammatory alterations in severely burned children for up to three years post-burn, in order to identify patient-specific therapeutic needs and interventions. Methodology/Principal Findings Patients: Nine-hundred seventy-seven severely burned pediatric patients with burns over 30% of the total body surface area admitted to our institution between 1998 and 2008 were enrolled in this study and compared to a cohort of non-burned, non-injured children. Demographics and clinical outcomes, hypermetabolism, body composition, organ function, and inflammatory and acute phase responses were determined at admission and at subsequent regular intervals for up to 36 months post-burn. Statistical analysis was performed using one-way ANOVA and Student's t-test with Bonferroni correction where appropriate, with significance accepted at p<0.05. Resting energy expenditure, body composition, metabolic markers, and cardiac and organ function clearly demonstrated that burn caused profound alterations for up to three years post-burn, indicating marked and prolonged hypermetabolism, p<0.05. Along with increased hypermetabolism, significant elevation of cortisol, catecholamines, cytokines, and acute phase proteins indicates that burn patients are in a hyperinflammatory state for up to three years post-burn, p<0.05. Conclusions Severe burn injury leads to a much more profound and prolonged hypermetabolic and hyperinflammatory response than previously shown. Given the tremendous adverse events associated with the hypermetabolic and hyperinflammatory responses, we have now identified treatment needs for severely burned patients over a much more prolonged period.
Introduction
Despite significant advances in therapeutic strategies, e.g., improving resuscitation, enhancing wound coverage, appropriate infection control, and improving treatment of inhalation injury, severe burns remain a devastating injury affecting nearly every organ system and leading to significant morbidity and mortality [1]. Main contributors to adverse outcomes of severely burned patients are profound and complex metabolic changes in response to the initial burn [1,2]. Burns covering more than 30% total body surface area (TBSA) are associated with stress, inflammatory, and hypermetabolic responses that lead to hyperdynamic circulation, increased body temperature, glycolysis, proteolysis, lipolysis and futile substrate cycling [3][4][5]. These responses are present in all trauma, surgical, or critically ill patients, but the severity and magnitude is unique for burn patients [1]. Marked and sustained increases in catecholamine, glucocorticoid, glucagon, and dopamine secretion are thought to initiate the cascade of events leading to the acute hypermetabolic response with its ensuing catabolic state [3,[6][7][8][9][10][11][12][13].
Several studies have indicated that these metabolic phenomena post-burn occur in a time-dependent manner, suggesting two distinct patterns of metabolic regulation following injury [14]. The first phase occurs within the first 48 hours of injury and has classically been called the "ebb phase" [14,15], characterized by decreases in cardiac output (CO), oxygen consumption, and metabolic rate as well as impaired glucose tolerance associated with its hyperglycemic state. These metabolic variables gradually increase within the first five days post-injury to a plateau phase (called the "flow" phase) associated with hyperdynamic circulation and the above-mentioned hypermetabolic state. In the past, the general understanding has been that these metabolic alterations resolve with complete wound closure or shortly thereafter [16]. Recent studies, however, indicate that the hypermetabolic response to burn injury persists beyond wound closure; e.g., we have recently shown that alterations in insulin sensitivity persisted for three years after the initial burn injury [17]. In light of these findings, we hypothesized that a burn injury induces vast hypermetabolic and inflammatory alterations associated with physiologic changes that persist not only for 6 to 12 months post-burn but for three years. To test our hypothesis, we conducted a large prospective study in severely burned pediatric patients and determined hypermetabolic and inflammatory responses over a period of three years post-burn.
Demographics
Nine-hundred seventy-seven severely burned children were included in the present study. Characteristics of the burn patients are depicted in Table 1. Patients were, on average, 7.5 years of age; 36% were female and 64% were male. Patients suffered from a severe thermal injury involving a 50% TBSA burn and a third-degree burn of 37% TBSA. During acute hospitalization, the length of hospital/ICU stay was 26 days, corresponding to 0.5 days per percent TBSA burned. Patients were taken back to the OR every 7th day and required on average 4 operations. During acute hospitalization, 32% of the patients suffered from inhalation injury, minor infections occurred in 43% of the patients, sepsis occurred in 10%, multi-organ failure in 16%, and 8% of our patients died (Table 1).
Hypermetabolism
Indirect calorimetry. Predicted REE increased significantly post-burn and then gradually decreased over time, but remained significantly elevated for two years following burn injury, indicating marked hypermetabolism, p<0.05 (Fig. 1A).
On admission, the patient population fell within essentially the normal distribution pattern for both height and weight. Thirty-six percent of the patient population fell below the 50th percentile (the mean) for height at admission, while the percentage of burned children that fell below the 50th percentile for height was significantly greater for up to two years post-burn, indicating a profound growth delay in this patient population. Forty-two percent of the patients included in this study were below the mean for weight at admission, while the percentage of burned children that fell below the 50th percentile for weight was significantly greater for up to three years post-burn, p<0.05. The data demonstrate that it takes approximately 1-2 years for pediatric burn patients to resume growth and improve their height and weight percentiles.
Organ changes. Analysis of CO, CI and HR revealed marked alterations in response to burn. CO increased immediately post-burn and remained significantly elevated over 12 months before gradually decreasing to the values of non-burned controls (Fig. 1H). CI was also significantly elevated during the first 12 months post-burn before returning to the values of non-burned patients for the remainder of the study (Fig. 1I). While the HRs of severely burned pediatric patients vastly increased immediately after burn to 173±6% of that of non-burned children and remained significantly increased over 3 years post-burn, the SV of these children was not significantly different from that of normal controls (Fig. 1J, K).
Analysis of liver size determined by ultrasound demonstrated markedly increased liver size in response to the initial burn trauma for up to three years, p<0.05 (Fig. 1L). Throughout the time period studied, the liver size of severely burned children was increased by an average of 75% compared to healthy non-burned children of similar age. Interestingly, we did not detect a decrease in liver size over the three-year study period.
Inflammatory and acute phase response
Urinary catecholamine and cortisol measurements. Urinary norepinephrine, epinephrine, and cortisol increased markedly immediately after burn trauma. Urinary norepinephrine increased 10-fold during the early phase post-burn and remained significantly elevated up to 540 days post-burn when compared with non-burned control patients, p<0.05 (Fig. 2A). Urinary epinephrine levels significantly increased 4- to 5-fold post-burn and remained elevated for 60 days post-burn, p<0.05 (Fig. 2B). Total urine cortisol levels initially increased 8- to 10-fold and levels remained significantly increased 3 years post-burn, p<0.05 (Fig. 2C). Serum cortisol measurements displayed similar characteristics and levels were significantly increased up to 3 years post-burn, p<0.05 (Fig. 2D). Serum cytokines. We found that almost all cytokines measured within this study were significantly altered in response to burn injury (Fig. 3A-Q). Dramatic changes were observed for serum IL-6, IL-8, G-CSF and MCP-1 (Fig. 3C, D, M, N). These cytokines demonstrated an up to 2,000-fold increase immediately upon burn trauma and remained significantly elevated throughout the time period studied when compared with the concentrations detected in non-burned controls, p<0.05. GM-CSF, IFN-γ, TNF-α, IL-1β, IL-2, IL-5, IL-7, IL-10, and IL-17 significantly increased by 2- to 20-fold in response to burn injury and revealed relatively constant, but significantly increased, levels for most of the three-year period post-burn compared to non-burned patients (p<0.05). IL-12p70 and MIP-1β were not significantly altered in response to burn trauma when compared to controls.
Serum proteins. Serum acute phase proteins were significantly altered upon burn injury (Fig. 4A-L). Serum complement C3 concentrations initially demonstrated significantly diminished levels compared to those of non-burned controls, before peaking at 29 to 90 days post-burn with significantly elevated levels and then rapidly decreasing to basal levels for the remainder of the study period, p<0.05 (Fig. 4A). Serum α2-macroglobulin concentrations displayed significantly decreased values for up to 60 days post-burn before gradually increasing to the levels of non-burned controls, p<0.05 (Fig. 4B). Serum haptoglobin, α1-acid glycoprotein, and CRP demonstrated a 2- to 12-fold increase immediately upon burn injury and remained significantly elevated for up to 90 and 270 days post-burn, respectively, compared to non-burned controls, p<0.05 (Fig. 4C-E). The serum constitutive hepatic proteins retinol-binding protein, pre-albumin and transferrin markedly decreased by 2-fold immediately post-burn and remained significantly decreased for up to 90 days post-burn, p<0.05 (Fig. 4F-H). Serum apolipoprotein A1 significantly decreased post-burn and remained significantly diminished for a period of 90 days, p<0.05 (Fig. 4I). Apolipoprotein B demonstrated a diminutive initial decrease then steadily increased to significantly elevated levels between 41 and 90 days before gradually decreasing to the basal levels of non-burned patients, p<0.05 (Fig. 4J). Serum triglycerides gradually increased upon burn injury and demonstrated significantly elevated levels between 17 and 180 days post-trauma (Fig. 4K).
Serum glucose significantly increased immediately upon burn injury to levels of 156±2 mg/dl and remained significantly elevated for a period of 180 days before gradually decreasing to levels within the normal physiologic range, p<0.05 (Fig. 4L). Serum insulin levels also rapidly increased to significant levels in response to burn, before subsequently decreasing but remaining significantly elevated for the whole time period studied when compared with the serum concentrations detected in non-burned controls, p<0.05 (Fig. 4M).
Serum concentrations of alanine aminotransferase (ALT) and aspartate aminotransferase (AST) significantly increased immediately upon burn trauma and remained significantly elevated for the remainder of the study period, p<0.05 (Fig. 5A, B). Serum albumin (ALB) concentrations demonstrated significantly decreased levels for the entire three-year period compared to non-burned controls, p<0.05 (Fig. 5C). Both alkaline phosphatase (ALP) and gamma-glutamyl transpeptidase (GGT) were significantly altered in response to burn. While ALP displayed significantly elevated levels starting 8 days post-burn, which remained significantly elevated for the remainder of the study, serum concentrations of GGT rose to significantly elevated values beginning eight days post-burn before rapidly decreasing to normal concentrations beginning 90 days post-trauma, p<0.05 (Fig. 5D, E). Serum calcium concentrations, however, displayed significantly decreased levels for the entire three-year period, p<0.05 (Fig. 5F).
Serum hormones. All serum hormones measured within this study demonstrated significant alterations in response to burn trauma. Serum levels of both insulin-like growth factor (IGF)-I and insulin-like growth factor binding protein-3 (IGFBP-3) decreased significantly immediately post-burn and remained diminished for most of the remaining study period compared to the levels of non-burned controls, p<0.05 (Fig. 5G, H). Human growth hormone (hGH) gradually declined in response to burn injury over the time period studied and demonstrated significantly decreased values for several of the time points within the three-year period when compared to controls, p<0.05 (Fig. 5I). Serum parathormone (iPTH) decreased by 8-fold immediately post-burn and remained significantly decreased for three years post-burn, p<0.05 (Fig. 5J). Serum levels of osteocalcin also displayed significantly decreased values for a time period of 270 days before rapidly increasing to the levels of non-burned controls, p<0.05 (Fig. 5K). Serum concentrations of estrogen (EST), testosterone (TEST) and progesterone (PROG) displayed diverse patterns post-burn. While serum estrogen decreased immediately post-burn and remained significantly decreased for the entire time period studied, p<0.05 (Fig. 5L), serum testosterone gradually increased upon burn trauma, reaching significantly elevated levels at 8 to 10 days post-burn, before gradually decreasing to diminished levels beginning 60 days post-burn, p<0.05 (Fig. 5M). Serum progesterone concentrations displayed significantly elevated levels for the first two years post-burn before gradually decreasing to the values of non-burned controls, p<0.05 (Fig. 5N).
Discussion
The importance of this study is that it clearly demonstrated that burn-induced metabolic and inflammatory changes persisted for 3 years after the injury. The relevance of post-burn hypermetabolism and inflammation is that they induce insulin resistance for 3 years [17], a 50- to 100-fold increase in fracture risk [1], a 200% increase in liver size [18,19], growth and developmental retardation for 2-3 years [3], increased cardiac work and the development of cardiac dysfunction [20], impaired strength and muscle function [3,20], hormonal abnormalities [17,18], and an increased risk for infections and sepsis [3,20]. All of the aforementioned can contribute to patient morbidity and mortality. We have now shown that this risk does not end when the patient is 95% healed; it persists for up to 3 years post-burn.
Even though the metabolic alterations after severe burn injury are similar to those after any major trauma, severe burns are characterized by a hypermetabolic response that is more severe and sustained than in any other form of trauma [16]. Several studies have extensively delineated the complexity of the acute post-burn pathophysiologic response [14,15,18]; however, it is currently unknown how long these metabolic phenomena persist beyond the first 12 months after the initial event [6,13,21,22]. Marked and sustained increases in catecholamine, glucocorticoid, and glucagon secretion are thought to initiate the cascade of events leading to the acute hypermetabolic response with its ensuing catabolic state [3,[6][7][8][9][10][11][12][13]. Contrary to the past understanding that these metabolic mediators resolve soon after complete wound closure [16], we could demonstrate that catecholamines and stress hormones such as cortisol were elevated for up to 36 months post-burn, accompanied by significant increases in REE indicative of vast hypermetabolism. Resting metabolic rates in burn patients have been shown to increase in a curvilinear fashion, ranging from near normal for burns less than 10% TBSA to twice that of normal in burns more than 40% TBSA. In patients with burn injuries greater than 40% TBSA, the resting metabolic rate at thermally neutral temperature (33°C) reaches up to 180% of the basal rate during acute admission, 150% at full healing of the burn wound, 140% at six months after the injury, 120% at nine months after injury, and 110% after 12 months [3]. In this study, we could demonstrate that even three years after the initial trauma REE is still above normal, indicating a persistent hypermetabolic response. The exact cause of this complex response, however, is still poorly understood. IL-1 and -6, platelet-activating factor, TNF, endotoxin, neutrophil-adherence complexes, reactive oxygen species, nitric oxide, and the coagulation as well as complement cascades have all been implicated in regulating this response to burn injury [23]. Here, we found marked alterations in 14 cytokines in response to burn injury. In particular, serum IL-6, IL-8, G-CSF and MCP-1 displayed dramatic changes. These cytokines demonstrated an up to 2000-fold increase immediately post-burn and remained significantly elevated throughout the time period studied. Cytokines are the primary mediators of this inflammatory reaction to injury [24]. They constitute a group of proteins with autocrine and endocrine activities that provide communication among different types of cells, including those that mediate immune functions, angiogenesis, cell proliferation and apoptosis [24]. Inflammatory cytokines such as TNF, IL-6 and MCP-1 have also been shown to inhibit insulin action through modification of the signaling properties of insulin receptor substrates, contributing to liver and skeletal muscle insulin resistance [25][26][27].
Persistently increased glucose and insulin levels as shown in this study are of serious clinical concern since hyperglycemia has been frequently linked to impaired wound healing [28], increased skin graft loss [29], increased muscle protein catabolism [30], increased incidence of infections [31,32] and mortality [2,[31][32][33][34][35]. Maintaining blood glucose at levels below 110 mg/dl using intensive insulin therapy has been shown to reduce mortality and morbidity in critically ill patients [36]; however, associated hypoglycemic events have led to the investigation of alternative strategies, including the use of metformin [37] and the PPAR-γ agonist fenofibrate [38]. Other underlying factors for the observed elevated glucose and insulin levels may include the above-mentioned prolonged increases in endogenous stress hormones, which have been causally associated with injury-induced insulin resistance [8][9][10][11][12][13]. Also, decreases in muscle mass, both during the acute and recovery phases following injury, may significantly contribute to this persistent insulin resistance, since skeletal muscle has been shown to be responsible for 70-80% of whole-body insulin-stimulated glucose uptake [39]. In contrast to starvation, in which lipolysis and ketosis provide energy and protect muscle reserves, burn injury considerably reduces the ability of the body to utilize fat as an energy source. Skeletal muscle is thus the major source of fuel in the burned patient, which leads to marked wasting of LBM within days after injury [1,40], as shown in our burned patients. Increased protein turnover, degradation, and negative nitrogen balance are common characteristics of severe burn trauma [41]. As a consequence, structure and function of essential organs such as skeletal muscle, skin, immune system, and cellular membrane transport functions may be compromised [42,43]. In 1998, Chang and colleagues [44] defined that a 10% loss of LBM may lead to impaired immune function, a 20% loss of LBM to impaired wound healing with an associated 30% mortality, a 30% loss of LBM to pneumonia and pressure sores with an associated 50% mortality, and a 40% loss of LBM may ultimately result in death in 100% of cases.
Other significant observations in this large prospective trial include substantially affected expression of acute phase proteins. Particularly haptoglobin, α1-acid glycoprotein, and CRP demonstrated significant increases for up to nine months post-burn. Serum constitutive hepatic proteins, in contrast, such as retinol-binding protein, pre-albumin and transferrin, were found to be significantly decreased for up to six months post-injury. This decrease could be due to decreased production, increased consumption or increased loss due to capillary leakage. These proteins represent commonly utilized markers for general homeostasis, indicating the severity and intensity of the prolonged post-burn imbalance [18]. Determinations of serum triglycerides revealed significant increases for nine months post-trauma, a finding which may help explain the commonly observed fatty infiltration of the liver and other organs of burn victims. A recently demonstrated association between hepatomegaly with fatty infiltration and an increased incidence of sepsis and mortality supports the importance of this observation [45]. After thermal injury, a variable degree of liver injury is present, and it is usually related to the severity of the thermal injury. Fatty changes, a very common finding, are per se reversible, and their significance depends on the cause and severity of accumulation [45][46][47][48][49][50]. In this study, analysis of liver ultrasounds demonstrated markedly increased liver size in response to the initial burn trauma for up to three years. Other hepatic parameters utilized to determine liver function, including ALT and AST, were significantly altered for the entire study period, also indicating that liver damage is present for a prolonged period post-trauma. As described previously by our group, serum apolipoprotein A1 significantly decreased upon burn trauma and remained significantly diminished for three years; apolipoprotein B, in contrast, only demonstrated a diminutive initial decrease before returning to normal values [18]. The exact role of these two proteins in this context, however, remains to be determined.
As recently demonstrated by Jeschke et al. [18] for the acute phase post-burn, several hormonal axes are affected by burn trauma. Overall, critical illness is characterized by marked alterations in the hypothalamic-anterior-pituitary-peripheral-hormone axes, the severity of which is associated with a high risk of morbidity and mortality [51]. Within this study, we also found prolonged alterations in the GH-IGF-I-IGFBP-3 axis, the PTH-osteocalcin axis, and sex hormones (testosterone, β-estradiol, progesterone). In particular, serum levels of IGF-I and IGFBP-3 demonstrated a substantial decrease for up to three years post-burn, while measured levels of hGH were rather moderately decreased for the whole time period studied. Beneficial effects of recombinant human growth hormone (rhGH) in trauma patients have been demonstrated in various settings. Besides enhancing immune function [52,53] and wound healing [54], and decreasing the overall hypermetabolic response after major surgery, trauma, sepsis or a thermal injury [55][56][57], rhGH stimulates protein synthesis, attenuates the nitrogen loss after injury, and improves clinical outcomes [58]. Also, rhGH modulates the hepatic acute phase response by increasing constitutive hepatic proteins, decreasing acute phase proteins, modulating cytokine expression, and increasing IGF-I concentrations [59,60]. However, ever since rhGH administration in trauma patients was shown to increase mortality in a prospective, randomized, double-blind study by Takala and colleagues [61], the use of rhGH has been restricted. Administration of IGF-I may thus represent a promising therapeutic alternative, since recent studies could demonstrate that IGF-I, in combination with its principal binding protein, improved muscle protein synthesis, the hepatic acute phase and inflammatory response, and the immune system [62][63][64].
Determination of serum osteocalcin and parathyroid hormone levels also demonstrated significant decreases for nine and up to 36 months, respectively, associated with profound decreases in BMC and BMD. As shown by Klein and others [10,[65][66][67][68][69][70] using labeled tetracycline to determine bone turnover rates, severely burned patients lack overall bone formation and synthesis. Besides administration of pamidronate, which was recently shown to improve bone metabolism during the acute and long-term phases post-burn [69], sex hormone substitution may represent a potential therapeutic approach, particularly since estrogen levels were diminished for the whole time period studied and estrogen substitution has been shown to improve bone mineralization and metabolism [71].
Analysis of various cardiac parameters showed marked alterations in response to burn. CO and CI, expressed as percent of normal, were significantly elevated during the first 180 days post-burn, accompanied by a massive tachycardia of 120-180% of predicted HR for the whole time period studied. Elevated levels of plasma catecholamines instigate the cardiac stress post-burn. Plasma catecholamine levels are elevated up to nine months post-burn, but the derangements in cardiac physiology last up to three years after the initial trauma. Elevated catecholamines increase myocardial oxygen delivery and myocardial oxygen consumption and cause focal degeneration of the myocardium and hypertrophy. In excess, they cause cardiac deficiency, local myocardial hypoxia, and cardiac death [72]. Thus, prolonged exposure to catecholamine levels 10-fold higher than normal is cause for clinical concern. However, the long-term ramifications of the cardiac stress seen post-burn are still unknown. Initially, it was thought that these derangements would subside shortly after the acute hospitalization or the initial resuscitation. More recent research has shown that these responses may last 9-12 months after the initial insult [16,73,74]. Here, we demonstrated a significant increase in cardiac work up to three years after the initial injury. This may be a profound detriment to our pediatric burn patients by increasing morbidity and leading to long-term cardiac exhaustion or cardiovascular complications in the future. These findings support the use of an anti-catabolic agent to attenuate the effects of increased catecholamines and beta-adrenergic stimulation, and the need for cardiovascular protection.
Therapeutic advancements in the acute phase post-burn, such as early excision and closure of the burn wound, more appropriate infection control and anti-catabolic therapeutic intervention, including beta-adrenergic blockade with propranolol, growth hormone, insulin-like growth factor, oxandrolone, testosterone, and insulin have substantially contributed to significant improvement of morbidity and mortality rates in burn patients during acute hospitalization. However, based on this study, we suggest that a severe burn is not an acute illness but rather a chronic health problem. We thus believe that burn patients should be carefully monitored for at least 3-4 years in order to reverse these complex metabolic alterations post-burn.
Patients
All thermally injured pediatric patients who had burns covering more than 30% of their TBSA, were admitted to our institution between 1998 and 2008, required at least one surgical intervention, and consented to the University of Texas Medical Branch Institutional Review Board-approved experimental protocol were included in this study. The patient, parent or legal guardian provided written consent for participation in the study.
Admission Data
On admission, the extent and degree of burn were assessed and recorded on a standard Lund and Browder chart by the attending burn surgeon. Information also recorded at the time of admission included burn-related data (date and mechanism of injury) as well as demographic data (age and gender). All patients were treated in our pediatric burn intensive care unit according to standardized protocols.
Patients were resuscitated if needed according to the Galveston formula, with 5000 cc/m² TBSA burned + 2000 cc/m² TBSA of lactated Ringer's solution given in increments over the first 24 hours. Within 24 hours of admission, all patients underwent total burn wound excision, and the wounds were covered with available autograft skin; any remaining open areas were covered with homograft. After the first operative procedure, it took 5-10 days until the donor site was healed and patients were taken back to the operating theater. This procedure was repeated until all open wound areas were covered with autologous skin material.
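As a worked illustration of the Galveston formula stated above, the short sketch below computes the first-24-hour fluid volume from total body surface area and burn size. The example values (a 1.1 m² child with a 50% TBSA burn) are illustrative assumptions only and are not taken from the study data.

```python
def galveston_first_24h(total_bsa_m2: float, pct_tbsa_burned: float) -> float:
    """First-24-hour lactated Ringer's volume (cc) per the Galveston formula:
    5000 cc/m2 of burned surface area + 2000 cc/m2 of total body surface area."""
    burned_bsa = total_bsa_m2 * pct_tbsa_burned / 100.0
    return 5000.0 * burned_bsa + 2000.0 * total_bsa_m2

# Example: BSA 1.1 m2 with a 50% TBSA burn
# -> 5000 * 0.55 + 2000 * 1.1 = 4950 cc over the first 24 hours, given in increments.
print(galveston_first_24h(1.1, 50))
```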
All patients underwent the same nutritional treatment according to a standardized protocol. The intake was calculated as 1500 kcal/m² body surface + 1500 kcal/m² area burned, or the need was assessed by measuring the resting energy expenditure (REE) and multiplying it by 1.4, with weekly adjustments as previously published [6,22].
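The two caloric targets in that protocol can be written out as a small sketch, shown below. The numeric example (1.1 m² BSA, 0.55 m² burned, measured REE of 1600 kcal/day) is an illustrative assumption, not patient data from the study.

```python
def caloric_target_formula(total_bsa_m2: float, burned_bsa_m2: float) -> float:
    """Protocol formula: 1500 kcal/m2 of body surface + 1500 kcal/m2 of burned area."""
    return 1500.0 * total_bsa_m2 + 1500.0 * burned_bsa_m2

def caloric_target_from_ree(measured_ree_kcal: float, factor: float = 1.4) -> float:
    """Alternative target: measured resting energy expenditure multiplied by 1.4."""
    return measured_ree_kcal * factor

# Example: BSA 1.1 m2 with 0.55 m2 burned -> 2475 kcal/day by the formula,
# or 2240 kcal/day if the measured REE is 1600 kcal/day.
print(caloric_target_formula(1.1, 0.55), caloric_target_from_ree(1600.0))
```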
Patient demographics
Patient demographics (age, date of burn and admission, gender, burn size and depth of burn) and concomitant injuries such as inhalation injury, sepsis, morbidity, and mortality were recorded. Minor infection was defined as a positive tissue culture with more than 10⁵ colony-forming units per gram of tissue. Sepsis was defined as a positive blood culture or pathologic tissue culture identifying the pathogen during hospitalization or at autopsy, in combination with at least 3 of the following: leucocytosis or leucopenia (>12,000 or <4,000), hyperthermia or hypothermia (>38.5 or <36.5°C), tachycardia (>150 BPM in children), refractory hypotension (systolic BP <90 mmHg), thrombocytopenia (platelets <50,000/mm³), hyperglycemia (serum glucose >240 mg/dl), and enteral feeding intolerance (residuals >200 cc/hr or diarrhea >1 L/day), as previously published [3,75]. Time between operations was determined as a measure of wound healing/re-epithelialization. As demonstrated previously, we believe that the time between operations may indicate when donor sites were healed and thus allows an assessment of wound healing.
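The "positive culture plus at least 3 criteria" rule above can be encoded directly, as in the hedged sketch below. The thresholds are those stated in the text; the function name, argument names, and example values are assumptions introduced purely for illustration.

```python
def meets_sepsis_definition(positive_culture: bool, wbc: float, temp_c: float,
                            hr_bpm: float, sbp_mmhg: float, platelets: float,
                            glucose_mgdl: float, feeding_intolerance: bool) -> bool:
    """Sepsis per the study definition: a positive blood/tissue culture plus
    at least three of the listed supportive criteria."""
    criteria = [
        wbc > 12000 or wbc < 4000,        # leucocytosis or leucopenia
        temp_c > 38.5 or temp_c < 36.5,   # hyperthermia or hypothermia
        hr_bpm > 150,                     # tachycardia (children)
        sbp_mmhg < 90,                    # refractory hypotension
        platelets < 50000,                # thrombocytopenia (per mm3)
        glucose_mgdl > 240,               # hyperglycemia
        feeding_intolerance,              # residuals >200 cc/hr or diarrhea >1 L/day
    ]
    return positive_culture and sum(criteria) >= 3

# Example: positive culture with fever, tachycardia and hyperglycemia -> True.
print(meets_sepsis_definition(True, wbc=9000, temp_c=39.0, hr_bpm=160,
                              sbp_mmhg=100, platelets=180000,
                              glucose_mgdl=260, feeding_intolerance=False))
```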
Time points
Results obtained during the three-year period were divided into fifteen different time phases: 0 to 7, 8 to 10, 11 to 16, 17 to 22, 23 to 28, 29 to 34, 35 to 40, 41 to 60, 61 to 90, 91 to 180, 181 to 270, 271 to 365, 366 to 540, 541 to 730 and 731 to 1,100 days post-burn. The data presented include 31 to 307 different measurements at each time point. If any patient had more than one measurement performed during the respective time period, results were averaged to give a single mean result for each patient at each time period. One hundred seven non-burned children, who consented to research studies and required blood and/or 24-hour urine collections, were used as the normal cohort.
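The binning and per-patient averaging described above amount to a cut-and-group operation. Below is a hedged sketch with the phase edges taken from the text; the table layout, column names, and toy values are assumptions for illustration only.

```python
import pandas as pd

# Phase edges (days post-burn) taken from the text; intervals are right-inclusive.
edges = [0, 7, 10, 16, 22, 28, 34, 40, 60, 90, 180, 270, 365, 540, 730, 1100]

# Toy measurement table; the real one would hold one row per laboratory or
# calorimetry measurement with the patient identifier and days post-burn.
df = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2],
    "days_post_burn": [3, 5, 45, 12, 400],
    "value": [180.0, 170.0, 150.0, 160.0, 120.0],
})
df["phase"] = pd.cut(df["days_post_burn"], bins=edges, include_lowest=True)

# Average repeated measurements so each patient contributes one value per phase.
per_patient = df.groupby(["patient_id", "phase"], observed=True)["value"].mean()
print(per_patient)
```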
Hypermetabolism
Indirect calorimetry. As part of our routine clinical practice, all patients underwent REE measurements weekly during acute hospitalization and during admissions for reconstructive operations for up to three years post-burn. REE was measured using a SensorMedics Vmax 29 metabolic cart (Yorba Linda, CA, USA) as previously published [6]. REE was calculated from the oxygen consumption and carbon dioxide production by equations described before [6]. For statistical comparison, measured energy expenditure was expressed as a percentage of the basal metabolic rate predicted by the Harris-Benedict equation and related to body mass index (BMI) [6].
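For readers unfamiliar with the normalization step, the sketch below expresses a measured REE as a percentage of the Harris-Benedict prediction. The classical published Harris-Benedict coefficients are used here as an illustrative assumption, and the example patient values are invented; this is not the study's exact calculation pipeline.

```python
def harris_benedict_bmr(weight_kg: float, height_cm: float, age_yr: float, male: bool) -> float:
    """Classical Harris-Benedict basal metabolic rate (kcal/day), using the
    widely published coefficients (an illustrative assumption here)."""
    if male:
        return 66.47 + 13.75 * weight_kg + 5.003 * height_cm - 6.755 * age_yr
    return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_yr

def percent_predicted_ree(measured_ree: float, weight_kg: float, height_cm: float,
                          age_yr: float, male: bool) -> float:
    """Measured REE expressed as a percentage of the predicted basal rate."""
    return 100.0 * measured_ree / harris_benedict_bmr(weight_kg, height_cm, age_yr, male)

# Example: a measured REE of 1600 kcal/day in an 8-year-old boy (25 kg, 128 cm)
# corresponds to roughly 160% of the predicted basal metabolic rate.
print(percent_predicted_ree(1600.0, 25.0, 128.0, 8.0, male=True))
```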
Growth measurements and body composition. Heights and weights were measured during acute hospitalization and subsequent stays for reconstructive purposes and were plotted on standard growth charts [76] to obtain the individual height and weight percentiles for age and gender. Percentages of the population plotted within each percentile ranking were then calculated. Total lean body mass (LBM), fat, bone mineral density (BMD), and bone mineral content (BMC) were measured by dual-energy x-ray absorptiometry (DEXA). A Hologic model QDR-4500W DEXA (Hologic Inc, Waltham, MA) was used to determine body composition as previously published [7,21,77,78].
Organ changes. M-Mode echocardiograms were completed as follows: at the time of the study, none of the patients presented with or previously suffered from other concomitant diseases affecting cardiac function, such as diabetes mellitus, coronary artery disease, long-standing hypertension, or hyperthyroidism. Study variables included: resting CO, cardiac index (CI), stroke volume (SV), resting heart rate (HR) and left ventricular ejection fraction (LVEF). SV and CO were adjusted for body surface area and expressed as indexes. All cardiac ultrasound measurements were made with the Sonosite Titan echocardiogram with a 3.5 MHz transducer. Recordings were performed with the subjects in a supine position and breathing freely. M-Mode tracings were obtained at the level of the tips of the mitral leaflets in the parasternal, long axis position and measurements were performed according to the American Society of Echocardiography recommendations. Left ventricular volumes determined at end diastole and end systole were used to calculate EF, SV, CO and CI. Three measurements were performed and averaged for data analysis [21,78]. Liver size was determined by ultrasound as previously published [18].
Inflammatory and acute phase response
Urinary catecholamine and cortisol measurements. Twenty-four-hour urine collections were taken regularly throughout the acute hospital stay and during admissions for reconstructive operations and rehabilitation services. These samples were collected and chilled at the bedside prior to transport to our clinical lab for processing using HPLC techniques as previously published [17]. Extraction of the catecholamines from acidified urine samples was performed using a Bio-Rad kit (Bio-Rad, Hercules, CA), according to the manufacturer's instructions.
Serum cytokine, protein and hormone measurements. Blood was collected from the burn patients at the time of admission, preoperatively, and every Monday and Thursday at 6:00 AM, as well as during subsequent stays for surgical and rehabilitation services, for serum cytokine and hormone analysis per hospital protocol. Blood was drawn into a serum-separator collection tube and centrifuged for 10 minutes at 1320 rpm; the serum was removed and stored at −70°C until assayed.
Ethics and statistics
The study was reviewed and approved by the Institutional Review Board of the University of Texas Medical Branch, Galveston, Texas. Prior to the study, each subject, parent or child's legal guardian signed a written informed consent form. Analysis of variance (ANOVA) with post hoc Bonferroni correction, paired and unpaired Student's t-tests, Chi-square analysis, and Mann-Whitney tests were used where appropriate. Data are expressed as means ± SD or SEM, where appropriate. Significance was accepted at p<0.05.
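A hedged sketch of the kind of comparison described above (one-way ANOVA across post-burn phases followed by Bonferroni-corrected comparisons against the control cohort) is shown below. The data are synthetic placeholders and the pipeline is illustrative, not the authors' exact analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic data: one array of values per post-burn time phase plus controls.
controls = rng.normal(100, 10, size=30)
groups = [rng.normal(mu, 10, size=30) for mu in (150, 130, 115, 105)]

# Overall one-way ANOVA across the control cohort and all phases.
f_stat, p_anova = stats.f_oneway(controls, *groups)
print("ANOVA p =", p_anova)

# Bonferroni-corrected comparisons of each phase against the control cohort.
alpha = 0.05 / len(groups)
for i, g in enumerate(groups, start=1):
    t, p = stats.ttest_ind(g, controls)
    print(f"phase {i}: p = {p:.3g}, significant = {p < alpha}")
```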
Study oversight
This study was registered at www.clinicaltrials.gov: #NCT 00239668 and #NCT00673309. A steering committee consisting of academic investigators designed the study and monitored its conduct. Data were collected by the investigators and analyzed by scientists. All the authors had access to the data, participated in the data analysis and interpretation, and wrote the manuscript. All authors vouch for the accuracy and completeness of the data and the statistical analysis. All authors participated in the writing of the manuscript and approved the final manuscript before submitting it for publication.
QoS-Compliant 3D Deployment Optimization Strategy for UAV Base Stations
Unmanned aerial vehicles (UAVs) are being integrated as active elements in 5G and beyond networks. Because of their flexibility and mobility, UAV base stations (UAV-BSs) can be deployed according to the ground user distribution and the users' quality of service (QoS) requirements. Although there has been considerable prior research on UAV deployment, no work has studied this problem in a three-dimensional (3D) setting while taking into account the UAV-BS capacity limit and the QoS requirements of ground users. Therefore, in this paper, we focus on the problem of deploying UAV-BSs to provide satisfactory wireless communication services, with the aim of maximizing the total number of covered user equipment (UE) subject to user data rate requirements and the UAV-BSs' capacity limit. First, we model the relationship between the air-to-ground (A2G) path loss (PL) and the location of UAV-BSs in both the horizontal and vertical dimensions, which has not been considered in previous works. Unlike the conventional UAV deployment problem formulation, the 3D deployment problem is decoupled into a 2D horizontal placement and an altitude determination, connected by the path loss requirement and its minimization. Then, we propose a novel genetic algorithm (GA) based 2D placement approach in which UAV-BSs are placed to achieve maximum coverage of the users with consideration of the data rate distribution. Finally, numerical and simulation results show that the proposed approach achieves a better coverage percentage compared with other schemes.
I. INTRODUCTION
Unmanned aerial vehicle (UAV)-assisted communications have recently gained popularity as an effective solution to complement traditional stationary base stations. Unmanned aerial vehicle base stations (UAV-BSs) have the rapid-deployment and reconfiguration advantages compared to terrestrial ones [1]. The roadmap of a telecommunication infrastructure provider [2] and 3GPP technical reports [3] have demonstrated promising field trial results of wireless connectivity to the UAVs, and discussed the future ubiquitous mobile broadband coverage both on the ground and in the sky. The fifth generation (5G) and beyond wireless communications and Internet-of-Things (IoT) application scenarios can be facilitated by the UAV communication or treat it as a critical integral part [4], [5], [6]. The advantages of high mobility and flexibility of UAVs as part of the high-performance wireless communications network can also potentially serve the broadcasting industry. For example, UAVs are used to conduct object-oriented tracking during aerial filming [7] and transmit the broadcasting streams simultaneously. Moreover, a mathematical model was proposed to overcome the problem of sport event filming with connectivity constraints in [8]. A taxonomy to formulate the concept of multiple-UAV cinematography was proposed to enable the autonomous UAV filming in [9].
Despite the benefits in enabling UAV-BSs in the broadcasting and communications industry, there are significant challenges in terms of UAV system design and deployment strategies. For example, finding suitable UAV-BSs' positions when deploying the UAV-BSs network is particularly difficult in terms of cost-efficiency. Since the life time of the battery powering one UAV-BS is limited and the number of available UAV-BSs is also constrained, UAV-BSs should be deployed in a method which maximizes the number of covered users in an energy-efficient way. Another critical challenge is that in practical situations, different user equipment (UE) may have different quality of service (QoS) requirements while each UAV-BS has limited data rate capacity. Therefore, the rational distribution of the radio resources needs to be considered.
Research on UAV-BS deployment has focused on finding horizontal positioning [10]- [12] and altitude optimization [13]- [15]. In [10] and [11], an identical coverage radius is assumed for all UAV-BSs. The work in [10] proposes an efficient spiral placement algorithm aiming to minimize the required number of UAVs, while [11] models the UAV deployment problem based on circle packing theory and studies the relationship between the number of deployed UAV-BSs and the coverage duration. In [12], the authors use a K-Means clustering method to partition the ground users into k subsets, and users belonging to the same subset are served by one UAV. All these works have a fixed altitude assumption. The relationship between the altitude of UAV-BSs and the coverage area is studied in [13] and [14]. In [13], the method of finding the optimal altitude of a single UAV placement for maximizing the coverage is studied based on a channel model with probabilistic path loss (PL). Reference [14] formulates an equivalent problem based on the same channel model as [13] and proposes an efficient solution. Moreover, [15] studies multiple UAV-BS 3D placements with a given radius taking into account energy efficiency by decoupling the UAV-BS placement in the vertical dimension from the horizontal dimension. In recent years, artificial intelligence algorithms have also developed rapidly and been applied in various research fields.
In this paper, we investigate multiple 3D UAV-BS deployment with the aim to maximize the number of UAV-served UEs under realistic conditions where each UE has a QoS compliance including a maximum tolerated path loss and a unique data rate requirement and each UAV-BS has a limited sum capacity. The novelty and contributions are summarized as follows: • First, in order to consider a more practical deployment scenario, the QoS compliance of ground users is measured by taking into account the maximum allowed path loss and a unique data rate requirement. It is worth mentioning that the existing problems and results in the literature ignore the QoS requirements, while QoS compliance leads to different coverage radii of UAV-BSs. • Second, the 3D placement problem is treated as a 2D deployment by placing multiple circles of various sizes in the horizontal dimension and then determining the altitude of UAV-BSs, which simplifies the original problem without losing the accuracy. • Last, a new genetic algorithm (GA) based UAVs deployment strategy and framework is proposed and proved to provide an effective solution and performance in comparison. The remainder of this paper is organized as follows. Section II conducts the problem formulation and provides the system model. In Section III, we first analyze the 3D deployment problem and then decouple it into a 2D problem, followed by the determination of the UAVs altitudes. In Section IV, the GA algorithm is investigated and analyzed for solving the 2D placement problem. Section V presents the numerical results and discussions. Finally, Section VI concludes the entire paper with the future work. Fig. 1 shows a communication network model where many UEs are clustered to be served by multiple UAV-BSs. The objective is to find the optimal locations for UAV-BSs so that the ground users' coverage ratio and the coverage radii can be maximized. Let P be the set of all the UEs which are labelled as i = 1, 2, ... |P|. Each UE has a unique data rate requirement c i and all UEs have a maximum tolerated path loss P L max that serves the purpose to guarantee all the data rate requirements from UEs are feasible, for QoS compliance. Q denotes the set of available UAV-BSs labelled as j = 1, 2, ... |Q| and each UAV-BS has a data rate capacity C j . In our system, we assume that no ground base station is available but the locations and data rate requirement of all users are pre-known. Furthermore, in spite of the well-known interference issues in UAV-assisted networks, such as multicell co-channel interference [16], [17], this work does not take into account the said interference which can be mitigated by various techniques such as, frequency planning, multi-beam UAV communication scheme [18], mmWave multi-stream multi-beam beamforming [19], non-orthogonal multiple access (NOMA) technique [20], cooperative NOMA scheme [21], cooperative interference cancellation strategy [18], [22], and other interference cancellation techniques.
II. SYSTEM MODEL
The A2G channel modeling follows [13], where line-of-sight (LoS) occurs with a certain probability, which falls into our application scenarios. The probabilities of a LoS and non-line-of-sight (NLoS) channel between UAV j at the horizontal position m_j = (x_j, y_j) and user i at the horizontal location u_i = (x̃_i, ỹ_i) are formulated as in [13], where H_j is the altitude of UAV-BS j; a and b are environment dependent variables; r_ij = √((x_j − x̃_i)² + (y_j − ỹ_i)²) is the horizontal Euclidean distance between the i-th user and the j-th UAV. Then the path loss for LoS and NLoS can be written as in [13], where f_c is the carrier frequency, c is the speed of light, and d_ij denotes the distance between the UE and the UAV-BS given by d_ij = √(H_j² + r_ij²). Moreover, η_LoS and η_NLoS are the environment dependent average additional path losses for the LoS and NLoS conditions, respectively. According to (1) and (2), the path loss (PL) can be written as a probability-weighted average, where the coverage radius is a function of both the altitude H and PL_max, by keeping the urban environment statistical parameter set as (9.61, 0.43, 0.1, 20).
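For concreteness, a minimal sketch of this channel model is given below. It assumes the widely used probabilistic-LoS form of reference [13] (a sigmoid LoS probability in the elevation angle plus free-space path loss with the additional LoS/NLoS losses); the default parameters are the urban set quoted in the text, and the function name and structure are ours, not the paper's.

```python
import math

def path_loss_db(r, h, fc=2e9, a=9.61, b=0.43, eta_los=0.1, eta_nlos=20.0):
    """Average A2G path loss (dB) between a UAV-BS at altitude h (m) and a user at
    horizontal distance r (m), assuming the probabilistic LoS model of [13]."""
    c = 3e8
    d = math.hypot(r, h)                           # 3D UAV-to-user distance d_ij
    theta_deg = math.degrees(math.atan2(h, r))     # elevation angle in degrees
    p_los = 1.0 / (1.0 + a * math.exp(-b * (theta_deg - a)))
    fspl = 20.0 * math.log10(4.0 * math.pi * fc * d / c)   # free-space path loss
    return p_los * (fspl + eta_los) + (1.0 - p_los) * (fspl + eta_nlos)

# Example: path loss at the edge of a 500 m cell for a UAV-BS at 200 m altitude
# print(path_loss_db(r=500.0, h=200.0))
```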
III. UAV-BS 3D DEPLOYMENT PROBLEM
For the problem at hand, the 3D deployment of UAV-BSs can be decomposed into the 2D horizontal locations optimization and altitude determination. This is because the UAV altitude only impacts the cell radius and path loss experienced in the cell, while the horizontal location and a radius determine which UEs are covered by the UAV. As clearly seen in Fig. 2, for a given P L max , there is a maximum radius R max and a corresponding altitude H max . If the altitude is smaller or larger than H max , while maintaining the same radius, the path loss on the cell edge will be larger than the given P L max . Since the cell radius affects the total number of the covered UEs, we want the cell radius to be maximized in order to potentially cover more users. Hence the 3D deployment solution takes the procedure as follows. First, a maximum cell radius upper bound R max that guarantees the desired P L max requirement is derived. Second, the 2D placements of |Q| UAVs and their respective coverage radii bounded by R max that maximize the total number of UEs supported while satisfying the individual data rate requirements and the UAV capacity constraint are formulated and solved. Finally, given the actual coverage radius of each UAV obtained from the second step, the altitude that leads to the achieved minimum cell edge path loss is determined.
A. 2D UAV-BS Deployment Problem
Since we model the 2D deployment problem via placing multiple circles of different sizes, unlike authors in [23] who investigate a problem of solving for the least number of UAVs to cover users in a region, this problem is equivalent to finding the appropriate location and radius for each UAV-BS to cover as many UEs as possible while simultaneously satisfying the data rate requirements and the UAV capacity constraint.
A binary variable γ_ij ∈ {0, 1} is used to indicate whether or not the user i is covered by UAV-BS j, 1 for service and 0 for no coverage. The necessary condition for user i to be covered by UAV-BS j is that the horizontal Euclidean distance between them is less than the coverage radius of UAV-BS j, R_j, which can be written as γ_ij ‖m_j − u_i‖ ≤ R_j. Following [11], the constraint equation can be rewritten with a large constant M, which is larger than the largest horizontal distance between a user and a UAV, so that the constraint holds in any condition. As defined earlier, m_j = (x_j, y_j) stands for the horizontal position of UAV j, while u_i = (x̃_i, ỹ_i) represents the horizontal location of user i.
If a user is within the serving area of a UAV-BS, the UAV-BS can allocate certain data channels to the user which has a unique data rate requirement c i . For simplicity, we assume that for any UE, the allocated data rate equals what it requires. Then the data rate allocation problem can be expressed as At this stage, the UAV deployment problem becomes a rucksack-like problem in combinatorial optimization, which is a NP-hard problem. It can be expressed as Our objective is to maximize the number of served users. First, C1 in (4), guarantees that a UE can be served by a UAV-BS, when the horizontal distance between the UE and the UAV-BS is less than UAV-BS's coverage radius. Then C2 regulates that the total data rate of all covered users served by one UAV-BS cannot exceed the data rate capacity of the UAV-BS. Furthermore, C3 ensures each user should be served by at most one UAV-BS. Last, Fig. 2 shows that the function of coverage radius respective to altitude for a given P L max is a concave function so there exists a maximum radius R max that any coverage radii R > R max does not have a feasible solution. Thus, C4 ensures that the radii of UAV-BSs are no larger than R max . A genetic algorithm to solve this optimization problem will be presented in the next section.
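The displayed form of this optimization problem did not survive extraction. A plausible reconstruction, assembled only from the constraints C1-C4 described above (and therefore an interpretation rather than the authors' exact equation (4)), is:

```latex
\max_{\{\gamma_{ij}\},\,\{m_j\},\,\{R_j\}} \;\; \sum_{i\in P}\sum_{j\in Q}\gamma_{ij}
\quad \text{s.t.}\quad
\begin{aligned}
&\text{C1: } \lVert m_j-u_i\rVert \le R_j + M(1-\gamma_{ij}), && \forall i\in P,\ \forall j\in Q,\\
&\text{C2: } \textstyle\sum_{i\in P}\gamma_{ij}\,c_i \le C_j, && \forall j\in Q,\\
&\text{C3: } \textstyle\sum_{j\in Q}\gamma_{ij} \le 1, && \forall i\in P,\\
&\text{C4: } 0 < R_j \le R_{\max}, && \forall j\in Q, \qquad \gamma_{ij}\in\{0,1\}.
\end{aligned}
```

Here the big-M form of C1 follows the rewriting attributed to [11] in the paragraph above.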
B. The Determination of UAV-BS Altitude
After Subsection III-A, the horizontal locations and coverage radii of UAV-BSs have been determined and all the coverage radii are less than R_max. Therefore, for each UAV-BS, the range of altitude which results in a PL value less than PL_max can be obtained from Fig. 2. The objective for this step is to find the optimal altitude for each UAV-BS which requires the least transmit energy, i.e., the minimum path loss, to provide service for the coverage range derived in step 1. As observed from (3), the path loss between a UAV-BS and a UE is a function of the horizontal distance r and the altitude H, that is, PL = f(r, H). Also, from Fig. 2, for a given PL_max, defining the elevation angle θ by tan θ = H/R, there exists an elevation angle θ_max that maximizes the radius R, found by solving ∂R/∂H = 0. As derived in [13], θ_max satisfies the following equation:

π/(9 ln 10) · tan(θ_max) + a b A exp(−b((180/π)θ_max − a)) / (a exp(−b((180/π)θ_max − a)) + 1)² = 0 (5)

where θ_max is environment dependent, so it is a constant in a given environment. It has been proven by [15] that this elevation angle provides the minimum PL of the users on the boundary, which is equivalent to the PL of all the UEs within the covered range being minimized, so the required transmit power of the UAV-BS is minimized. Therefore, once the actual coverage radius R of each UAV-BS is obtained in Subsection III-A, the UAV-BS altitude H_opt is given by H_opt = R tan(θ_max). Fig. 3 shows the relationship between PL and altitude for given radii. It can be observed that as long as the radius is fixed, a minimum value of PL always exists.
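As an illustration of this step, the sketch below numerically solves equation (5) for θ_max by bisection and then computes H_opt = R tan(θ_max). It is not the authors' code: the environment parameters default to the urban set quoted later in the paper, and A is assumed here to equal η_LoS − η_NLoS, which is not spelled out in the extracted text.

```python
import math

def theta_max_rad(a=9.61, b=0.43, eta_los=0.1, eta_nlos=20.0):
    """Solve equation (5) for the optimal elevation angle (radians) by bisection.
    A is assumed to be eta_los - eta_nlos; this is an interpretation, not a quote."""
    A = eta_los - eta_nlos
    def f(theta):
        deg = math.degrees(theta)
        e = math.exp(-b * (deg - a))
        return (math.pi / (9.0 * math.log(10.0))) * math.tan(theta) \
               + (a * b * A * e) / ((a * e + 1.0) ** 2)
    lo, hi = math.radians(1.0), math.radians(89.0)
    for _ in range(60):                      # bisection on the sign change of f
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def optimal_altitude(radius_m, **env):
    """H_opt = R * tan(theta_max) for the coverage radius found in Subsection III-A."""
    return radius_m * math.tan(theta_max_rad(**env))
```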
IV. GENETIC ALGORITHM BASED 2D PLACEMENT
In order to solve complex optimization problems, there have been a wide range of applications of swarm intelligence algorithms [24], [25], [26], [27], [28] and evolutionary-based metaheuristics [29], [30], [31]. For example, [24] presented a detailed analysis on evolutionary algorithm based real-life applications. In this section, we present a GA based UAV-BS deployment strategy to provide wireless services for a group of UEs. The objective is to solve the optimization problem (4). The genetic algorithm is an efficient solution to the complex optimization problems with multiple variables, widely appearing in real-life optimization problems of a variety of fields. For example, [32] introduced a method based on GA and deep learning to predict financial behaviours. Moreover, the genetic algorithm was indeed applied in the cellular communications related research field such as facilitating terrestrial base station placement and showed excellent efficiency [33].
GA works on a population which consists of some candidate solutions and the population size is the total number of solutions [34]. Each solution is considered to be a chromosome and each chromosome has a set of genes where each gene is represented by the features of the solutions. Then, each individual chromosome has a fitness value which is computed based on the fitness function representing the quality of the chromosome. Moreover, a selection method called roulette wheel method where the chromosome with higher fitness value has a higher chance to survive the population.
However, the selection process can only assure in each generation, a better solution has a higher chance to enter the next generation. In order to ensure the diversity of the solution to avoid falling into local optimal solutions, crossover and selection are applied after the selection process. In the crossover procedure, two chromosome are selected in a probability of crossover rate to exchange information so new chromosomes are generated. Also, in the mutation procedure, each chromosome has a probability of mutation rate to replace a set of genes with new random values. The basic GA process has various variables including population size, crossover rate and mutation rate. The population size determines the size of candidate solutions. The value of crossover rate and mutation rate represent the diversity of the candidate solutions throughout the iteration. The whole process repeats until the time step reaches an iteration limit. Fig. 4 illustrates the whole process of GA.
As illustrated in Algorithm 1, the horizontal location, and the coverage radius of each UAV-BS are treated as a gene in the GA model. Therefore, for UAV-BS j, the combination (x j , y j , R j ) is a gene. Placing genes for all the available UAV-BSs together, i.e., {x j , y j , R j } j∈Q makes a chromosome. The required inputs include K, D, P, Q, R max , {c i } i∈P , {u i } i∈P , θ opt , p m , p c where K is the number of iterations for finding the optimal result, D, p m and p c are the population size, mutation rate and crossover rate for GA respectively. The outputs are the horizontal locations, altitudes and coverage radii, denoted by O j , j = 1, 2, ... |Q|, of all the UAV-BSs.
First, |Q| empty lists are created and each of them is to store the covered UEs of the corresponding UAV-BSs. Also, two arrays r,r are created, respectively, to store the number of covered UEs in each UAV-BS and the total number of covered UEs of all UAV-BSs known as the fitness score. In step 3, the first population ν 1 is generated by creating D chromosomes where the horizontal locations of all UAV-BSs are initialized by assigning each of them with the equidistant point of 3 random UEs' locations, and the coverage radius is initialized by generating a random numbers in the range from 1 to R max .
Then, K iterations are executed to find the 2D deployment result from Step 4 to Step 20. In Step 5 and Step 6, if the horizontal distance between a UE and a UAV-BS is less than the coverage radius, the UE can be served by the UAV-BS. Also, if a UE is within the coverage range of more than one UAV-BS, it is assigned to the closest one. In the for loop from Step 7 to Step 16, the sum data rate Σ_{p∈O_j} c_p of all covered UEs is calculated for each UAV-BS. If the sum data rate is smaller than the data capacity C_j, the number of covered UEs |O_j| is stored to array r. Otherwise, a negative number is stored to array r̂ and the algorithm breaks out of the loop and goes back to Step 5, which means the fitness of this chromosome is negative. In Step 15, the fitness function of the chromosome is the total number of covered UEs and it is saved into array r̂.
In Step 17, the roulette wheel method is applied to update the current population ν_k. A random chromosome is selected within the current population to be the competitor. Comparing the fitness scores of all the chromosomes with the competitor, the chromosomes with lower fitness scores are replaced by the competitor. Afterward, in the crossover procedure, a fraction p_c of chromosomes are randomly selected and paired. Each pair is considered to be the parent chromosomes. In each pair of parent chromosomes, the first half of the genes of one chromosome and the second half of the genes of the other chromosome are exchanged to produce children chromosomes. In Step 19, each chromosome has a probability of p_m to undergo the mutation process, in which one gene of the mutated chromosome is selected to be replaced by a new random gene. Finally, in Step 21 and Step 22, we obtain the result of horizontal locations and coverage radii of UAV-BSs by choosing the chromosome with the maximum fitness score, and the optimal altitudes are obtained by H_opt = R tan(θ_max).
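To make the above walkthrough concrete, a compressed Python sketch of the fitness evaluation and the GA loop is given below. It is our illustrative reconstruction of Algorithm 1, not the authors' implementation: the "equidistant point of 3 random UEs" is approximated by a centroid, the competitor-based replacement stands in for the roulette wheel, and the defaults are kept small so the sketch runs quickly (the paper's setting is K = 17000, D = 100, p_m = 0.01, p_c = 0.8).

```python
import math
import random

def fitness(chromosome, users, rates, capacity):
    """Total number of covered UEs for one chromosome (Steps 5-16 as read above).
    A chromosome is a list of (x, y, R) genes, one gene per UAV-BS."""
    covered = [[] for _ in chromosome]
    for i, (ux, uy) in enumerate(users):
        best, best_d = None, float("inf")
        for j, (x, y, R) in enumerate(chromosome):
            d = math.hypot(ux - x, uy - y)
            if d <= R and d < best_d:        # assign the UE to the closest covering UAV
                best, best_d = j, d
        if best is not None:
            covered[best].append(i)
    for ues in covered:
        if sum(rates[i] for i in ues) > capacity:   # capacity constraint C2 violated
            return -1
    return sum(len(ues) for ues in covered)

def ga_place(users, rates, n_uav, r_max, capacity,
             pop=30, iters=200, p_c=0.8, p_m=0.01):
    """Condensed GA loop: competitor-based replacement, one-point crossover on
    gene boundaries, and per-chromosome mutation."""
    def random_gene():
        pts = random.sample(users, 3)
        cx = sum(p[0] for p in pts) / 3.0    # centroid as a simple stand-in for
        cy = sum(p[1] for p in pts) / 3.0    # the "equidistant point" of 3 UEs
        return (cx, cy, random.uniform(1.0, r_max))
    popu = [[random_gene() for _ in range(n_uav)] for _ in range(pop)]
    for _ in range(iters):
        scores = [fitness(c, users, rates, capacity) for c in popu]
        comp = random.randrange(pop)         # competitor chromosome
        popu = [popu[comp] if s < scores[comp] else c
                for c, s in zip(popu, scores)]
        for k in range(0, pop - 1, 2):       # crossover with probability p_c
            if random.random() < p_c:
                h = n_uav // 2
                popu[k], popu[k + 1] = (popu[k][:h] + popu[k + 1][h:],
                                        popu[k + 1][:h] + popu[k][h:])
        for k in range(pop):                 # mutation with probability p_m
            if random.random() < p_m:
                genes = list(popu[k])
                genes[random.randrange(n_uav)] = random_gene()
                popu[k] = genes
    return max(popu, key=lambda c: fitness(c, users, rates, capacity))
```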
Since we model our problem to be a rucksack problem and put additional constraints on it, the computational complexity is proportional to the search space. In our proposed algorithm, Step 4 to Step 16 have complexity O(KLD) where K is the number of UAV-BSs, L is the iteration time, D is the number of chromosomes.
Step 17 to 19 perform the mathematical operation and cost O(1), and similarly, step 21 and 22 also cost O(1). Therefore, the time complexity of the GA UAV-BS deployment method is O(KLD).
V. NUMERICAL RESULTS
In our simulations, we consider the UEs are uniformly distributed in a 5000 m × 5000 m area. Referring to [13], the environment parameters are set up as follows: f_c = 2 GHz, PL_max = 110 dB, and (a, b, η_LoS, η_NLoS) is configured to be (4.88, 0.43, 0.1, 21), (9.61, 0.43, 0.1, 20), (12.08, 0.11, 1.6, 23), (27.23, 0.08, 2.3, 34) corresponding to suburban, urban, dense urban and high-rise urban environments, respectively. Also, we assume there are three different data rate requirements among the UEs, c_1 = 5 × 10^6 bps, c_2 = 2 × 10^6 bps and c_3 = 1 × 10^6 bps, and each UE has one of these three data rate requirements. Moreover, all the UAV-BSs have the same data rate capacity C = 1 × 10^8 bps. Fig. 5 illustrates the UE distribution and the GA deployment result with 100% coverage percentage. In our optimization problem, there are four variables which we need to set up, which are population size, iteration number, mutation rate and crossover rate. According to [35], the ranges of crossover rate and mutation rate are within [0.5, 0.8] and [0.01, 0.05], respectively. In our problem, in order to analyze how those two parameters affect the algorithm efficiency, in the simulation we fix the iteration number and the population size to be 10000 and 100 respectively, and deploy 10 UAV-BSs to cover 200 UEs. As a result, Fig. 7 and Fig. 8 show that in our optimization problem those two parameters hardly have an impact on the efficiency of the convergence. Furthermore, the parameters (p_m, p_c) are configured to be (0.01, 0.8). The time complexity of GA is related to the multiplication of population size and iteration number, as mentioned in Section IV. In other words, these two parameters have a significant impact on the algorithm efficiency, where a smaller multiplication results in a higher GA efficiency. Thus, we conduct an analysis of the relation between the population size and the minimum required iteration number. The minimum required iteration number is defined to be the number of iterations taken to get the fitness value converged to a certain number for 15% of the entire iteration number. Moreover, Table I shows that setting the population size to 100 paired with an iteration number of 17000 can make the GA demonstrate good performance in terms of efficiency. Therefore, the GA parameter set (K, D, p_m, p_c) is configured to be (17000, 100, 0.01, 0.8).

TABLE I
Population size | Minimum required iteration | Multiplication
50  | 35380 | 1769000
75  | 23031 | 1727325
100 | 16875 | 1687500
150 | 15984 | 2397600
200 | 16036 | 3207200
300 | 15944 | 4783200
500 | 16015 | 8007500

Fig. 6 shows the average coverage ratios of 80 UEs by 10 available UAV-BSs with 15 realizations in four different environments when increasing the number of UAV-BSs. As seen from Fig. 6, the coverage ratio varies significantly in the four deployment scenarios, particularly with the high-rise urban one being much more challenging than the others.
By applying the Shannon capacity theorem, the required SNR of each UAV-BS can be calculated through C = B log2(1 + P_r/P_n), where B is the bandwidth of the channel, and P_r and P_n denote the required received power and the average noise power, respectively. In our model, we assume that B = 1 × 10^7 Hz, P_r = −74 dBm and P_n = −100 dBm. Thus, we can obtain the minimum required power for each UAV-BS by P_t = P_r + PL(R_j, H_j). Fig. 9 further depicts the average minimum required transmit power of all UAV-BSs when increasing the number of UEs, in the urban environment, with 15 available UAV-BSs, and 4 different approaches that determine altitudes. In the fixed altitude approach, all the UAV-BSs are deployed at the same altitude. In the random altitude approach, each UAV-BS is deployed at an altitude that is uniformly drawn from a feasible range. The altitudes for both the fixed and random altitude approaches are selected from the range where the PL_max requirement is met. As we can see, if the UAV-BSs are deployed at the altitude determined in the way we proposed, less average transmit power is required to provide wireless service. For further performance comparison, when given 10 available UAV-BSs with urban environment parameters, we test 5 algorithms to obtain the coverage percentage of UEs. In each algorithm test, we generated 15 uniform UE distributions of 80, 200 and 450 UEs, respectively, in the same square region. Besides the proposed GA deployment strategy, we have simulated four other schemes for comparison. The first one is random placement, which randomly selects a location in a uniform distribution within the square region and a coverage radius. The second one is the K-means algorithm, which partitions the UEs into K̂ clusters to be covered by K̂ UAV-BSs. The third one is a linear programming method called branch and cut [36], which breaks down each UAV-BS placement into sub-problems and optimizes each placement. The fourth one is called greedy search, which does the UAV-BS placement one by one and maximizes the covered UEs in each placement. Compared with the four other algorithms shown in Table II, namely K-Means, Branch and Cut, Greedy Search, and Random, GA has demonstrated a significant advantage in solving the optimization problem with many variables involved. It is observed that the GA based deployment has a higher coverage percentage, and this advantage is more pronounced when the number of UEs increases, at the cost of higher complexity and more computing resources. Furthermore, the proposed GA algorithm can potentially be applied to more application scenarios, such as geoscience and remote sensing [37], cloud computing [38], performing tasks by self-sufficient autonomous robots [39], automatic voltage regulator system design optimization [40], etc. Furthermore, a wide range of real-time applications, such as biomedical wireless power transfer (WPT) [41], multi-core systems [42], real-time design of thinned array antennas [43], fault repair schemes [44], automatic mode-locked fiber lasers [45], and traffic surveillance [46], have been realized based on GA algorithms.
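A small sketch of the transmit-power calculation used for Fig. 9 is shown below; it simply applies the two stated formulas, with the cell-edge path loss supplied by whatever channel model is used (for instance, the PL sketch given in Section II). The function name and defaults are ours.

```python
import math

def required_tx_power_dbm(cell_edge_pl_db, bandwidth_hz=1e7,
                          capacity_bps=1e8, noise_dbm=-100.0):
    """Minimum UAV-BS transmit power (dBm): the received power needed to reach the
    target capacity via C = B*log2(1 + Pr/Pn), plus the cell-edge path loss."""
    snr_linear = 2.0 ** (capacity_bps / bandwidth_hz) - 1.0
    p_r_dbm = noise_dbm + 10.0 * math.log10(snr_linear)   # required received power Pr
    return p_r_dbm + cell_edge_pl_db                        # Pt = Pr + PL(Rj, Hj)

# Example: B = 10 MHz, C = 100 Mbps, Pn = -100 dBm, 103 dB cell-edge path loss
# print(required_tx_power_dbm(cell_edge_pl_db=103.0))
```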
Last, in order to evaluate how the errors of detecting UEs' precise and exact locations affect the numerical results, a simulation is performed in the same area when the UEs' locations have a maximum localization error of 5 meters (a normal localization resolution of outdoor localization techniques, e.g., GPS), in uniform random directions. Consequently, the numerical results of the average coverage ratio are given after running the simulations 15 times, with UEs set to 80, 200 and 450, respectively. From Table III, it can be observed that the 5-meter localization error hardly affects the result. Therefore, our proposed method maintains reliable performance even in a practical and challenging application scenario.
VI. CONCLUSIONS
This research has proposed and evaluated a cost-efficient 3D UAV-BS deployment algorithm for providing real-life wireless communication services when all the UEs are randomly distributed with various data rate requirements. A novel and practical GA-based UAVs deployment algorithm has been designed to maximize the number of covered UEs while simultaneously meeting the UEs' individual data rate requirements under the capacity limit of UAV-BSs, which have not been considered in existing works. The proposed algorithm outperforms four conventional approaches in terms of the coverage ratio, with good tolerance to UEs' localization errors.
A possible future work is to extend the GA-based deployment algorithm to the applications when A2G interference model is involved. Also, a real experimental validation involving both UAVs and ground users will be interesting to implement to verify the original idea and algorithm proposed in this paper.
|
2020-08-10T01:00:28.620Z
|
2020-08-07T00:00:00.000
|
{
"year": 2020,
"sha1": "d9a91555edb7c2a7aec527fc421f61a778710aff",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2008.03125",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d9a91555edb7c2a7aec527fc421f61a778710aff",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Engineering",
"Computer Science"
]
}
|
256254195
|
pes2o/s2orc
|
v3-fos-license
|
Stress and Displacement of Deep-Buried Composite Lining Tunnel under Different Contact Conditions
Stress and displacement of the composite lining are important factors to be considered during tunnel design. By the complex variable method, analytical solutions for stress and displacement of surrounding rock, primary support and secondary lining satisfying the interface continuity and boundary conditions under far-field stresses are derived. Taking the railway composite lining tunnel as an example, the analytical distributions of stress and displacement along boundaries are given, which are in good agreement with the numerical solution calculated by finite element software. The results show that the maximum normal stress ratio (load sharing ratio) of the outer boundary between the secondary lining and the primary support is 0.74. The radial displacements of the inner boundary of surrounding rock, primary support, and secondary lining change consistently. The maximum settlement and uplift occur at the vault and bottom, respectively. The tangential stress of the secondary lining is compressive stress, while the tangential stress of the primary support is tensile and compressive stress. The maximum tangential stress of the primary support and secondary lining is smaller than the allowable stress of concrete.
Introduction
The complex variable function is the most effective method to solve deep-buried tunnel problems. Obtaining the conformal mapping function of the tunnel is the critical step when solving the mechanical response. The conformal mapping functions of unlined tunnels can be obtained by different methods. Lu [1] and Wang [2] obtained the conformal mapping function of noncircular tunnels with a single lining.
For circular tunnels with single-layer lining, Wang and Li [14] obtained stress and displacement around a circular tunnel at great depth subjected to uniform internal pressure and unequal biaxial in situ stresses.Li et al. [15] deduced the elastic-plastic analytical solution for the stress and displacement of circular tunnel subjected to uniform internal and external pressure.Kargar [16] proposed analytical solutions around lined and unlined circular tunnels in viscous rock mass.Guo et al. [17] studied the stress, displacement, and stability of a deep lined circular pressure tunnel by combining the complex variable method with the Biot theory.For noncircular tunnels with single-layer lining, Kargar et al. [18,19] and Lu et al. [20] investigated analytical stress solutions of inverted U-shaped tunnels according to diferent contact conditions between rock mass and lining, respectively; analytical displacement solutions were also given by Lu et al. [21] and Wang et al. [22] in the isotropic and orthotropic rock mass, respectively.Li and Chen [23,24] and Liu et al. [25] obtained the analytical solutions for stress and displacement around horseshoe-shaped tunnels in the isotropic and orthotropic rock mass, respectively.Chen et al. [26] and Fang et al. [27] obtained analytic solutions of two circular and multiple noncircular tunnels at great depth considering the mutual interaction between linings and rock mass, respectively.
Zhou and Yang [28] and Zhou et al. [29,30] calculated the support loads of circular composite lining tunnels consisting of primary support and secondary lining in elastic and rheological rock mass, respectively.Li et al. [31] proposed the analytical stress solution of circular water conveyance tunnel with composite lining subjected to uniform internal pressure.Ramadan et al. [32], Maleska and Beben [33], and Embaby et al. [34] analysed numerically large-span culverts and soil-steel bridges.Maleska et al. [35], Maleska and Beben [36], Shen et al. [37], Jiang et al. [38], and Chen et al. [39] obtained numerical seismic response of soil-steel bridges, a shield tunnel, and subway stations.
For noncircular tunnels, it can be seen from the above literature that there are many theoretical research results for unlined tunnels and single-layer lining tunnels, while there are few research results for composite lining tunnels widely used in practice. In this paper, the analytical solutions of stress and displacement of surrounding rock, primary support, and secondary lining of the noncircular composite lining tunnel are derived by using the complex variable function method and verified by the finite element software ANSYS, which provides a theoretical basis for safe and economical tunnel design.
Conformal Mapping Function of Composite Lining Tunnel
Figure 1(a) shows the load structure diagram of a composite lining tunnel with noncircular cross sections, where R, L1, and L2 represent the areas of surrounding rock, primary support, and secondary lining, respectively. According to the theory of elastic mechanics, deep-buried tunnels can be simplified as a plane strain model with holes in infinite surrounding rock. Assuming that the buried depth is deep enough, the effect of gravity can be neglected. The surrounding rock is subjected to far-field stresses P and λP along the Ox- and Oy-axes, respectively, and λ is the lateral pressure coefficient. The conformal mapping function of the composite lining tunnel relates z and ζ, where z = x + iy, ζ = ρ exp(iθ), x and y are the rectangular coordinates in the physical z plane, and ρ and θ are the polar coordinates in the image ζ plane. The three concentric circles with radii ρ = 1, R1, and R2 in the image ζ plane are transformed into the outer boundary of primary support, the inner boundary of primary support (the outer boundary of secondary lining), and the inner boundary of secondary lining, respectively, as shown in Figure 1(b). m is the number of terms of the conformal mapping function. Ck, R1, and R2 are the coefficients related to the shape and size of the composite lining, which can be solved by Fan [40].
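Because the displayed mapping expression did not survive extraction, the sketch below only illustrates, in generic form, how such a truncated series mapping ω(ζ) is evaluated to trace the three mapped boundaries; the coefficient values in the example are placeholders, not the paper's m, Ck, R1, R2.

```python
import numpy as np

def boundary_points(coeffs, rho, n=361):
    """Image in the physical z plane of the circle |zeta| = rho under a truncated
    series mapping omega(zeta) = sum_k C_k * zeta**k; coeffs maps power k -> C_k."""
    theta = np.linspace(0.0, 2.0 * np.pi, n)
    zeta = rho * np.exp(1j * theta)
    z = sum(c * zeta ** k for k, c in coeffs.items())
    return z.real, z.imag                  # x(theta), y(theta) on the mapped boundary

# Hypothetical coefficients; trace the boundaries mapped from rho = 1, R1, R2
coeffs = {1: 5.0, 0: 0.2, -1: 0.6, -3: 0.1}
for rho in (1.0, 0.9632, 0.8975):
    x, y = boundary_points(coeffs, rho)
```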
Basic Equations
Te stress and displacement components of surrounding rock and composite lining are obtained by stress functions.Te stress functions of surrounding rock, primary support, and secondary lining and the stress and displacement components in both Cartesian and curvilinear coordinates are given.
3.1.Stress Functions.Te stress functions φ 1 (ζ) and ψ 1 (ζ) of surrounding rock can be expressed as follows: where constants B, B ′ , and C ′ are related to far-feld stress P and λP, where 10n + 6 unknown real coefcients a j , b j , d j , f j , g j , h j , p j , q j , s j , and t j can be determined by the interface continuity and boundary conditions.
Stresses and Displacements.
The stress components σx, σy, and τxy and the displacement components ux and uy in the Cartesian coordinates xoy can be expressed through the stress functions. The stress and displacement components in the curved coordinates are expressed analogously, where σρ, σθ, and τρθ are the radial, tangential, and shear stress components, respectively, and uρ and uθ are the radial and tangential displacements, respectively. The stress functions φ(ζ) and ψ(ζ) in equations (7)-(12) are replaced by φ1(ζ) and ψ1(ζ) for surrounding rock, φ2(ζ) and ψ2(ζ) for primary support, and φ3(ζ) and ψ3(ζ) for secondary lining. The shear modulus G and constant κ in equations (9) and (12) are replaced by the shear modulus G1 and κ1 of surrounding rock, G2 and κ2 of primary lining, and G3 and κ3 of secondary lining. Then, the stress and displacement components of surrounding rock, primary support, and secondary lining can be obtained.
Te relationship between surface force and stress functions is as follows: where f x and f y are the surface force components in x and y directions, respectively.
Continuity and Boundary Conditions
Te surface between surrounding rock and primary support ζ(� exp i θ � σ) is assumed to be full contact.Te corresponding stress and displacement components are equal, respectively [18,19,25].From equations ( 9) and ( 13), the interface continuity conditions can be written as follows: Because the waterproof layer between primary support and secondary lining cannot bear the shear force, this interface (ζ � R 1 σ) is assumed to be slip contact.Te corresponding normal displacement and stress are equal, respectively, and the shear stress is equal to 0. From equation (12), the displacement continuity condition can be expressed as follows: From equations ( 11) and ( 12), the stress continuity condition can be expressed as follows: Te inner boundary of secondary lining (ζ � R 2 σ) is free; the radial stress and tangent stress are equal to 0. From equation ( 12), the stress boundary condition is as follows:
Solution Process of Stress Function
By applying series solution to solve unknown coefcients a j , b j , d j , f j , g j , h j , p j , q j , s j , and t j of stress functions, the stress and displacement components can be obtained.
Substituting stress functions into continuity and boundary conditions and equaling multipliers of the same order of variables, 2n + 1, 2n + 1, n + 1, n, 2n + 1, and 2n + 1 equations are obtained from equations ( 14)- (19), respectively.Tus, a total of 10n + 5 equations is obtained.But there is a total of 10n + 6 unknown coefcients, and an equation must be added.
The displacement of the inner boundary of surrounding rock caused by tunnel excavation can be written as in equation (22) [25]. The displacement at infinity caused by tunnel excavation is equal to 0, that is, the constant term on the right side of equation (22) is 0. Since the minimum positive exponent term of ϕ0′(σ) is σ², the maximum negative exponent term of ω(σ)/ω′(σ) is σ⁻². The supplementary equation is obtained from this condition.
Results and Discussion
Taking the double-line railway composite lining tunnel in grade IV surrounding rock as an example, the analytical distributions of stress and displacement along boundaries are given and compared with the numerical distributions.Te primary support is made of C25 concrete with a thickness of 0.25 m, and the secondary lining is made of C35 concrete with a thickness of 0.45 m.Te material parameters of surrounding rock, primary support, and secondary lining are as follows: Young's elastic modules: +0.0290ζ 4 + 0.0105ζ 5 − 0.0152ζ 6 + 0.0083ζ 7 − 0.0024ζ 8 − 0.0002ζ 9 + 0.0001ζ 10 − 0.0004ζ 11 + 0.0009ζ 12 and R 1 � 0.9632 and R 2 � 0.8975.Te composite lining after transformation is shown in Figure 2.
Comparison of Analytical Solution with Numerical
Solution.In order to verify the correctness of the above analytical solution, the fnite element software ANSYS is used for numerical simulation, and the analytical solution of stress and displacement is compared with the numerical solution.Te composite lining adopts the transformed shape.Because the structure and load are symmetrical, the left half structure is adopted.Te size of the plane strain model is 180 m × 360 m, far greater than the size of the tunnel.Te upper, left, and lower boundaries are free, and normal constraints are imposed on the right boundary.Te upper and lower boundaries apply pressure of 0.148762 Mpa, and the left boundary applies pressure of 0.0297524 Mpa.Te above measures ensure that the model, boundary conditions, and loads in the numerical solution are consistent with those in the analytical solution, thus ensuring the comparability of the two solutions.Because the model is large and the mesh is dense, the fnite element mesh of surrounding rock near the tunnel is shown in Figure 3(a), with a total of 24516 units and 49573 nodes.Te primary support grid is shown in Figure 3(b), with 1140 elements and 2286 nodes.Te secondary lining grid is shown in Figure 3(c), with 1484 elements and 3333 nodes.Te contact surfaces of surrounding rock, primary support, and secondary lining are full contact and slip contact, respectively.Slip contact is realized by establishing contact pairs, which are composed of target surface and contact surface.Te contact type is nonseparation contact, and the friction coefcient is zero.
Figures 4(a) and 4(b) show the contours of the stress σx and the displacement ux around the tunnel, respectively. It can be seen that the stress σx changes obviously in the composite lining and gradually approaches the applied load of 0.148762 MPa at a distance from the tunnel in the surrounding rock. The displacement ux changes consistently. The changes of stress and displacement of the surrounding rock are mainly concentrated near the tunnel.
It is convenient to read the calculation results under the rectangular coordinate system in the fnite element software ANSYS.In order to clearly see the diference between the numerical solution and the analytical solution, as an example, Figures 5(a) and 5(b) show the numerical solution and the analytical solution of the stress and displacement of the inner boundary of secondary lining (ρ � 0.8975).A positive angle α turns counterclockwise from the positive xaxis to the positive y-axis (Figure 1(a)).Te analytical solution is calculated by taking n � 100 in this study.From Figure 5(a), it can be seen that the numerical solution and the analytical solution of stresses σ x is zero, satisfying the boundary condition.From Figure 5(b), it can be seen that the numerical solution and the analytical solution of the displacement u L 2 y are almost identical.While, there is a small rigid body translation for the displacement u L 2 x because the fnite element model has no constraints in the vertical direction.When α � 0 °and 180 °, the displacement u L 2 y is zero, satisfying the symmetry condition.
Analytical Distributions of Stress and Displacement along Boundaries. The analytical distributions of stress and displacement along boundaries are given in order to verify the continuity conditions and boundary conditions and to find the maximum stress and displacement.
On the interface between surrounding rock and primary support (ρ = 1), the radial stresses and displacements of the inner boundary of surrounding rock and the outer boundary of primary support are shown in Figures 6(a) and 6(b), respectively. It can be seen that the corresponding stress and displacement are equal, satisfying the full contact condition. The radial stress is all compressive, and the maximum value of 0.146 MPa occurs at point A (Figure 2).
On the interface between primary support and secondary lining (ρ = 0.9632), the radial stresses and displacements are shown in Figure 7. From these results, it can be seen that the slip contact condition is satisfied. The radial stress is compressive stress, and the maximum value of 0.108 MPa occurs at point B (Figure 2). The load sharing ratio concerned in the design, that is, the maximum normal stress ratio of the outer boundary between the secondary lining and the primary support, is 0.74. The radial displacement change trend is consistent. The maximum settlement, uplift, and peripheral displacement occur at the vault, bottom, and waist, respectively. On the vertical axis, the shear stress and tangential displacement are equal to zero, satisfying the symmetry condition.
Figure 9 shows the tangential stress of the inner boundary of surrounding rock (ρ = 1), the inner and outer boundaries of primary support (ρ = 1 and ρ = 0.9632), and the inner and outer boundaries of secondary lining (ρ = 0.9632 and ρ = 0.8975), respectively. The tangential stress σ_θ^R of the inner boundary of surrounding rock is the smallest. For the composite lining, the maximum compressive stress and tensile stress occur at the inner boundary of primary support. The maximum compressive stress is 1.425 MPa at point C (Figure 2), and the maximum tensile stress is 0.472 MPa at the bottom. The maximum values of compressive stress and tensile stress are much smaller than the allowable compressive design strength of 13 MPa and tensile design strength of 1.3 MPa of C25 shotcrete [41]. The tangential normal stress σ_θ^L2 of the secondary lining is compressive stress, and the maximum compressive stress is 1.338 MPa at point D (Figure 2) of the inner boundary of secondary lining, which is much smaller than the allowable compressive stress of 13 MPa of C35 concrete [41]. Therefore, the thickness of primary support and secondary lining can be appropriately reduced.
Conclusion
According to the complex variable function method, the analytical solutions for stress and displacement of the composite lining tunnel are obtained and compared with the numerical solutions obtained by finite element software; the results are in good agreement. The main conclusions are as follows:
(1) On the contact surface between surrounding rock and primary support, the corresponding radial stress, shear stress, radial displacement, and tangential displacement of the inner boundary of surrounding rock and the outer boundary of primary support are equal, respectively, which satisfies the full contact condition. On the contact surface between primary support and secondary lining, the corresponding radial stress and displacement of the inner boundary of primary support and the outer boundary of secondary lining are equal, respectively, and the shear stress is zero, which satisfies the slip contact condition. The radial stress and shear stress of the inner boundary of secondary lining are both zero, which satisfies the stress boundary condition. The shear stress and horizontal displacement on the vertical axis are zero, satisfying the symmetry condition.
(2) The maximum normal stress ratio (load sharing ratio) of the outer boundary between the secondary lining and the primary support is 0.74. The maximum compressive and tensile tangential stresses occur at the inner boundary of primary support, and the maximum compressive tangential stress of secondary lining occurs at the arch foot of the inner boundary. Since they are far less than the allowable stress of concrete, the thickness of primary support and secondary lining can be appropriately reduced from an economic point of view.
(3) The radial displacement change trend of surrounding rock, primary support, and secondary lining is consistent. The maximum settlement, uplift, and peripheral displacement occur at the vault, bottom, and waist, respectively. Therefore, during the design and construction of the tunnel, attention should be paid to these locations.
Subsequently, we will combine the on-site monitoring of specific projects to verify the applicability of our solutions, so as to better guide safe and economical tunnel design.
and the functions φ0(ζ) and ψ0(ζ) are analytic outside the unit circle in the ζ plane and can be written as series expansions. The stress functions ϕ2(ζ) and ψ2(ζ) of primary support are analytic in the annular region with radius R1 ≤ ρ ≤ 1 in the ζ image plane. The stress functions ϕ3(ζ) and ψ3(ζ) of secondary lining are analytic in the annular region with radius R2 ≤ ρ ≤ R1 in the ζ image plane. They can be written as the following series expansions, introduced for simple expression. ϕ(ζ) and ψ(ζ) are the stress functions. The constant κ = 3 − 4μ for the plane strain problem, the shear modulus G = E/2(1 + μ), and E and μ are Young's modulus and Poisson's ratio, respectively.
Figure 1: Conformal mapping of the composite lining tunnel. (a) Schematic diagram of the tunnel in the physical z plane. (b) Three concentric circles in the image ζ plane.
The radial stresses and displacements of the inner boundary of primary support and the outer boundary of secondary lining are shown in Figures 7(a) and 7(b), respectively. The corresponding radial stress is equal, while the shear stress is zero. The corresponding radial displacement is equal, while the corresponding tangential displacement is not equal.
Figure 6: Distributions on the interface between surrounding rock and primary support. (a) Stresses. (b) Displacements.
Figure 7: Distributions on the interface between primary support and secondary lining. (a) Stresses. (b) Displacements.
|
2023-01-26T16:03:01.250Z
|
2023-01-23T00:00:00.000
|
{
"year": 2023,
"sha1": "f3604151940b8b00697e505047a2ad5ce6652561",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/mpe/2023/7467290.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d621e5c728e6412b39cf2f98d1fad23e507083fc",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
}
|
224997640
|
pes2o/s2orc
|
v3-fos-license
|
Relationship between Contextual Factors, Psychosocial Factors and Hygienic Practices of Tribes in Liberia
This study determined the relationship between contextual factors, psychosocial factors and hygienic practices of the tribes of Liberia. Data were collected from six randomly selected tribes from three regions in Liberia. Convenient, stratified and random sampling techniques were employed to survey 390 household heads who were aged 15 years and above. A correlational design was used and data gathered were analyzed utilizing inferential statistics. Majority of the respondents were males aged 40 years and above and of the middle and high income category. Regression analysis revealed self-efficacy, social environment and cognitive factors as predictors of hygienic practices in terms of disposal of wastes, while self-efficacy, cognitive, policy implementation and cultural identity predicted handwashing. Additionally, social environment, self-efficacy, physical environment, cognitive and policy implementation predicted bathing practices. Respondents who had high income, high educational attainment and of the tribes C and E had a better practice on waste disposal. Those with high income and of the tribes D and C had a better handwashing practice, while those who had high income and of the A and E ethnic groups exhibited better bathing practices.
Introduction
Hygienic practices are critical to human health and well-being. Two aspects of hygienic practices that demand much attention are sanitation practices and personal hygiene practices. Improper hygienic practices cause infectious diseases which pose huge public health challenges particularly in low and middle-income countries. Notably, a substantial portion of the global infectious disease burden is due to poor hygienic practices (Aunger et al., 2016;Pruss-Ustun et al., 2014). Unless adequately addressed, the world's population will continually face the unchartered burden of hygiene-related diseases.
According to 2012 estimates, inadequate hygienic practices caused 842,000 diarrheal deaths, accounting for 1.5% of the global disease burden (Pruss-Ustun et al., 2014). An estimated 800,000 children under 5 die from diarrheal disease each year, and India alone records about a quarter of these deaths, an estimated 200,000 (Kotloff et al., 2013). India accounts for about half of the global diarrhea burden among children under 5 (WHO, 2014).
One of the most effective elements of personal hygiene is hand washing. Crucially, only 19% of the world's population washes hands with soap following usage of a sanitary facility or contact with excreta. The rates of hand washing in middle-income and high-income countries following the above-mentioned exposure are 14% and 43%, respectively. However, studies have shown that about 40% of diarrheal cases are prevented through hand washing with soap (Ensink, 2015).
Despite efforts by African governments to improve sanitation in rural settings, sanitary conditions still remain deplorable among informal settlements. In the rural settings of Tanzania, Ethiopia and Sudan, 93%, 81% and 76% of residents, respectively, lack access to improved sanitation. Moreover, around 50% of urban populations in Kenya, Mali and Liberia lack access to basic sanitation (UNICEF and WHO, 2012).
Global targets to provide adequate water, sanitation and hygiene (WASH) coverage were set under the Millennium Development Goals (MDGs) and the Sustainable Development Goals (SDGs) (Roche, Bain and Cumming, 2017). MDGs, which ended in 2015, met their target for access to safe drinking water in 2010, but not for sanitation. Sub-Saharan Africa (SSA) records one of the lowest rates of WASH coverage worldwide, with 32% of its population lacking access to improved water at the end of the MDG (JMP, 2015). Also, only 14% of the people in SSA wash hands with soap following defecation and before eating .
Contextual factors such as policy implementation can greatly influence hygiene behavior. In 2002, Ethiopia started its "Health Extension Program (HEP) "emphasizing preventive and curative primary health-care services through promoting sanitation and hygiene. The Ethiopian government, using salaried health workers and voluntary community health promoters was successful in motivating rural households to construct latrine and improve hygiene. The success story of HEP is mostly due to the implementation strategy employed (Newborne and Liisanantti, 2013).
Researchers have documented the impact of psychosocial factors on hygienic practices. In rural Bangladesh, although improved knowledge and awareness of health and environment-related issues enhance hygiene behavior, psychosocial factors such as traditional beliefs and lack of interest in attending cluster meetings influence safe hygiene behavior (Akter and Ali, 2014). Mukadi (2016) found cultural values to influence adoption of WASH practices among 4000 Kenyan households.
Knowledge and self-efficacy have been shown to influence WASH practices. Sonego and Mosler (2014) found self-efficacy to be a predictor of latrine ownership and cleanliness in rural Burundi. Another study found lack of education, under-use of sanitary facilities and rampant roaming of pigs to cause cysticercosis in most parts of Africa (Thys et al., 2016).
Liberia's scenario regarding WASH practices is alarming. In 2013, 35% of rural inhabitants and 65% of urban households had access to improved water. Considering sanitation, 12% of rural households had access to improved sanitation facilities, while only 40% was for urban households. Rural dwellers reported very low handwashing rates after handling rubbish (16%), before food preparation (9%) and following handling baby excreta or diapers (6%). As for urban dwellers, 19% wash hands after handling rubbish, before food preparation (11%) and after handling baby feces or diapers (6%). Due to inadequate hygienic practices, 39% of rural households in Liberia have diarrheal cases, with children aged 0-5 years mostly affected, while 23% of urban settlements have diarrhea (WASH Liberia Baseline Study, 2013).
This study therefore is concerned with access to and practices on sanitation and hygiene among the tribes of Liberia. Further, it sought to determine whether contextual and psychosocial factors predict hygienic practices of the tribes of Liberia. The 16-year civil unrest that ended in 2005 and the Ebola outbreak in 2014 ravaged Liberia's health sector and economy, apparently tampering with access to and practices on sanitation and hygiene in the country. After these turbulent periods, there is a need to conduct extensive research on the determinants of hygienic practices of the tribes of Liberia. Hence, this study sought to determine which among the independent variables most significantly predicted hygienic practices and whether there was a significant difference in hygienic practices when income level, educational level, sex, age and tribe are considered.
Research methodology
This section presents the methodologies employed in gathering and analyzing the data for the study. The section includes the study design, population and sampling techniques, instrumentation, data gathering procedures, ethical considerations and analysis of data.
Study Design
This study employed a quantitative, correlational research design to determine the relationship between contextual factors, psychosocial factors and hygienic practices of the tribes. A quantitative design was used since the study systematically acquired quantifiable data, analyzed the data and described the associations among the variables. Relationships were described between the independent variables and the dependent variable, and between the moderating variables and the dependent variable. A correlational design was used to identify predictive relationships among variables, without manipulating the variables. To this end, the research determined which variables were related.
Population and Sampling Techniques
This study targeted the household heads of six (6) indigenous groups of Liberia. The household heads surveyed included a father, a mother, a sibling or any relative who was in charge of the household. The study employed stratified, random and convenience sampling techniques. First, stratified sampling was used to group the tribes into three strata: the northern, western and southeastern areas. Second, random sampling was used to select two tribes from each stratum, making a total of six tribes as the study population. The population to be sampled in each stratum was defined by its number of households. A total of 390 household heads, purposively chosen, were surveyed. The distribution of the sample size across the tribes was done using ratio and proportion. The selection of household heads from each of the six tribes was done using simple random sampling, where the researcher chose one household after the other until the required number of respondents from each tribe was obtained. Each tribal community was conveniently chosen based on access to a road.
In determining the sample size of the research, Slovin's formula, n = N / (1 + Ne²), was utilized, where N is the population size and e is the margin of error (an illustrative calculation is sketched below). The criteria for inclusion were: a) household head of one of the six randomly selected indigenous tribes (Bassa, Gola, Grebo, Lorma, Mandingo and Vai), b) age 15 years and above, c) English-speaking, and d) ability to give informed consent to participate.
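Because the manuscript does not report the population size or margin of error that were plugged into the formula, the following is only a minimal sketch of how Slovin's formula yields a sample size; the values of N and e are hypothetical placeholders, not figures taken from the study.

```python
import math

def slovin_sample_size(population_size: float, margin_of_error: float) -> int:
    """Slovin's formula: n = N / (1 + N * e**2), rounded up to the next whole respondent."""
    n = population_size / (1.0 + population_size * margin_of_error ** 2)
    return math.ceil(n)

# Hypothetical inputs: a sampling frame of 16,000 household heads and a 5% margin of error.
# These are placeholder values; the study does not report N or e.
print(slovin_sample_size(16_000, 0.05))  # 391, close to the 390 respondents surveyed
```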
Instrumentation
The questionnaire was designed based on a review of literature and the knowledge of the researcher as a doctoral student in Public Health. The questionnaire was also forwarded to experts for validation. After it was validated, the Adventist University of the Philippines (AUP) Center for Graduate Studies (CGS) gave the researcher a letter of endorsement to conduct the pilot study. The pilot study was conducted on the Kpelleh tribe, the largest tribe in Liberia. After the reliability test confirmed the consistency of the instrument, the researcher received an endorsement from the Center for Graduate Studies to conduct the actual data gathering. The Kpelleh tribe was excluded from the actual data gathering.
The questionnaire had 125 items and was apportioned into three sections. The first section dealt with the demographic profile of the respondents, including income, educational attainment, sex, age and tribe. The second section assessed the independent variables (contextual factors and psychosocial factors). The third section assessed the dependent variable, hygienic practices, in terms of sanitation practices and personal hygiene practices (handwashing and bathing). Table 1 shows the distribution and retrieval of questionnaires from respondents, where 97.5% of the questionnaires were retrieved. This indicates that the tribes did well in filling in the questionnaires. The E ethnic group recorded the highest percentage of questionnaires retrieved (100.0%), followed by the B tribe (97.8%), the F tribe (97.7%) and the C tribe (97.6%). Some respondents, due to busy schedules (farming) and other obligations, were unable to fill in the questionnaires and return them to the researcher.
Pilot Study
In order to establish the reliability and validity of the research instrument, a pilot study was conducted on 75 respondents of the Kpelleh tribe in Sinyea Town, Suakoko District, Liberia. All respondents were chosen based on the set inclusion and exclusion criteria. Cronbach's alpha was utilized to determine the internal consistency of the instrument. Table 2 indicates the reliability results for each segment of the questionnaire.
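The study reports Cronbach's alpha only through Table 2, so the snippet below is merely a sketch of how the coefficient is typically computed from item-level responses; the item matrix is simulated and does not reproduce the pilot data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Simulated stand-in for one questionnaire segment (75 pilot respondents, 10 items).
rng = np.random.default_rng(0)
latent = rng.normal(size=(75, 1))
scores = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(75, 10))), 1, 5)
print(round(cronbach_alpha(scores), 2))
```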
Data Gathering Procedures
Upon receiving endorsement from the CGS, the researcher traveled to Liberia to begin the process of data gathering. Upon arrival, the researcher submitted the endorsement and proposal to the Research Department of the Ministry of Health (MOH), Republic of Liberia, for approval to administer the survey questionnaire. After reviewing the proposal, the MOH gave an approval letter but requested the researcher to submit the proposal to the ethics board of Liberia, the Institution Review Board (IRB), for ethical clearance. Afterwards, the researcher moved into the regions where the tribes resided. The researcher liaised with authorities to inform community members about the study through a town crier. Afterwards, the researcher was given permission to meet the respondents.
The researcher recruited and trained three research assistants to assist with the process of data gathering. The researcher explained the essence of the study to the research assistants, and they were adequately oriented on the entire data gathering procedure. After instructions were given on how to answer the questionnaire, participants answered it without pressure. After responding, the completed questionnaires were placed into envelopes. The process of data collection lasted for 10 weeks.
Ethical Considerations
The study observed research ethics, as the researcher obtained ethical clearance from the Institution Review Board (IRB) of Liberia for the conduct of the study. Before distributing the questionnaire to respondents, written consent was secured from each participant. The researcher informed the respondents of anonymity, confidentiality and the limits thereof. They were informed of their exclusive right to withdraw from the process at any point in time.
Because respondents were fully aware of the purpose, benefits and potential risk of the study, and had the right to decline from the study at will, there was no conflict of interest with the respondents or with any third party. The study was void of plagiarism and any other academic fraud, and did not conceal or misrepresent any facts or results discovered during the process of the study. In order to hide identity, participants were not required to write their names on the questionnaire and the tribal groups were assigned letters.
Analysis of the Data
Data analysis was done using the Statistical Package for the Social Sciences (SPSS). Frequency distributions and percentages were used to describe the demographic profile of respondents. Multiple regression was utilized to determine whether any of the independent variables predicted the hygienic practices. One-way ANOVA and t-tests were used to examine whether the hygienic practices of the tribes differed significantly when the moderator variables, namely age, sex, educational attainment, income level and tribal affiliation, were considered.
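The analyses were run in SPSS; purely for illustration, the sketch below reproduces the same three steps (multiple regression, independent-samples t-test and one-way ANOVA) in Python on simulated data, using hypothetical column names such as `self_efficacy` and `handwashing` that are assumptions rather than the study's actual variable labels.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Synthetic stand-in for the 390 survey responses; column names are hypothetical.
rng = np.random.default_rng(1)
n = 390
df = pd.DataFrame({
    "self_efficacy": rng.normal(3, 0.5, n),
    "cognitive": rng.normal(3, 0.5, n),
    "income_group": rng.choice(["low", "middle_high"], n),
    "tribe": rng.choice(list("ABCDEF"), n),
})
df["handwashing"] = 1.0 + 0.5 * df["self_efficacy"] + 0.2 * df["cognitive"] + rng.normal(0, 0.3, n)

# 1) Multiple regression: which factors predict a hygienic practice?
print(smf.ols("handwashing ~ self_efficacy + cognitive", data=df).fit().summary())

# 2) Independent-samples t-test: handwashing by income category
low = df.loc[df["income_group"] == "low", "handwashing"]
high = df.loc[df["income_group"] == "middle_high", "handwashing"]
print(stats.ttest_ind(low, high, equal_var=False))

# 3) One-way ANOVA: handwashing across the six tribes
print(stats.f_oneway(*[g["handwashing"].values for _, g in df.groupby("tribe")]))
```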
Analysis and Results
This section presents the analyses, results and the interpretation based on the statistical output and related literature.
Socio-demographics of Respondents
As shown in Table 3, this section presents the socio-demographics of the respondents in terms of age, sex, income level, educational level and tribal group. Of the total 390 respondents, the age range of 39 years and below accounted for 181 (46.4%), while the age range of 40 years and above accounted for the majority (207, 53.1%) of the respondents surveyed. There were two cases of missing data, accounting for 0.5%, because two of the respondents did not indicate their age range. In terms of sex distribution, 61.8% (241) were males and 37.9% (148) were females, while one case of missing data constituted 0.3%. When the respondents were classified by income level, more than half, 228 (58.5%), earned 6,000 Liberian Dollars (LD) and above per month (middle and high income), while 159 (40.8%) earned less than 6,000 LD per month (low income). In Liberia, about 40% of female workers are low income earners (earning under 6,000 LD per month), compared to 22% of their male counterparts (LISGIS, 2017). There were three cases of missing data, making up 0.8% of the respondents.
Respondents' income level was classified into two groups because the other categories of income were not comparable. In terms of tribal distribution, one tribe accounted for 13.8% of respondents, Tribe B recorded 44 (11.3%), and Tribe E accounted for the lowest proportion (7.9%).
Predictors of Hygienic Practices
This section answers the question regarding which of the independent variables predicted the hygienic practices. Regression analysis was used to determine which independent variables predicted the hygienic practices in terms of disposal of wastes, handwashing and bathing. Table 4 shows the variables that most significantly predicted the disposal of wastes. Three variables entered the regression model: self-efficacy, social environment and cognitive (knowledge), which together contributed 30.8% (R²-change of .308) to disposal of wastes. Moreover, the regression model shows that cognitive explains 3.8% (R²-change value of .038) of disposal of wastes. This implies that the better the cognitive factor, the better the disposal of wastes by the respondents. Sara and Graham (2014) agree with this finding, as they found good knowledge to be associated with latrine use in Tanzania. A related study found improved knowledge and awareness of health and environmental hazards to lead to proper sanitation practices (Akter and Ali, 2014). Another study also found improved knowledge of mothers to be associated with safe disposal of child feces (Azage and Haile, 2015). Furthermore, in Malawi, Chirwa et al. (2017) investigated pit latrine fecal sludge management and found that most people were not willing to pay for emptying services, but households with improved knowledge in one of the study areas showed a higher demand for pit emptying, at 84%. In fact, this area had the highest number of lined latrines among the three study areas.
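The R²-change values reported in Tables 4-6 correspond to the extra variance explained when a predictor enters a nested regression. The sketch below shows how such an increment is typically obtained from two nested models; the variable names (`waste_disposal`, `self_efficacy`, `social_env`, `cognitive`) are hypothetical stand-ins for the study's coded items.

```python
import statsmodels.formula.api as smf

def r2_change(data, outcome, base_predictors, added_predictor):
    """R²-change: R² of the larger model minus R² of the nested (smaller) model."""
    base = smf.ols(f"{outcome} ~ {' + '.join(base_predictors)}", data=data).fit()
    full = smf.ols(
        f"{outcome} ~ {' + '.join(base_predictors + [added_predictor])}", data=data
    ).fit()
    return full.rsquared - base.rsquared

# e.g. the increment attributable to 'cognitive' after self-efficacy and social environment:
# r2_change(df, "waste_disposal", ["self_efficacy", "social_env"], "cognitive")
```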
Predictors of handwashing
Self-efficacy is the first predictor of handwashing, as shown in Table 5. It has an unstandardized coefficient of .935, a t-value of 13.243 and a significance value of p = 0.000. Of the four predictors, self-efficacy contributes the most, 40.8% (indicated by an R²-change value of 0.408), to handwashing. This shows a significant positive relationship between self-efficacy and handwashing. The implication is that the higher the self-efficacy, the better the handwashing practice of respondents. According to Akter and Ali (2014), although improved knowledge and awareness of health and environment-related issues enhance hygiene behavior, psychosocial factors such as traditional beliefs, self-efficacy and lack of interest in attending cluster meetings influence safe hygiene behavior. Similar findings were reported in a study among 4,000 Kenyan households (Mukadi, 2016). Another study found handwashing practice to have a significant association with self-efficacy (Sarani, Balouchi, Masinaeinezhad, and Ebrahimitabs, 2014).
Elsewhere, a Malaysian study found low levels of knowledge, practice and self-efficacy to hinder proper handwashing. The study recorded significant associations of gender (p = 0.004), academic achievement (p = 0.038) and practices (p = 0.003) with self-efficacy in proper handwashing (Muhamad et al., 2017). As such, Oyibo (2012) has recommended increasing knowledge, practice and self-efficacy as a panacea for ensuring proper handwashing.
Moreover, a study investigated the determinants of hygiene habits of college students in New York. The results reveal that social norms and self-efficacy, rather than scientific knowledge, were predictors of hygiene habits among the students. Freshmen reported such behavior (80.4%) more than sophomores (71.9%), juniors (67.7%) or seniors (50%, p = .011) (Miko, Cohen, Conway, Gilman, Seward and Larson, 2012).
The next predictor of handwashing was cognitive. Table 5 shows that cognitive has a statistically significant positive relationship with handwashing (F = 103.022, t-value = 6.134, p = 0.000). It contributes 5.9% (indicated by the R²-change value of 0.059) to handwashing. The inference is that the higher the cognitive factor, the better the handwashing practice of the respondents.
The findings of Akter and Ali (2014) are in agreement with the result of this study. They found the hygiene behavior of respondents to be mainly driven by improved knowledge and awareness of health and environmental hazards. Another study (Dobe, Mandal and Jha, 2013) indicated the level of education, among other factors, as a significant predictor of handwashing practice. Additionally, a Ghanaian study reported lack of knowledge on sources of contamination/cross-contamination to be associated with irregular handwashing during food preparation following coughing or sneezing (Kunadu, Ofosu, Abeogye and Tano-Debrah, 2016).
The third predictor that entered the regression model was policy implementation. Table 5 shows that policy implementation has a significant positive relationship with handwashing (F = 103.022, t-value = 5.682, p = 0.000, R²-change = .042). As indicated by the R²-change value, policy implementation contributes 4.2% to handwashing. This implies that the better the policy implementation strategies, the better the handwashing practice of the respondents.
Mukadi (2016) confirms this finding, having found that community-level planning as part of implementation significantly influenced the adoption of WASH practices (p = 0.008), as did multi-level policy implementation (p < 0.005). In Ethiopia, sustained implementation of its "Health Extension Program (HEP)" motivated rural householders to construct latrines and improve personal hygienic practices such as handwashing (Newborne and Liisanantti, 2013).
Strategies to halt the spread of the Ebola virus included a strict policy on handwashing with soap and alcohol, sanitary funeral practices, case isolation and contact-tracing with quarantine (Pandey et al., 2014). Nettey et al. (2016) concur with these findings, reporting that strict implementation improved hand hygiene, among other measures, thereby combating the spread of Ebola virus disease.
The fourth variable that predicts handwashing is cultural identity.

Predictors of bathing

Table 6 shows the variables that significantly predict bathing. Five variables entered the regression model, namely: social environment, self-efficacy, physical environment, cognitive and policy implementation. There is a statistically significant relationship (F = 51.087, p = 0.000) between the five variables and bathing. On the whole, they contribute 39.9% (indicated by an R²-change of 0.399) to bathing. The first variable that entered the regression model is social environment. Table 6 shows that it has a statistically significant positive relationship with bathing (unstandardized coefficient = 0.537, t-value = 8.419, p = 0.000). Social environment contributes 29.8% (indicated by the R²-change of .298) to bathing. This implies that the better the social environment, the better the bathing practice. However, the researcher did not find any literature on the effect of social environment on bathing.
The next predictor of bathing was self-efficacy. Table 6 shows that self-efficacy has a significant positive relationship with bathing (unstandardized coefficient = .332, t-value = 6.410, p = 0.000). As indicated by the R²-change of 0.062, self-efficacy contributes 6.2% to bathing. The implication is that the higher the self-efficacy, the better the bathing practice of the respondents.
Physical environment is the third variable that predicts bathing. It has a significant positive relationship with bathing (unstandardized coefficient = 0.197, t-value = 3.380, p = 0.001). Furthermore, it contributes 2.2% (indicated by the R²-change value of .022) to bathing practice. This implies that the better the physical environment (source of water), the better the bathing practice of the respondents.
The fourth variable that predicts bathing is cognitive. Table 6 shows that cognitive has a significant negative relationship with bathing (coefficient B = -.026, t-value = -2.832, p = 0.000). As indicated by the R²-change value of 0.011, cognitive explains 1.1% of bathing. This shows that the higher the cognitive factor, the less frequently respondents bathe, which further points to a gap between knowledge and practice.
Policy implementation is the last variable that entered the regression model. It has an unstandardized coefficient of .097, a t-value of 2.067 and a significant p-value of 0.039. As per the R²-change value of 0.007, policy implementation explains 0.7% of bathing. The contribution seems small, but it is significant. These values indicate that policy implementation has a statistically significant relationship with bathing. This implies that the better the policy implementation strategies, the better the bathing practice. To the knowledge of the researcher, there is no literature on the relationship between policy implementation and bathing.
Difference in Hygienic Practices by Demographic Profile
This section determines whether there is a difference in hygienic practices when income level, educational level, sex, age and tribe are considered. Detailed results are presented using t-tests and one-way ANOVA.
Income
A t-test was conducted to identify the difference in hygienic practices by income category. Because the other categories of income level were not comparable, the researcher categorized the respondents' monthly income into two groups: below 6,000 Liberian dollars (LD) as low income, and 6,000 LD and above as middle and high income. Table 7 shows that income category has a significant effect on hygienic practices in terms of waste disposal (t-value = -4.502, p = 0.000). Specifically, respondents with higher income have a higher mean (3.92) and standard deviation (0.40) compared to those with low monthly income, whose mean and standard deviation are 3.76 and 0.28, respectively. This indicates that respondents with higher income are more associated with sanitary disposal of wastes compared to those with low income. This further shows that income is key in determining hygienic practices.
This result agrees with a study by Abubakar (2017), who reported a significant relationship between type of sanitation facilities and household income (χ² = 23,467.4, p < 0.001); about 78.9% of those using modern sanitation facilities were from the richest households. A similar finding was reported in a Ghanaian study, which found the use of unimproved sanitation facilities and engagement in open defecation to increase with decreasing wealth (Adams, Boateng and Amoyaw, 2016). A previous study confirms this result in finding that higher-wealth householders had more than twice the tendency of using improved sanitation facilities compared to lower-wealth householders (odds ratio: 2.3) (Yohannes, Workicho and Asefa, 2014). En and Gan (2011) found higher socioeconomic status to influence the use of improved sanitation.
According to Azage and Haile (2015), wealthy households are associated with sanitary disposal of child feces. Their finding is consistent with a study that found households in higher wealth quintiles to have decreased prevalence odds of cysticercosis due to sanitary disposal of feces (Carabin et al., 2015).
In the same vein, Akter and Ali (2014) found poverty to deprive many households of owning a latrine, eventually causing them to engage in open defecation or to share a latrine with neighbors. This finding concurs with a study by Sara and Graham (2014) that found low income to impede households from upgrading sanitation facilities in Tanzania. A similar trend was reported in a recent study where socioeconomic status significantly affected WASH practices (p < 0.01) (Raihan et al., 2017).
Table 7 also indicates that income category has a statistically significant effect on handwashing (t-value = -5.919, p = 0.000). Respondents with higher income had a higher mean (3.03) and standard deviation (0.62) than those with low income (mean = 2.74, SD = 0.36). This implies that respondents with higher income are associated with better handwashing practice compared to those with low income.
A report from the WASH Liberia Baseline Survey (2013) supports this finding. Accordingly, rural dwellers reported very low handwashing rates after handling rubbish (16%), before food preparation (9%) and after handling baby excreta or diapers (6%), whereas among urban dwellers, 19% washed hands after handling rubbish, 11% before food preparation and 6% after handling baby feces or diapers. Rabbi and Dey (2013) agree with this finding, reporting higher per capita income as a significant predictor of handwashing. Another study also reported that the rates of handwashing following the use of a sanitary facility are 14% in middle-income countries and 43% in high-income countries.
Similarly, income category has a significant effect on hygienic practices in terms of bathing (t-value = -2.443, p = 0.015). In particular, respondents with higher income have a higher mean (3.47) and standard deviation (0.38) than those with low monthly income, whose mean and standard deviation are 3.38 and 0.31, respectively. This implies that respondents with higher income tend to exhibit better bathing practices compared to those with low income.
Educational Attainment
A t-test was performed to identify the difference in hygienic practices by educational level. Educational attainment was placed into two major categories: elementary and high school levels. Table 8 shows that respondents' hygienic practice in terms of disposal of wastes differs significantly (t-value = -2.512, p = 0.012) between the groups who finished elementary and high school levels. This indicates that there is a significant difference in waste disposal between respondents with elementary education and those with high school education. Further, it implies that those with high school education tend to exhibit better waste disposal compared to those with elementary education. A Kenyan study confirms this finding, having found type of sanitation facility and educational level to be related (Koskei, Koskei, Koske and Koech, 2013). Abubakar (2017) further confirms this result, reporting a significant relationship between type of sanitation facility and educational attainment (χ² = 7177.1, p < 0.01). The study by Azage and Haile (2015) found mothers with higher education to have 2.16 times increased odds (AOR = 2.16, 95% CI: 1.25-3.72) of practicing sanitary disposal of child feces compared to mothers with no education. Sara and Graham (2014) concur with this finding, as they found education to be significantly related to latrine use in Tanzania. However, En and Gan (2011) disagree, arguing that maternal literacy does not differ with the use of improved sanitation among children.
Sex
A t-test was performed to identify the difference in hygienic practices by sex. Table 8 indicates that the difference in respondents' hygienic practice in terms of disposal of wastes is not statistically significant (t-value = 0.285, p = 0.776). This means that there is no significant difference in disposal of wastes between males and females; in other words, males and females basically carry out the same practices on disposal of wastes. This result contradicts the finding of Adams, Boateng and Amoyaw (2016), who found gender and the type of sanitation facility used to be related. Another study agrees with their result, finding older women more likely to use latrines than men (Jenkins, Freeman and Routray, 2014). Also, a study by Thys et al. (2015) found men to be more hesitant than women to abandon open defecation: men, by virtue of the patrilineal system, were responsible for building toilets but mostly preferred open defecation.
Also, Table 9 indicates that hygienic practices in terms of handwashing (t-value = -0.446, p = 0.656) show no significant difference between males and females. This implies that there is no significant difference in handwashing practices between males and females. This finding is supported by Borchgrevink, Cha and Kim (2013), who reported that gender has no relationship with differences in handwashing rates. Similarly, Table 9 shows that hygienic practices in terms of bathing have no significant difference by sex (t-value = -0.360, p = 0.719). This means that there is no significant difference in bathing between males and females.
Age

A t-test was conducted to identify the difference in hygienic practices by age. Table 10 shows that respondents' waste disposal does not differ significantly by age (t-value = -1.116, p = 0.265). This means that there is no significant difference in waste disposal between the age groups of 39 years and below and 40 years and above. In other words, age group does not determine hygienic practices in terms of waste disposal.
This finding agrees with a Tanzanian study that found a statistically non-significant association between age and latrine adoption (Sara and Graham, 2014). By contrast, a Nigerian study reported a significant difference (p = 0.002) in age distribution in terms of adherence to hygiene and sanitation practices (Fafunwa et al., 2017). Moreover, mothers or caregivers whose child was 48-59 months of age (AOR = 2.21, 95% CI: 1.82-2.68) were more likely to practice sanitary disposal of child feces than mothers or caregivers whose child was less than 12 months old (Azage and Haile, 2015). Also, Table 10 shows that age does not have a significant effect on hygienic practices in terms of bathing (t-value = 0.100, p = 0.921). This means that there is no difference in bathing habits between respondents in the age groups of 39 years and below and 40 years and above. In other words, age difference does not determine hygiene behavior in terms of bathing.
Tribe
One-way ANOVA was conducted to identify the difference in hygienic practices by tribe. As shown in Table 11, there is a significant difference in disposal of wastes by tribal group (F = 13.189, p = 0.000). In particular, respondents from Tribe C had the highest mean of 4.07 (SD = 0.32), followed by Tribe E with a mean of 3.93 (SD = 0.31) and Tribe F with a mean of 3.85 (SD = 0.27). Tribe D recorded the lowest mean of 3.61 (SD = 0.43). This means that Tribe C has the best practice on waste disposal, followed by Tribes E and F, while Tribe D has the least practice on disposal of wastes. This finding is supported by a Nigerian study that reported a statistically significant relationship between household ethnicity and type of sanitation facility (Abubakar, 2017). Based on the results in Table 11, respondents' handwashing practice significantly differs among the tribes (F = 17.013, p = 0.000). Specifically, Tribe D has the highest mean of 3.14 (SD = 0.48), followed by Tribe C with a mean of 3.09 (SD = 0.53) and then Tribe B with a mean of 3.0 (SD = 0.55). Tribe F records the lowest mean of 2.44 (SD = 0.48), followed by Tribe E (mean = 2.90, SD = 0.50). This means that Tribe D has the best practice on handwashing, followed by Tribes C and B. On the other hand, the least practice on handwashing was recorded by Tribes F and E.
Similarly, Table 11 presents a significant difference in respondents' bathing by tribal group (F = 31.031, p = 0.000). In particular, Tribe A and Tribe E had the highest means of 3.69 (SD = 0.30) and 3.69 (SD = 0.32), respectively. These were followed by Tribe F with a mean of 3.38 (SD = 0.30) and Tribe D with a mean of 3.33 (SD = 0.28). Tribe C recorded the lowest mean of 3.20 (SD = 0.38). This implies that Tribes A and E had the best practice on bathing, followed by Tribe F; by contrast, Tribe C recorded the least practice on bathing. This result corresponds with an Indian study that found the Yanadi tribe to be engaged in good hygienic practices such as bathing (Dalibandhu, 2016).
Conclusions and Recommendations
The study shows that self-efficacy, social environment and cognitive factors predicted disposal of wastes, while self-efficacy, cognitive, policy implementation and cultural identity were predictors of handwashing. Additionally, social environment, self-efficacy, physical environment, cognitive and policy implementation predicted bathing practices. This indicates that the better the self-efficacy, social environment, cognitive factors, policy implementation and cultural identity, the higher the likelihood of engaging in improved hygienic practices. Therefore, there is a need to improve the social and physical environments, self-efficacy, cognitive factors as well as policy implementation strategies in order to realize improved hygienic practices among the indigenous people of Liberia.
Hygienic practices did not differ significantly by gender or age. However, poorer hygienic practices were observed among respondents from poorer households. Hygienic practices differed significantly by tribal affiliation, while educational attainment was associated with a significant difference only in disposal of wastes. Therefore, there is a need to improve socioeconomic status, to address unhygienic cultural practices and to conduct health promotion programs in order to create behavioral changes leading to improved hygienic practices.
Objective Bayesian analysis for spatial Student-t regression models
The choice of the prior distribution is a key aspect of Bayesian analysis. For the spatial regression setting, a subjective prior choice for the parameters may not be trivial; from this perspective, using the objective Bayesian analysis framework, a reference prior is introduced for the spatial Student-t regression model with unknown degrees of freedom. The spatial Student-t regression model poses two main challenges when eliciting priors: one for the spatial dependence parameter and the other for the degrees of freedom. It is well known that the propriety of the posterior distribution under objective priors is not always guaranteed, whereas the use of proper prior distributions may dominate and bias the posterior analysis. In this paper, we show the conditions under which our proposed reference prior yields a proper posterior distribution. Simulation studies are used in order to evaluate the performance of the reference prior relative to a commonly used vague proper prior.
Introduction
Geostatistical data modeling (Cressie, 1993) has now virtually permeated all areas of epidemiology, hydrology, agriculture, environmental science and demographic studies, just to name a few. Here, the prime objective is to account for the spatial correlation among observations collected at various locations, and also to predict the values of interest at non-sampled sites. In this paper we focus on a fully Bayesian approach to analyzing spatial data, whose main advantage is that parameter uncertainty is fully accounted for when performing prediction and inference, even in small samples (Berger et al., 2001). However, elicitation of priors for the correlation parameter of a Gaussian process is a non-trivial task (Kennedy and O'Hagan, 2001).
The problem of inference and prediction for spatial data with Gaussian processes using objective priors has received attention in the recent literature. It started with Berger et al. (2001), who developed an exact non-informative prior for the unknown parameters of Gaussian random fields by using exact marginalization in the reference prior algorithm (see Berger and Bernardo, 1991). Further, Paulo (2005) and Ren et al. (2013) generalized the previous results to an arbitrary number of parameters in the correlation function.
After the precursor proposal of De Oliveira (2007), which allowed the inclusion of measurement error in reference prior elicitation, other extensions from this perspective were proposed in the literature; see, for instance, Ren et al. (2012) and Kazianka and Pilz (2012).
In the context of the Student-t distribution, Zellner (1976) was the first to present a Bayesian and non-Bayesian analysis of a linear multiple regression model with Student-t errors, assuming a scalar dispersion matrix and known degrees of freedom. An interesting result of that paper is that inferences about the scale parameter of the multivariate-t distribution can be made using an F-distribution rather than the usual χ² (or inverted χ²) distribution. Later, Fonseca et al. (2008) developed objective Bayesian analyses based on the Jeffreys-rule prior and on the independence Jeffreys prior for linear regression models with independent Student-t errors and unknown degrees of freedom. This procedure allowed a non-subjective statistical analysis with adaptive robustness to outliers and full account of the uncertainty. Branco et al. (2013) introduced an objective prior for the shape parameter under the skew-t distribution proposed by Azzalini and Capitanio (2003).
More recently, Villa and Walker (2014) constructed an objective prior for the degrees of freedom of the univariate Student-t distribution when this parameter is taken to be discrete.
Even though some solutions have been proposed in the literature to deal with the problem of objective priors under the Student-t distribution, to the best of our knowledge there are no studies conducting objective Bayesian analyses under the Student-t spatial regression model. Following Berger et al. (2001), we introduce a reference prior based on exact marginalization and derive the conditions under which it yields a valid posterior distribution.
Moreover, the independence Jeffreys and Jeffreys-rule priors are derived and analyzed.
As in Berger et al. (2001), we show that the Jeffreys priors suffer from many drawbacks, while the proposed reference prior produces more accurate estimates with good frequentist properties.
The paper is organized as follows. In Section 2, we describe the Student-t spatial regression model as well as the family of covariance functions that will be considered. In Section 3, a general form of improper priors is presented and the reference prior is provided along with the conditions for its validity. In Section 4, model selection criteria are presented in order to evaluate the competing Bayesian models. In Section 5, a simulation study is performed to assess the frequentist properties of the Bayesian estimates under different priors. Finally, a brief discussion is presented in Section 6.
The Student-t Spatial Regression Model
Let Y(s) denote the response at location s ∈ D_s, where D_s is a continuous spatial domain in IR². We assume that the observed data y = (y(s_1), . . . , y(s_n))⊤ are a single realization of a Student-t stochastic process {Y(s) : s ∈ D_s} (Palacios and Steel, 2006; Bevilacqua et al., 2020). Thus, if Y follows a multivariate Student-t distribution with location vector Xβ, scale matrix Σ and ν degrees of freedom, Y ∼ t_n(Xβ, Σ, ν), the Student-t spatial regression (T-SR) model can be represented as

Y = Xβ + ε,  (1)

where X is an n × p non-stochastic matrix of full rank whose ith row is x(s_i)⊤ = (x_1(s_i), . . . , x_p(s_i)) and ε ∼ t_n(0, Σ, ν). Equivalently, the model can be written as y(s_i) = µ(s_i) + ε(s_i), where µ(s_i) = Σ_{j=1}^{p} x_j(s_i)β_j is the mean of the stochastic process. Therefore, a realization of a Student-t process can be represented by pairing ε ∼ t_n(0, Σ, ν) with a valid covariance function for the scale matrix Σ. We concentrate on a particular parametric class of covariance functions for Σ in which, in standard geostatistical terms, σ² is the sill; τ is the nugget effect; φ determines the range of the spatial process; R(φ) is an n × n correlation matrix; and I is the identity matrix. We assume that R(φ) is an isotropic correlation matrix that depends only on the Euclidean distance d_ij = ||s_i − s_j|| between the points s_i and s_j. Thus, the likelihood function of the model parameters (β, σ², φ, τ, ν), based on the observed data y, is

L(β, σ², φ, τ, ν | y) = Γ((ν + n)/2) / [Γ(ν/2)(νπ)^{n/2} |Σ|^{1/2}] · [1 + (y − Xβ)⊤ Σ⁻¹ (y − Xβ)/ν]^{−(ν+n)/2},

where |A| denotes the determinant of a matrix A. In this work, we consider four general families of isotropic correlation functions in IR², namely the spherical, Cauchy, power exponential and Matérn families.
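To make the likelihood above concrete, the following is a minimal sketch of evaluating the multivariate Student-t log-likelihood for given parameter values. It assumes the standard t_n density stated above and takes a generic scale matrix Σ built by the caller from any of the correlation families, so it is an illustration rather than the authors' implementation.

```python
import numpy as np
from scipy.special import gammaln

def tsr_loglik(y, X, beta, Sigma, nu):
    """Log-likelihood of the T-SR model, y ~ t_n(X beta, Sigma, nu)."""
    n = y.shape[0]
    resid = y - X @ beta
    chol = np.linalg.cholesky(Sigma)                  # Sigma = sigma2 * R(phi) (plus any nugget term)
    logdet = 2.0 * np.sum(np.log(np.diag(chol)))
    quad = np.sum(np.linalg.solve(chol, resid) ** 2)  # resid' Sigma^{-1} resid
    return (gammaln((nu + n) / 2.0) - gammaln(nu / 2.0)
            - 0.5 * n * np.log(nu * np.pi) - 0.5 * logdet
            - 0.5 * (nu + n) * np.log1p(quad / nu))
```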
The Reference Prior

Palacios and Steel (2006) discuss that the derivation of a reference prior for non-Gaussian processes is not trivial. In this section, we introduce a reference prior for the T-SR model defined in (1) without a nugget effect, i.e., Σ = σ²R(φ). We obtain π(φ, ν) through the marginal model defined via an integrated likelihood.

3.1 Prior density for (β, σ², φ, ν)

For the parameters (β, σ², φ, ν), consider the family of improper priors of the form

π(β, σ², φ, ν) ∝ π(φ, ν) / (σ²)^a,  (3)

for different choices of π(φ, ν) and a. Selection of the prior distribution for φ and ν is not straightforward. Assuming an independence structure π(φ, ν) = π_1(φ) × π_2(ν), one alternative could be to select improper priors for these two parameters; nevertheless, it is necessary to be careful, since it is obligatory to show that such a selection produces a proper posterior distribution. For φ, the use of truncation over the parameter space or of vague proper priors are alternatives to overcome the improper posterior distribution problem; however, in both cases inferences are often highly dependent on the bounds used or on the hyperparameters selected for the vague distribution (Berger et al., 2001). And for ν, even when the parameter space is restricted, the maximum likelihood estimator may not exist with positive probability (Fonseca et al., 2008). Other choices of priors can be found in Geweke (1993) and De Oliveira (2007). The expression for π(φ, ν | y) based on an arbitrary prior π(φ, ν) is presented in the following proposition.
Proposition 3.1. For a < ν/2 + 1 and different choices of π(φ, ν), the posterior density π(φ, ν | y) can be written in terms of the generalized least squares estimator β̂ of β. Hence, given π(φ, ν), to guarantee propriety of the posterior density π(β, σ², φ, ν | y) and the existence of the first two moments of the Student-t distribution, a corresponding integrability condition on π(φ, ν | y) has to be ensured.
For the reference prior, θ* is the parameter of interest and β is treated as a nuisance parameter. Factorizing the prior distribution as π(β, θ*) = π(β | θ*)π(θ*) and choosing π(β | θ*) = 1, as this is the reference prior in Equation (2), we obtain an integrated likelihood in which S* = S²/σ² and ν₁ = (n − p) + ν. It is possible to show that this expression converges to the normal case when ν → +∞. Using the reference prior method (Berger and Bernardo, 1991), it is necessary to calculate certain conditional expectations; unfortunately, these conditional expectations have no analytical form in the Student-t case. One possible solution is to compute them numerically by Monte Carlo approximation, which would demand a high computational cost and make inference infeasible. For this reason, we suggest the use of the corresponding marginal expectations in our prior proposal. This suggestion may result in an improper prior (Theorem 3.1), but leads to a proper posterior distribution (Theorem 3.2).
Theorem 3.1. Under the T-SR model defined in (1), for φ > 0 and ν > 4 + ε, for any ε > 0, the prior distribution obtained through the reference prior method is of the form (3), with a = 1 and π(φ, ν) proportional to an explicit, though lengthy, expression involving quantities denoted B, C, D, B_11, B_12 and C_11. A short proof of this theorem can be found in Appendix 6.1. The following lemma provides conditions used to establish the results of Theorem 3.2.
See Appendix 6.2 for the proof. Theorem 3.2 below gives the conditions under which the reference prior introduced in Theorem 3.1 yields a proper posterior distribution.
Model selection
Let us start by setting up model selection as a hypothesis testing problem (Banerjee et al., 2014; Berger et al., 2001). Thus, we replace the usual hypotheses by candidate parametric models, say m_k, having respective parameter vectors θ_{m_k}. Under the prior density proposed in Equation (3), we compute the marginal density of the data under model m_k, m(y | m_k), by integrating the likelihood against the prior of θ_{m_k}. To compare q different models m_k, with k = 1, . . . , q, we assign equal prior probabilities to the models. Therefore, the resulting posterior probability of the k-th model is P(m_k | y) = m(y | m_k) / Σ_{j=1}^{q} m(y | m_j).
Under this criterion, the model with the largest posterior probability is preferable. Another possibility for performing model selection is to choose the model with the best predictive power.
Suppose that n₀ locations are set aside as a validation set. The mean square prediction error (MSPE) of the k-th model is then defined as the average squared difference between observed and predicted responses over those n₀ validation locations, MSPE_k = (1/n₀) Σ_{i=1}^{n₀} (y(s_i) − ŷ_k(s_i))².
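As a purely illustrative sketch (the paper obtains these criteria through its own derivations and the OBASpatial package), the snippet below shows how posterior model probabilities under equal prior weights and the MSPE on a hold-out set would typically be computed once the log-marginal likelihoods and predictions are available; all inputs are placeholder arrays.

```python
import numpy as np

def posterior_model_probs(log_marginals) -> np.ndarray:
    """P(m_k | y) under equal prior model probabilities, computed stably on the log scale."""
    log_m = np.asarray(log_marginals, dtype=float)
    w = np.exp(log_m - log_m.max())          # subtract the max to avoid overflow
    return w / w.sum()

def mspe(y_valid, y_pred) -> float:
    """Mean square prediction error over the n0 validation locations."""
    return float(np.mean((np.asarray(y_valid) - np.asarray(y_pred)) ** 2))

# Placeholder values for q = 3 candidate correlation families.
print(posterior_model_probs([-152.3, -150.1, -154.8]))
print(mspe([1.2, 0.7, 2.4], [1.0, 0.9, 2.1]))
```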
Simulation Study
The study of the frequentist properties of Bayesian inference is of interest to assess and understand the behavior of non-informative or default priors (e.g., Stein, 1985; Berger, 2006; De Oliveira, 2007; Kazianka and Pilz, 2012; Branco et al., 2013; He et al., 2020). Therefore, a simulation study is performed to assess the performance of the proposed method and compare it with a vague proper prior (vague) of the form (3).
We propose two T-SR models with coordinates s = (x_1, x_2) and R belonging to the Matérn family with κ = 0.5 (exponential correlation structure), σ² = 0.8 and φ = 2 to study the proposed priors. The first one (Scenario 1) is given by y(s_i) = 10 + ε(s_i), i = 1, . . . , n, with ε ∼ t_n(0, σ²R, ν = 5). And, to illustrate a setting beyond the trivial intercept model, the second T-SR model (Scenario 2) includes a non-constant mean structure. A total of K = 500 Monte Carlo simulations were generated for each scenario; the coordinates s were sampled at n = 100 locations of a regular lattice in D_s = [0, 10] × [0, 10].
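The following is a minimal sketch, under the stated Scenario 1 settings (exponential correlation with φ = 2, σ² = 0.8, ν = 5, a 10 × 10 regular lattice), of how one replicate of the simulated Student-t random field could be generated. It assumes the exp(−d/φ) parametrization of the κ = 0.5 Matérn correlation and uses the standard normal/chi-square mixture representation of the multivariate t; it is not code from the OBASpatial package.

```python
import numpy as np

rng = np.random.default_rng(2020)

# 10 x 10 regular lattice on [0, 10] x [0, 10] -> n = 100 locations
grid = np.linspace(0.0, 10.0, 10)
coords = np.array([(x1, x2) for x1 in grid for x2 in grid])

phi, sigma2, nu = 2.0, 0.8, 5.0
dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
R = np.exp(-dists / phi)            # Matérn with kappa = 0.5, i.e. exponential correlation
Sigma = sigma2 * R                  # scale matrix without a nugget effect

# One draw of eps ~ t_n(0, Sigma, nu) via the normal / chi-square mixture
z = rng.multivariate_normal(np.zeros(len(coords)), Sigma)
w = rng.chisquare(nu) / nu
eps = z / np.sqrt(w)

y = 10.0 + eps                      # Scenario 1: intercept-only mean
print(y[:5])
```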
For the vague proper prior, we consider a = 2.1 and π(φ, ν) = π(φ) × π(ν), with π(φ) = U(0.1, 4.72) and π(ν) = Texp(λ; ν ∈ A), where π(λ) = U(0.01, 0.25), A = [4.1, +∞) and Texp(λ; ν ∈ A) denotes the truncated exponential distribution. The distribution of λ is such that it allows the mean of the prior of ν to vary from 4 to 100. The exponential prior of ν is truncated so that ν > 4.1 in order to guarantee the existence of the Student-t process, and the prior of φ allows the practical range (the distance at which corr(s_i, s_j) < 0.05) to vary from 0.30 to 14, which are the minimum and maximum distances between the locations, respectively.
For the two scenarios, we compute the empirical equal-tailed 95% credible interval for all parameters, based on the two priors. We also compute the coverage probability for each parameter as the proportion of simulations in which the parameter lies inside the credible limits, and the expected log-length of each credible interval as the mean of the logarithm of the difference (log-length) between the upper and lower credible limits over the simulations.
The bias of each parameter was estimated as Bias_j = (1/K) Σ_{k=1}^{K} (θ̂_j^{(k)} − θ_j), where θ_j is the true parameter value and θ̂_j^{(k)} is the posterior median estimate of the j-th parameter in the k-th Monte Carlo simulation.
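Given the posterior medians and credible limits from the K = 500 replicates, the Monte Carlo summaries described above reduce to the simple computations sketched below; the function and the example arrays are illustrative only.

```python
import numpy as np

def mc_summaries(true_value, medians, lower, upper):
    """Bias, empirical coverage and expected log-length over K Monte Carlo replicates."""
    medians, lower, upper = map(np.asarray, (medians, lower, upper))
    bias = np.mean(medians - true_value)
    coverage = np.mean((lower <= true_value) & (true_value <= upper))
    log_length = np.mean(np.log(upper - lower))
    return bias, coverage, log_length

# Illustrative replicates for phi (true value 2.0)
print(mc_summaries(2.0, [1.9, 2.2, 2.05], [1.2, 1.4, 1.3], [3.1, 3.5, 3.0]))
```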
Discussion
In this paper we propose and recommend a reference prior for the spatial Student-t regression. For the proposed prior, the conditions under which it yields a proper posterior distribution were presented and discussed.
We show through simulations that the reference prior presents better performance than the vague prior: it shows small estimation bias and adequate frequentist coverage for all parameters. The OBASpatial R package is available on CRAN and allows practitioners to fit the proposed model with the different priors introduced in the manuscript.
As further work, the calculation of Jeffreys priors for the Student-t spatial regression model is of interest. Different generalizations of the family of correlation functions are also of interest.
Dispute over the 1974 MoU Box between Indonesia and Australia: How MoU Legally Binding in Two Countries?
This study aims to delineate the legal authority of Memorandum of Understanding (MoUs) within the jurisdictions of Indonesia and Australia,
A. Introduction
Along with the progression of globalization, the evolution of international cooperation manifests through the exchange of ideas, goods, and multifaceted engagements. The augmentation of international cooperation is discernible through a notable surge in collaborations among nations, particularly in the realm of business involving companies from two or more countries. Furthermore, countries engage in cooperative endeavors, extending to political matters, with the potential to yield benefits for various stakeholders. These collaborations culminate in the formation of legal bonds as formalized through written agreements. 1 Crucially, the establishment of legal bonds through cooperative efforts is not confined to alliances between nations adhering to similar legal systems. On the contrary, it is a phenomenon observed frequently between countries with disparate legal frameworks. An exemplar of such collaboration is the agreement between Australia, characterized by its adherence to common law, and Indonesia, which follows a civil law system. This cross-legal system cooperation underscores the adaptability and efficacy of international collaboration in transcending legal divergences for mutually advantageous outcomes. 2

Collaborative relationships between countries are not invariably formalized through permanent agreements; instead, they frequently take on a non-permanent or temporary nature. This approach stems from the recognition that crafting a lasting agreement entails extensive negotiations and meticulous preparation to ensure its effective implementation among the involved parties. Consequently, preliminary stage agreements are often articulated in the form of a Memorandum of Understanding (MoU). This practice serves to expedite the commencement of cooperation without impeding progress, acknowledging the complexities involved in the comprehensive negotiation and preparation required for a formalized, enduring agreement. 3

A Memorandum of Understanding (MoU) is commonly employed in confidential business collaborations or intergovernmental relationships, serving as an initial phase preceding the establishment of a permanent and binding agreement. This strategic use of MoUs allows for ongoing negotiations even after the initial agreement, facilitating flexibility in reaching a comprehensive and enduring consensus. Within Indonesia's civil law system, MoUs are perceived as possessing moral ties or equivalency to legally binding agreements. This perspective places paramount importance on the substance of the agreement rather than the nomenclature, provided it aligns with the principles outlined in Article 1320 of the Indonesian Civil Code. In the realm of civil law, MoUs are regarded as binding agreements, compelling the involved parties to promptly adhere to and fulfill their obligations as stipulated within the MoU.
In a further respect, the divergence in perspectives on MoUs between common law and civil law systems is evident in their respective interpretations. Nations following the common law legal system perceive MoUs as lacking the requisite strength to compel parties to adhere to their terms, deeming them non-legally binding due to their designation as mere memoranda of understanding. This disparity in legal interpretation poses a potential international challenge for countries engaged in cooperation while operating under distinct legal systems, as exemplified by Indonesia and Australia. Private international law does not offer explicit guidance on whether MoUs between countries with divergent legal frameworks, such as Indonesia and Australia, possess legal force akin to general agreements. This lack of specificity raises complexities in determining the binding nature of MoUs in cross-jurisdictional collaborations.
An illustrative instance of a Memorandum of Understanding (MoU) between Indonesia and Australia is evident in the 1974 MoU Box. 4 This agreement aimed to delineate sea boundaries and address ownership concerns surrounding Pasir Island. This study delves into an in-depth examination of the legal implications of MoUs in both Indonesia and Australia. Additionally, it explores mechanisms for resolving disputes arising from MoUs between the two nations, drawing upon principles of private international law.

4 The phrase "1974 MoU Box" refers to a Memorandum of Understanding (MoU) that was entered into between Indonesia and Australia in 1974. The specific details and context of this MoU, commonly known as the "1974 MoU Box," involve agreements related to the demarcation of sea boundaries and the resolution of disputes regarding the ownership of Pasir Island. The MoU was likely established to facilitate cooperation and address contentious issues between the two countries in the specified areas during that period. In the context of international treaty law, the 1974 MoU Box is an agreement that aims to regulate traditional fishing rights. This agreement may encompass provisions related to the use and utilization of fisheries resources in a specific area between the involved parties, in this case, Indonesia and Australia. As an international agreement, the 1974 MoU Box may address issues related to the sustainable use of fisheries resources, the allocation of rights and responsibilities, and the resolution of disputes concerning traditional fishing in the agreed-upon region. See Maria Sari Awida, "Efektifitas MoU Box 1974 Terhadap Hak Perikanan Tradisional Nelayan Tradisional Nusa Tenggara Timur." Thesis (Yogyakarta: Universitas Atma Jaya Yogyakarta, 2016); Akhmad Solihin, "Konflik Illegal Fishing di Wilayah Perbatasan Indonesia-Australia." Marine Fisheries: Journal of Marine Fisheries Technology and Management 1, no. 2 (2010): 29-36; Hatta Agus Kurniawan Nasution, "Kebijakan Traditional Fishing Rights dalam MoU BOX 1974 (Kasus Daerah Papeta, Kabupaten Rote Ndao, Propinsi Nusa Tenggara Timur." Thesis (Bogor: Institut Pertanian Bogor, 2008).

Furthermore, the dispute over the 1974 MoU Box between Indonesia and Australia raised questions about the legally binding nature of the agreement in both countries. To understand how a Memorandum of Understanding is legally binding in two countries, it is important to examine the elements and characteristics of an MoU, as well as the specific circumstances surrounding this particular agreement. An MoU is a formal agreement between two or more parties that outlines their mutual understanding, intentions, and commitments towards a specific goal or objective. One of the key elements of an MoU is that it serves as a preliminary agreement, much like the main points set forth in a business contract. 5 While MoUs are usually made in the form of an underhand agreement without any stamp duty and there are no compulsory obligations for more detailed agreements, they serve as a guideline or temporary guide for the parties involved. In the case of the 1974 MoU Box between Indonesia and Australia, it is important to note that both countries went through a complex process of negotiating their interests and establishing a common understanding of the intentions of the agreement. Once the parties obtained the MoU, they proceeded with a feasibility study to assess the level of feasibility and prospects of the business contract.
B. Method
The assessment of problems in this research uses normative juridical methods, which essentially examine norms, rules, principles, doctrines, theories and legal literature to find answers to the research questions. 6 This research uses a case approach and then examines the subject matter based on legislation and secondary data or library materials. The sources of this research are primary legal materials consisting of laws and other regulations, such as the Civil Code, the Arbitration Law, Australian contract law and others, as well as secondary legal materials consisting of books, journals and articles related to legal literature in the civil field. Data were analyzed using analytical descriptive methods and then interpreted so that solutions and answers to the problems could be found.
Memorandum of Understanding in Indonesia
An MoU or Memorandum of Understanding in the Indonesian context is an agreement that is generally used in various situations. However, it should be kept in mind that MoUs do not have a level of legal force equivalent to that of other formal contracts or agreements. Typically, MoUs have a lower legal status and are not legally binding, although they can sometimes have legal repercussions in certain situations. 7
An MoU is in essence an agreement to agree on other agreements, whether or not yet formed, which can be stated in writing or only verbally. It can be concluded that an MoU is largely an engagement in the sense of Article 1233 of the Civil Code, which essentially explains that every engagement arises from consent. An MoU involving two or more persons is similar to an engagement, in which the parties have rights and obligations according to the agreed portions.
Article 1234 of the Civil Code explains that every engagement can take the form of giving something, doing something, or not doing something. It emphasizes the importance of fulfilling obligations in an engagement, which can take the form of assigned duties, actions, or obligations to refrain from doing something. 8 MoUs that are made lawfully have full legal force under the principle of pacta sunt servanda, so that their position is equivalent to binding law and priority is given to the main matters agreed in the MoU. 9
There are two (2) understandings or opinions regarding the legal force of the MoU, because there are still differences of opinion regarding the position of the MoU, as follows: 10

1) Gentlemen's agreement: the legal force of the MoU cannot be equated with that of an agreement in general, even if it is agreed or made with the strongest supporting basis, such as a notarial deed (although in practice an MoU is very rarely made notarially), and it is still considered to have no power to bind the parties legally.

2) Agreement is agreement: this opinion holds that the juridical basis of the MoU gives it legal force like other agreements, namely Article 1338 paragraph (1) of the Civil Code, which in essence explains that every matter agreed by the parties applies as law between them and thus gives rise to a legal bond. In addition, referring to the principles of freedom of contract and consensualism, every matter that is considered lawful and has been agreed by the parties can be applied as a contract generally applies, and if it is stated in written form it can be said to be a contract. Theories that support this view are:
a. The lost profit theory: if an agreement can cause a loss of profit when one party defaults, the agreement can be said to be a contract.
b. The loss of trust theory: an agreement can be regarded as a contract if there is a material loss when one of the contracting parties defaults.
c. The promissory estoppel theory: if there is a bargaining process in an agreement, it can also be said to be a contract.
d. The quasi-contract theory: if an agreement has fulfilled the general terms of a contract, it is considered a legally binding contract.
Memorandum of Understanding in Australia
Australia is a country that follows the common law legal system, and in Australia an MoU is considered not to have binding legal force; it is formed so that there can be negotiation between the parties, and the negotiation cannot be used as evidence or enforced at trial.
Australian contract law considers an MoU to have binding legal force like a formal agreement only when it meets six basic elements. The six basic elements are as follows:

1) Offer: Australian courts give initial negotiation a different meaning from an offer in formal law; negotiation at the initial stage is interpreted as lacking the intention to create an agreement with legal force binding on the parties.
2) Acceptance: the stage at which both parties express agreement with the provisions set out at the offer stage.
3) Consideration: for an agreement to be formed, there must be something that makes the parties need to agree to it; the exchange of value between the parties is called consideration.
4) Mutuality of obligation: in carrying out an agreement there must be obligations to be fulfilled; this binds the parties to the content of the agreement so that no breach of obligation results in the cancellation of the agreement.
5) Competency and capacity: the parties to the agreement must have sufficient capacity to be accountable for the obligations in the agreed agreement.
6) A written instrument: last but not least, there must be a written statement or instrument to prove the agreement of both parties.

The similarity between Indonesia, Australia and international law regarding MoUs lies in the binding force declared by the courts when the MoU has a written instrument consisting of sanctions, the rights and obligations of the parties, and the result of agreement between both parties regarding the binding or non-binding force of their MoU.
Memorandum of Understanding Dispute Settlement Between Indonesia and Australia Based on Private International Relations
There are basically several principles that can be used under private international law to determine the law applicable to contracts with foreign elements or to cooperation between two countries. One can start with the traditional approach of determining the primary connecting factors that establish the presence of foreign elements in a dispute, such as nationality, the flag of a ship or aircraft, domicile, place of residence, the seat of a legal entity, and an international choice of law. Secondary connecting factors then include the doctrines of lex loci contractus, the law of the place where the contract was made; lex loci solutionis, the law of the place where the agreed contract is performed; the proper law of the contract, which determines the applicable law on the basis of the closest connection; and the most characteristic performance, the law of the party whose obligations are most characteristic of the type of contract. 11 Lex rei sitae refers to dispute resolution based on the law of the place where the goods or objects are located. 12
In fact, not every contract is subject to a single national law or regulation; the law that governs the parties is the national law chosen in the contract they have made. Huala Adolf explains that in a choice of law the parties select the rules of law of a particular country, which does not necessarily mean that that country's courts have the authority to adjudicate, and vice versa. The parties to the contract are free to determine the forum and the law of a particular country. Sudargo Gautama explains that choice of law is a freedom that allows the parties to determine or choose the law used in the contract. One solution to the choice of law issue is the theory of lex loci solutionis, which provides that the law applicable to the contract is the law of the place where the contract is performed; this gives a complete answer in finding the law to be used for dispute resolution. 13
International contracts are concluded between individuals from different countries, between individuals and legal entities from different countries, between legal entities from different countries, or between states, which inevitably have different legal systems; these differences create conflicts for the parties involved when resolving disputes. Article 6 paragraph (1) of Law No. 30 of 1999 concerning Arbitration and Alternative Dispute Resolution explains that, in essence, civil disputes may also be resolved in another way, namely through alternative dispute resolution based on good faith.
Arbitration, categorized as Alternative Dispute Resolution (ADR), emerges as a viable option for addressing international Memorandum of Understanding (MoU) disputes, exemplified by the 1974 MoU Box involving Indonesia and Australia. The United Nations Convention on the Law of the Sea (UNCLOS) delineates the arbitration procedure for alternative dispute resolution, employing a self-determination approach by arbitrators. 14
In the application of treaty dispute resolution through international arbitration, three fundamental principles necessitate consideration: the Principle of Nationality, the Principle of Reciprocity, and limitations on foreign arbitral awards. The Principle of Nationality underscores the importance of national legal considerations in determining the eligibility of a judgment to be classified as foreign. Reciprocity, as a guiding principle, dictates that not all international arbitration awards can be automatically recognized and executed. To achieve recognition and execution, the state must maintain a reciprocal relationship with the country where the award was granted. Furthermore, the limitation on foreign arbitral awards dictates that the acknowledgment and execution of an international arbitral award may be permitted only if the award originates from a country with bilateral ties to the enforcing state. These principles collectively shape the framework for utilizing international arbitration as a method for resolving MoU disputes on an international scale. 15
Moreover, the protracted dispute concerning the ownership of Pasir Island could find resolution through recourse to the International Court of Justice. Previous negotiations between the two countries have failed to yield an agreement to address the persistent issues. This situation mirrors past disputes, such as the Sipadan and Ligitan Island disagreements between Malaysia and Indonesia, where Malaysia emerged victorious. The outcome was influenced by compelling evidence, including effective occupation demonstrated through administrative processes, conservation endeavors, and protective measures, underscoring the robust legal position of Malaysia. 16
Concerning the dispute surrounding the 1974 MoU Box between Indonesia and Australia, it is noteworthy that the Australian government does not outright prohibit Indonesian fishermen from engaging in activities such as fishing, as long as they adhere to the terms agreed upon in the agreement. The formation of the 1974 MoU Box was mandated by Article 51 of the United Nations Convention on the Law of the Sea (UNCLOS), which addresses the recognition of traditional fishing rights for archipelagic countries sharing direct borders with others. However, these recognitions are contingent upon negotiations between the concerned countries. 17
It is crucial to highlight that Indonesia has consistently maintained a stance of non-recognition of the ownership of Pulau Pasir/Ashmore Reef, considering it as part of Australian territory inherited from the United Kingdom, as affirmed in the 1957 Djuanda Declaration. Australian government arrests of Indonesian fishermen are based on clear reasons, primarily stemming from violations such as the use of inappropriate fishing gear like tiger trawls that can harm the marine ecosystem, and the operation of motorized vessels not in compliance with the terms outlined in the 1974 MoU Box. 18
In this context, the lack of awareness among fishermen regarding the existence of the 1974 MoU Box and subsequent agreements, such as the 1989 Agreed Protocol, is attributed to inadequate government socialization of the boundary agreement. Additionally, the absence of clear sea boundaries for the operational areas available to Indonesian fishermen further complicates the matter. 19
D. Conclusion
Finally, this study highlighted and concluded that in Indonesia there exist two contrasting perspectives regarding the legal standing of Memoranda of Understanding (MoUs). The first viewpoint, characterized as the Gentlemen's Agreement, posits that the legal force of MoUs cannot be equated with general agreements, even when drafted in their most robust form. Conversely, the "Agreement is Agreement" stance asserts that the MoU holds equivalent legal force to any other agreement, citing Article 1338(1) of the Civil Code, which recognizes the automatic legal binding nature arising from the parties' mutual agreement. In addition, Australia, operating under the common law system, adopts a perspective that initially views MoUs as lacking inherent legal bindingness, considering them merely a prelude to core agreements during negotiations. However, an MoU can acquire binding and coercive legal force akin to a standard agreement if it adheres to the six essential elements outlined in Australian contract law. To mitigate prolonged disputes, such as the 1974 MoU Box, it is imperative for both countries to incorporate legal settlements in the MoU formulation process. This approach ensures clarity and minimizes the likelihood of misunderstandings.
In the context of international treaty dispute resolution, employing private international law becomes crucial. Identifying primary and secondary connecting factors, together with the mechanism stipulated by Article 6(1) of Law No. 30 of 1999 concerning Arbitration and Alternative Dispute Resolution, provides a way to address disputes in good faith. Additionally, disputes may be resolved through international bodies like the International Court of Justice, as exemplified by cases such as Sipadan and Ligitan, should the 1974 MoU Box dispute persist. To manage the ongoing dispute effectively, the Indonesian government can engage in proactive measures, such as socializing the terms of the 1974 MoU Box and subsequent agreements with local fishermen around Pasir Island. This proactive approach aims to prevent misunderstandings that could lead to the unwarranted arrest of Indonesian fishermen in the waters surrounding Pasir Island.
SARS-CoV-2 (COVID-19), viral load and clinical outcomes; lessons learned one year into the pandemic: A systematic review
BACKGROUND Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection is diagnosed via real time reverse transcriptase polymerase chain reaction (RT-PCR) and reported as a binary assessment of the test being positive or negative. High SARS-CoV-2 viral load is an independent predictor of disease severity and mortality. Quantitative RT-PCR may be useful in predicting the clinical course and prognosis of patients diagnosed with coronavirus disease 2019 (COVID-19). AIM To identify whether quantitative SARS-CoV-2 viral load assay correlates with clinical outcome in COVID-19 infections. METHODS A systematic literature search was undertaken for the period between December 30, 2019 and December 31, 2020 in PubMed/MEDLINE using a combination of the terms “COVID-19, SARS-CoV-2, Ct values, Log10 copies, quantitative viral load, viral dynamics, kinetics, association with severity, sepsis, mortality and infectiousness’’. After screening 990 manuscripts, a total of 60 manuscripts which met the inclusion criteria were identified. Data on age, number of patients, sample sites, RT-PCR targets, disease severity, intensive care unit admission, mortality and the conclusions of the studies were extracted, organized and analyzed. RESULTS At present there is no Food and Drug Administration Emergency Use Authorization for a quantitative viral load assay in the current pandemic. The intent of this research is to identify whether quantitative SARS-CoV-2 viral load assay correlates with severity of infection and mortality. High SARS-CoV-2 viral load was found to be an independent predictor of disease severity and mortality in the majority of studies, and may be particularly useful in susceptible individuals such as the elderly, patients with co-existing medical illnesses such as diabetes and heart disease, and the immunosuppressed. High viral load is also associated with elevated levels of TNF-α, IFN-γ, IL-2, IL-4, IL-6, IL-10 and C reactive protein, contributing to a hyper-inflammatory state and severe infection. However, there is wide heterogeneity in fluid samples and the different phases of the disease, and these data should be interpreted with caution and considered only as trends. CONCLUSION Our observations support the hypothesis of reporting quantitative RT-PCR in SARS-CoV-2 infection. It may serve as a guiding principle for therapy and infection control policies for current and future pandemics.
INTRODUCTION
The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic and its associated mortality continue to rise and spread unabated in the United States and worldwide. Coronavirus disease 2019 (COVID-19) infection is diagnosed via real time reverse transcriptase polymerase chain reaction (RT-PCR). However, this assessment is qualitative and reported as a binary positive or negative test. There is an urgent need to identify high risk patients early in the course of the illness, which includes rapid testing. Quantitative viral load may provide a valuable assessment for risk stratification and may assist with early implementation of therapy in susceptible populations such as elderly, immunosuppressed patients with comorbidities.
Quantitative viral RNA load, as determined by qRT-PCR assay and reported as a cycle threshold (Ct < 38) value and/or log10 (viral copies/mL) from respiratory or blood specimens, is a critical factor in diagnosing SARS-CoV-2 virus infection. In addition, viral load dynamics in body fluids such as plasma, serum, urine and feces are emerging as a factor in the determination of severe inflammation, infectiousness and transmissibility of COVID-19.
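For orientation, Ct values and log10 copies/mL are related through an assay's standard curve. The following is a minimal sketch of that conversion, assuming an illustrative linear calibration (a slope of about -3.3 Ct per log10 and an assumed intercept); real assays must use their own calibrated parameters, and this snippet is not drawn from any of the reviewed studies.

    def ct_to_log10_copies(ct, slope=-3.3, intercept=40.0):
        """Estimate log10(copies/mL) from a Ct value.

        Assumes the usual linear standard curve Ct = slope * log10(copies/mL) + intercept,
        i.e. roughly 3.3 Ct units per 10-fold change in viral load at ~100% PCR efficiency.
        The slope and intercept here are placeholder values, not those of any specific assay.
        """
        return (ct - intercept) / slope

    if __name__ == "__main__":
        for ct in (20.0, 25.0, 30.0, 35.0):
            print(f"Ct {ct:4.1f} -> ~{ct_to_log10_copies(ct):.1f} log10 copies/mL")

Under these assumed parameters, a lower Ct corresponds to a higher estimated viral load, which is the direction of the associations discussed throughout this review.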
A similar association of high viral load, along with age and comorbidities, with elevated mortality was also demonstrated during the previous SARS-CoV pandemic in Hong Kong in 2003 and the MERS-CoV outbreak in the Middle East in 2012 [61-64].
At present there is no Food and Drug Administration (FDA) Emergency Use Authorization issued for a quantitative viral load assay in the current pandemic [59]. The intent of this research is to identify whether quantitative SARS-CoV-2 viral load assay correlates with clinical outcomes, particularly whether there is any correlation with severity of infection and mortality. This is a correlation study and does not imply causation. The author qualitatively examined the available data from different manuscripts to find patterns and generate a hypothesis for future research. These may assist clinicians, epidemiologists and health care policy makers in developing strategies to improve care in COVID-19 sepsis.
MATERIALS AND METHODS
A systematic literature search was undertaken in PubMed/MEDLINE using a combination of the terms "COVID-19, SARS-CoV-2, Ct values, Log10 copies, quantitative viral load, viral dynamics, kinetics, severity of symptoms, sepsis, mortality'' for the period between December 30, 2019 and December 31, 2020. Review of manuscripts was performed according to the principles outlined in the Cochrane handbook. The study selection process is shown in Figure 1 (PRISMA flow diagram).
Due to an explosion of COVID-19 related research and manuscripts, search was limited to adult (> 18 years) human subjects and published in English language journals. All data is retrospective, de-identified and conforms to the ethical principles in "Declaration of Helsinki". Manuscripts from preprint non-peer reviewed servers, review articles and individual case reports were excluded. After screening 990 manuscripts, a total of 60 manuscripts which met the inclusion criteria were identified. Data on age, number of patients, sample sites, RT-PCR targets, disease severity, intensive care unit (ICU) admission, mortality and conclusions of the studies was extracted, organized and presented (Table 1). Other relevant articles with relevant information on viral load assessment and mortality, severity and infectiousness and transmission were also included for discussion purposes. During the course of the pandemic in the year 2020, the author followed the PubMed literature on the research question and carefully tracked and evaluated the consistency and quality of the published articles to ensure credibility, reliability, transferability and reduce the risk of bias. The full text of selected articles was fully read, and the key findings were extracted. To establish reliability the author recorded the data in a table and updated assessment of the results. The use of the tables for recording manuscripts provided this researcher with a chance to evaluate the results of the data provided in each manuscript and follow the trends in this topic. The table also helped in construction of concise conclusions of the data. The table is transparent and reproducible and may be useful for other researchers to follow upon.
Due to a high heterogeneity in patient population, data from different countries, different methods in sampling, comorbidities, and different parameters used, the content was analyzed and is summarized using qualitative (descriptive) terms. Data with P value (< 0.05) was considered statistically significant.
A total of 10514 patients were pooled from all reported studies. Quantitative RT-PCR and viral dynamics are reported in samples obtained from nasopharyngeal and oropharyngeal swabs, saliva, sputum, bronchial/tracheal lavage, feces, plasma/serum and urine samples. All studies had initial COVID-19 diagnosed on upper respiratory samples. Subsequent quantitative viral load was obtained and described from various other specimens and body fluids.
Most studies defined severity of illness and sepsis consistently. Variation is observed in kinetics, tissue distribution and antibody response between mild and severe infections. Wang et al [13] analyzed a cohort of 12 severe and 11 mildly ill patients and demonstrated a significant difference in the initial nasopharyngeal peak viral load (P < 0.001) between the two groups. Subsequent prolonged viral shedding in other body fluids and stool occurred, with detectable viral load for up to 40 d (days) in the severely ill compared to 15 d in the mildly ill group; viral RNA was detected from respiratory tract, stool, plasma and urine samples. Yu et al [20] analyzed their cohort of 92 patients and observed that high viral load in baseline sputum samples was linearly associated with severity and risk of disease progression (P < 0.017).
Another cohort of 96 patients with mild and severe infections demonstrated similar viral kinetics. Respiratory viral load remained elevated in the severe group up to the third and fourth week after disease onset, compared to milder group where viral load peaked in the second week followed by a decline. Subsequent viral detection in serum samples was also higher in patients with severe disease than in patients with mild disease (45% vs 27%, P < 0.03) [15].
In general, nasopharyngeal viral levels remained high in the severe group and began to decrease after 14 d of symptom onset [4,15,65]. Subsequently, samples from other sites may also test positive for the virus. For example, viral load from stool samples was found to peak during the third and fourth weeks after disease onset and to remain positive during convalescence [9,13,15,19,25,31]. Some studies also reported presence of high viral load in stool up to 50 d after onset of COVID-19 symptoms [31,38].
The significance of viral load in stool remains unclear, i.e., whether it represents a true infection or residual viral nucleic acid rather than transmissible live virus. Gastrointestinal epithelium also expresses angiotensin-converting enzyme II (ACE-2) receptors. Infection of the gastrointestinal (GI) tract may occur primarily from swallowed nasopharyngeal secretions or due to dissemination to the GI tract. Liu et al [4] analyzed their cohort of 46 mild and 30 severely ill patients with elevated nasopharyngeal viral load and demonstrated an association with severity. Viral load was 60 times higher in severe cases and was associated with severe clinical outcomes (P < 0.005). Mild cases had viral clearance, with 90% of patients testing negative after 10 d. In contrast, all severe cases had persistently elevated viral load beyond 10 d of symptoms; these patients were elderly and required ICU care.
In a cohort of patients on dialysis, Schwierzeck et al [41] also demonstrated a similar association with severity. Ct values of symptomatic cases were significantly lower compared to asymptomatic cases (22.55 and 29.94, respectively, P = 0.007), indicating an approximately 200-fold higher viral load [41]. Similarly, authors of cohorts from different countries, including Bermejo-Martin et al [48] (Spain), Shlomai et al [49] (Israel), Chen et al [52] (China), Zhou et al [54] (China) and Maltezou et al [55] (Greece), have demonstrated a statistically significant association between high viral load on admission and intubation, ICU care and multi-organ dysfunction. Collectively, these data from different cohorts of patients suggest that COVID-19 patients with a high viral load are at higher risk of severe infection with ICU admission and multi-organ dysfunction. Factors common to these cohorts were increased age and active preexisting medical co-morbidities.
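As a rough consistency check on the roughly 200-fold figure (not a calculation reported by the cited study), the Ct difference above can be converted to a fold change by assuming near-perfect PCR efficiency, i.e. a doubling of template per cycle: ΔCt = 29.94 - 22.55 = 7.39, and 2^7.39 ≈ 1.7 × 10^2, which is on the order of the approximately 200-fold difference quoted above.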
In a cohort of 48 patients, Chen et al [8] reported an association between high viral load in serum and elevated IL-6 levels (≥ 100 pg/mL) with cytokine storm in critically ill compared to mildly ill patients (P < 0.001). These patients had a higher incidence of multi-organ failure and mortality.
Similarly Xia et al [42] in their cohort of 10 patients with severe illness and elevated nasopharyngeal viral load reported severe lymphopenia with CD4+ lymphocyte counts as low as 61 cells/uL (reference value: 355-1213 cells/µL). Neutrophil to lymphocyte ratio was also elevated in this group.
Blot et al [33] in their series of 14 patients demonstrated a positive correlation of high nasopharyngeal viral load on admission with risk of hypoxemia, increased oxygen requirements and SOFA score in respiratory distress syndrome patients (P = 0.013). Similar association with increase in severity of sepsis, organ damage and mortality was also reported by Xu et al [36].
Lucas et al [66] in their series of 113 patients with COVID-19 patients demonstrated an overall increase in cells of innate lineage and a reduction in T lymphocytic cell counts. High viral load correlated significantly with levels of IFNα, IFNγ, TNF and tumor necrosis factor-related apoptosis-inducing ligand. Chemokines responsible for monocyte recruitment correlated significantly with viral load in severe disease. Inflammasome associated cytokines were also elevated, including IL-1α, IL-1β, IL-6, IL-18 and TNF [66].
Collectively, these studies provide evidence that high viral load may be a surrogate marker for predicting inflammation and severity in COVID-19 infection. Pujadas et al [27] demonstrated an association of viral load as an independent predictor of mortality in a cohort of 1145 hospitalized patients. Mean log10 viral loads significantly differed between patients who survived [n = 807; mean log10 viral load 5.2 copies/mL (SD 3)] vs those who succumbed [n = 338; 6.4 copies/mL (SD 2.7)]. A Cox proportional hazards model was adjusted for age, sex, asthma, atrial fibrillation, coronary artery disease, chronic kidney disease, chronic obstructive pulmonary disease, diabetes, heart failure, hypertension, stroke, and race. The results demonstrate a significant independent association between viral load and mortality [hazard ratio 1.07 (95% confidence interval: 1.03-1.11), P = 0.0014], i.e., a 7% increase in hazard for each log-transformed copy/mL. Univariate survival analysis also demonstrated a significant difference in survival probability between patients with high and low viral load (P = 0.0003), with a mean follow-up of 13 d and a maximum follow-up of 67 d [27].
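To illustrate what a per-log hazard ratio of 1.07 implies at the group level (an illustrative back-of-the-envelope calculation, not one reported by Pujadas et al), the mean difference between non-survivors and survivors above is 6.4 - 5.2 = 1.2 log10 copies/mL, so the corresponding relative hazard is approximately 1.07^1.2 = exp(1.2 × ln 1.07) ≈ 1.08, i.e. roughly an 8%-9% higher hazard of death attributable to that average difference in viral load, before considering the other covariates in the model.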
Similarly, Huang et al [29], in their analysis of 308 patients, demonstrated that high viral load was associated with in-hospital mortality in 6 of 16 critical patients, while no mortality was observed in the low viral load group (P < 0.0001). High viral load was associated with myocardial damage, elevated troponins, coagulopathy, and abnormal liver and renal function. Elevated IL-6, LDH and neutrophil counts and reduced CD4+ and CD8+ lymphocytes were noted in deceased patients (P < 0.0001) [29].
Collectively, these multiple cohorts of patients from different studies show a trend of association between high viral load and mortality in hospitalized patients.
The association between viral load and infectivity remains unclear, but the earlier peak in viral load in SARS-CoV-2 infection suggests that infectivity may be higher earlier in the course than would be expected based on the SARS model [5,62,63].
Subgroup analysis suggests these patients are younger and had milder disease and may be highly infectious and transmit virus to the population given their asymptomatic or presymptomatic nature of illness. These studies shed light on high viral load and its association with infectivity and transmissibility. Highest respiratory viral load was noted at pre-symptomatic stage and infectiousness peaked before symptom onset [1,2,3,5,12,14,16,22,34,40,47,50].
Xu et al [2] reported on 51 symptomatic patients, demonstrating transmission from primary (patients who visited the epicenter, Wuhan), to secondary (patients who came into contact with primary) and tertiary (patients who came into contact with only secondary cases). Their findings suggested incubation period in tertiary group was longer compared to primary and secondary groups (both P < 0.05). Ct values detected in tertiary were similar to those for the imported and secondary patients at the time of admission (both P > 0.05). For tertiary group, the viral load was undetectable in half of patients (52.63%) on day 7 and in all patients on day 14. One third of patients in imported and secondary groups remained positive on day 14 after admission. They concluded that infectivity of SARS-CoV-2 may gradually decrease in tertiary patients [2]. This study emphasizes that early quarantine and lock down measures may have mitigated the spread of disease in countries that enforced it strictly. The reason for decrease in infectivity from secondary to tertiary exposed patient remains unclear. Although speculative, this may be due to reduced quantitative viral load transmitted and other strict mask and quarantine measures [2,44].
Some reports demonstrated an association of high viral load and risk of transmission in a close-knit population [28,40]. In a cohort of 80 patients, including both health care workers and nursing home residents from the COVID-19 outbreak in Washington State, high viral load in unrecognized asymptomatic and presymptomatic patients contributed to infectiousness and transmission. Although mortality was high in these patients, it did not correlate statistically with the viral load [28]. Similarly, Kimball et al [40] analyzed their cohort of 23 patients from a long term care facility. Ten (43%) had symptoms on testing, and 13 (57%) were asymptomatic. Seven days after testing, 10 of these 13 previously asymptomatic residents had developed symptoms and were inferred as presymptomatic at the time of testing. The Ct values indicated large quantities of viral RNA in asymptomatic, presymptomatic, and symptomatic residents, suggesting potential for transmission regardless of symptoms [40].
There are at present limits to our understanding and evidence in determining infectiousness and the risk of transmissibility. As described earlier, there is evidence of ongoing viral shedding in various body fluids after symptom resolution in COVID infection and may be prolonged, especially in stool samples compared to respiratory secretions (P < 0.001-0.5) [9,13,15,19,25,31,38,67]. Currently there is no reported evidence of fecal -oral transmission. Further the severity of illness also appears to extend the duration of viral shedding. However, based on current data, there is no convincing evidence that duration of shedding correlates with duration of infectivity. The viral nucleic acid detected in various body fluids later in the course of infection may represent non-viable fragments of virions.
Wölfel et al [14] demonstrated that live virus can be cultured from respiratory samples in patients with positive SARS-CoV-2 RT-PCR. However, the percentage of positive cultures declined, and no live virus was successfully isolated after day 8 from symptom onset despite ongoing high quantitative viral load. Additionally, virus could not be isolated from samples with less than 10^5 copies/mL. However, a caveat with this cohort was that patients had mild symptoms and were young and middle aged adults. This emphasizes the point that elevated high viral load in convalescing patients may be suggestive but not a definitive factor in infectiousness and transmissibility [14].
There is evidence that children are susceptible to SARS-CoV-2 infection, but frequently do not have symptoms, raising possibility that children could be facilitators of viral transmission. Reports comparing viral kinetics in adults and pediatric patients have demonstrated that children, adolescents and adults can have same variation of viral load, but higher risk of transmission and asymptomatic illness in children may have other contributing factors[16, 47,50].
The immune responses of the host to COVID-19 and their relation to infectivity and transmission remain unclear, and data are emerging [5,13,59,68,69]. Most patients seroconvert by day 15 after symptom onset, and anti-SARS-CoV-2-NP or anti-SARS-CoV-2-RBD IgG levels correlate with virus neutralization [5]. While the risk of transmission after symptom resolution and the presence of antibodies may be lower, it cannot be ruled out with the available evidence [1-3,5]. Transmission by asymptomatic or minimally symptomatic individuals also appears likely and highlights the importance of contact tracing and isolation of exposed individuals, especially as transmission potential may be maximal early in the course of infection, as depicted in the nursing home cohort [28,40]. In their large series of 100 patients, Li et al [68] demonstrated specific anti-SARS-CoV-2 (IgM, IgG, IgA) antibodies to the S-1, N, and RBD viral proteins in the serum within two weeks after onset, which reached a peak at 17 d and maintained high levels up to 50 d post infection.
Fourati et al [69] demonstrated an inverse relationship between serum titers of neutralizing antibodies (anti-S1 IgA and IgG) and nasopharyngeal viral load: lower titers were associated with elevated viral load and severe COVID-19 sepsis. This may indicate an inability to clear infection and have a deleterious impact on survival. Patients who were alive at 28 d displayed higher titers of anti-S1 IgA and IgG on admission compared to those who succumbed [69]. A similar observation was reported by Bryan et al [59]; this study demonstrated that detection of anti-SARS-CoV-2 nucleocapsid IgG is associated with lower viral loads in patients. They concluded that high viral loads almost never coexist with SARS-CoV-2 seropositivity and suggest that persons with anti-SARS-CoV-2 antibodies on admission have reduced 30-d all-cause mortality [59]. Both these studies may suggest that the presence of antibody titers on admission, coupled with molecular testing, may be a particularly useful prognostic factor, helpful for assessing the disease course in high risk patients who cannot provide a clinical history [59,69]. The mechanism may be a lower host humoral immune response in elderly patients with comorbidities.
The heterogeneity of the non-respiratory specimens limits their significance in explaining the risk of transmission, and no correlation can be inferred; further research is needed. In addition, it is also important to determine the viability of virus outside the respiratory and gastrointestinal tract at different stages of infection in both asymptomatic and symptomatic individuals. This will improve understanding of transmission risk and allow greater certainty around appropriate guidelines. One report [74] compared the limit of detection for various assays and reported it to be between 85-499 copies/mL for CDC assays and 74 copies/mL with other commercial high-throughput laboratory analyzers. Digital droplet PCR is another technique useful in situations with a high suspicion of infection but a low viral load or a negative test. This test has the advantage of absolute quantification and higher sensitivity in viral RNA detection, especially in low viral load samples [32,75].
Strengths and limitations of this manuscript
This study is a large, pooled, qualitative content analysis of 60 manuscripts with 10514 patients from different cohorts and countries, evaluating patterns of quantitative viral load in predicting disease severity, mortality, risk of infectiousness, transmissibility, and prognosis in patients with COVID-19. The author presents the relative merits and discusses the objective data presented in these studies. This is a correlation study and does not imply causation.
However, there are certain limitations to this study. Since there is high heterogeneity of samples and data in the majority of these manuscripts, the content analysis is qualitative (narrative), and these data should be interpreted with caution and considered only as trends. Differences in the distribution of age, sex, and definitions of disease severity, and other confounding variables such as medical comorbidities, different virologic tests and heterogeneous samples may contribute to different clinical outcomes. For instance, very few studies adjusted their statistical models for the other medical morbidities which could have increased the risk of morbidity and mortality [4,6,7,15,19,27,30]. The majority of these studies are on hospitalized patients, which carries a potential bias of analyzing the more severely ill among the overall infected population. Further, variations in ACE-2 receptors and their expression in various tissues in different ethnic populations may play a role in the virulence and transmissibility of this virus [76]. A viral nucleic acid load from a particular sample assay may not represent the exact systemic viral load in the body; further, the viral load may also not represent viable virions and may be falsely misleading. In addition, there is no consistent trajectory of why certain samples test positive with high virus loads and others do not. Another important point to consider is that the majority of studies are from one country, China, and from a few medical centers around the epicenter of the outbreak, possibly leading to overlapping of population data in the reported manuscripts. Other limiting factors include that the testing protocols and standards set for RT-PCR targets vary between different laboratories [68-70]. Finally, there is always a possibility of observer (author) bias, which is to be considered.
Although the majority of studies showed a positive association between high viral load and mortality, there were three studies (434 patients) suggestive of an inverse correlation between the two. Argyropoulos et al [35], in their report on 205 patients, demonstrated an inverse correlation of admission nasopharyngeal viral load with duration and severity of sepsis, and no correlation with survival (P < 0.001). The reason for the low mortality in this study is unclear. One possible explanation could be that the viral loads detected from nasopharyngeal samples were obtained at a later time point in the disease course. As we have described earlier, SARS-CoV-2 viral load peaks earlier in the infection, followed by cytokine storm and hyper-inflammation when the innate immune system is unable to control the initial viral replication [61]. At these later time points the viral replication may start to defervesce, but the multi-organ dysfunction is secondary to a systemic hyper-inflammatory response. Similarly, Hasanoglu et al [46], in their cohort of 60 patients, demonstrated an inverse relationship of high viral load with mortality; however, their study had a mean age of 32, signifying a younger age group in which mortality is lower compared to older patients. Another group of 169 patients, reported from Spain by Carrasquer et al [57], demonstrated no statistical association of high viral load with in-hospital mortality when adjusted for age, gender and serum cardiac troponin levels. The conclusions from this study suggested myocardial damage with medical comorbidities, and not high viral loads, as the cause of increased mortality in the susceptible population.
Why is quantitative viral assay important?
Although infection and inflammation begin in the respiratory tract, extrapulmonary organs are also involved [77]. Isolation of viral nucleic acid in multiple tissues, blood and body secretions is indicative of systemic spread and of severe infection. Evidence from these manuscripts suggests that high viral load occurs in respiratory tract samples during the presymptomatic period, peaks at the onset of symptoms and gradually declines over the next one to three weeks [1,2,3,5,9,12,14,16,22,34,40]. Increased viral load in the respiratory tract represents active viral replication and is a surrogate marker for predicting severity [28,32,37,61]. This is in contrast to the previous SARS-CoV epidemic in 2003, where the peak viral load occurred during the second week after symptoms appeared and was positively correlated with increased mortality [5,62,63]. This fact explains the increased infectivity and rapid transmission of SARS-CoV-2 compared to the previous SARS-CoV epidemic [5]. Along with comorbidities, assessment of viral load from the nasopharynx or sputum may determine the risk of severe sepsis in symptomatic, hospitalized elderly patients [4,5,18]. High viral load is also associated with elevated cytokines and lymphopenia, i.e., markers of inflammation, and portends a poor prognosis [8,24,33,36,37,42,52,65,66]. Early determination of viral load also has therapeutic benefits, such as guiding the administration of convalescent plasma, neutralizing antibodies, antiviral medicines and corticosteroids in susceptible elderly patients [6,7,11].
The SARS-CoV-2 pandemic continues to spread unabated in the United States and worldwide. This is particularly evident after the end of lockdown and social distancing measures, with increased mobility of the population. A report from a reference laboratory evaluated 29713 de-identified samples from the respiratory tract; 14.9% of samples tested positive. The highest positivity rate was identified in males born between 1964 and 1974. Patients between the ages of 11 and 25 had the highest viral load (> 10 log10 copies/mL). The clinical symptoms or outcomes of these patients were not known. This study demonstrates that high viral load in the younger group may be an important risk factor for infectivity and transmission in a community, regardless of symptom status [78].
COVID-19 infections in younger asymptomatic patients with high viral load may fare well because of their robust physiologic reserve. However, these patients are at the highest risk of transmitting the disease and are called super spreaders. These infections generally appear asymptomatic or milder in the younger population, but elderly patients bear the brunt of severe infection, hospitalization and mortality [61,62].
CONCLUSION
High SARS-CoV-2 viral load was found to be an independent predictor of disease severity and mortality in a high proportion of studies, and may be useful in predicting the clinical course and prognosis of patients with COVID-19. However, there is wide heterogeneity in fluid samples and the different phases of the disease, and these data should be interpreted with caution and considered only as trends. In aggregate, these observations support the hypothesis of checking and reporting viral load by quantitative RT-PCR, instead of a binary assessment of a test being positive or negative.
Research background
High viral load has implications for clinical outcomes in the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic. At present there is no Food and Drug Administration Emergency Use Authorization for a quantitative viral load assay in the current pandemic. Currently, coronavirus disease 2019 (COVID-19) tests are reported as a binary assessment of either a positive or negative test.
Research motivation
The intent of this research is to identify whether quantitative SARS-CoV-2 viral load assay correlates with severity of infection and mortality.
Research objectives
To assess high viral load and its association with the severity, mortality, infectiousness in COVID-19 infections.
Research methods
A systematic literature search was undertaken for the period between December 30, 2019 and December 31, 2020 in PubMed/MEDLINE using a combination of the terms "COVID-19, SARS-CoV-2, Ct values, Log10 copies, quantitative viral load, viral dynamics, kinetics, association with severity, sepsis, mortality and infectiousness''. Data on age, number of patients, sample sites, real time reverse transcriptase polymerase chain reaction (RT-PCR) targets, disease severity, intensive care unit admission, mortality and the conclusions of the studies were extracted, organized and analyzed.
Research results
High SARS-CoV-2 viral load was found to be an independent predictor of disease severity and mortality in a high proportion of studies, and may be useful in predicting the clinical course and prognosis of patients with COVID-19.
Research conclusions
There is a wide heterogeneity in fluid samples and different phases of the disease and these data should be interpreted with caution and only considered as trends. In aggregate, these observations support the hypothesis of checking and reporting viral load by quantitative RT-PCR, instead of binary assessment of a test being positive or negative.
Research perspectives
In the future, viral load should be monitored and analyzed in longitudinal studies so that it can be considered in the interpretation of outcome data. It may also serve as a guiding principle for therapy and infection control policies for current and future pandemics.
A CHARACTER-BASED CONSTRUCTIONAL APPROACH TO CHINESE IMPERFECTIVE
在 zai and 着 zhe are commonly recognized imperfective aspect markers in Mandarin Chinese, though there are noticeable differences between their distributions and functions. By drawing on the origins, historical evolution, and corpus data for the meanings and functions of these two characters, it is observed that both are polysemous, each displaying a semantic network organized around a central sense, and thus the characters 在 and 着 are distinct form and meaning pairings. 在 is a construction indicating presence within a certain range, while 着 generally denotes 'reach to'. Related to their basic meanings, 在 and 着 each exhibit certain constraints when marking imperfective aspect. On this character-based constructional account, the qualifications of 在 and 着 as Chinese imperfective aspect markers are theoretically arguable.
Introduction
Aspects are different ways of viewing the internal temporal constituency of a situation (Comrie, 1976, p. 3;Bybee, 2003, p. 157). The contrast of perfective and imperfective is the most basic distinction of aspect. The perfective indicates that the situation is to be viewed as a bounded whole, looking at the situation from outside, without necessarily distinguishing any of its internal structure. The imperfective looks at the situation from inside, or looks inside its temporal boundaries, and it is crucially concerned with its internal temporal structure (Kibort, 2008).
According to Li & Thompson (1981, p. 185), Chinese has the following system of verbal aspect:
(1) i. Perfectivity: 了 le and perfectivizing expressions
ii. Imperfectivity (durative): 在 zai, 着 zhe
iii. Experiential aspect: 过 guo
iv. Delimitative: reduplication of the verb
This perspective is generally agreed on. 在 zai and 着 zhe are therefore commonly recognized as two imperfective aspect markers in Chinese (Huang, Li, & Li, 2009, p. 101). This does not mean they are treated the same by linguists. The most prominent difference is their distributions: 在 zai is preverbal while 着 zhe occurs post-verbally, as shown in (2).
Beside their distributions, meanings have not escaped from the attention of researchers either. 在 zai is argued to feature a dynamic meaning while 着 zhe is claimed to be relatively static (Kwan-Terry, 1978;Smith, 1991, p. 271).
Confronted with the distinct distributions and meanings of the two imperfective aspect markers, a question naturally arises: where do these differences come from? The present study suggests a character-based constructional approach to this problem. Section 2 describes and summarizes the forms and meanings (functions) that arise when 在 zai and 着 zhe co-occur with various event types, thus providing an overall picture of the constructions in question. The character-based constructional approach is introduced in section 3, and its applications to 在 zai and 着 zhe are laid out in section 4, combined with historical data to illustrate their processes of grammaticalization and to explain the forms and meanings of the two as distinct constructions. Closely related to their meanings, some constraints on the imperfective-aspect-marking uses of 在 zai and 着 zhe are discussed in section 5, and the Chinese imperfective aspect marking system is revisited. Section 6 is a summary and provides some implications of the character-based constructional approach for Chinese linguistics.
Event types of Chinese verbs based on time notions
Since 在 zai and 着 zhe behave differently when co-occurring with various types of events, we find it necessary to begin our description with a summary of event types denoted by Chinese verbs based on time notions.
According to Vendler's (1967, p. 106) distinction of four categories of verbs, with the refinements by Dowty (1979) and Foley & Van Valin (1984, p. 33), states hold for an unbounded period of time. Achievements occur at a single moment, with an immediate end point. Activities go for a period of time, with no defined end point. Accomplishments go on for a period of time, but with a defined end point. Travis (2010, p. 120) introduced the following matrix to represent Vendler's four categories.
(3)
            -process        +process
-definite   state           activity
+definite   achievement     accomplishment
Tai (1984) argued that Chinese verbs do not really have the subcategory of accomplishments. Most of the results in Chinese are expressed by word compounds. This view is accepted by most Chinese linguists (Chen, 1998; Jiang & Pan, 1998, p. 333; Xuan, 2013; among others).
It can also be observed that in Chinese some verbs present a combination of an achievement and the resultant state, like 坐 zuo "sit down, sit" and 站 zhan "stand up, stand". We will refer to them as achievement-states for convenience of description. Similarly, some verbs denote a combination of an activity and the resultant state, like 穿 chuan "put on, wear" and 堆 dui "pile up, lie in pile". We will refer to them as activity-states. 1
Therefore we can draw an outline of the event types denoted by Chinese verbs according to time notions.
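As a reading aid (not part of the original analysis), the feature matrix in (3) together with the two combined categories proposed above can be encoded as a small lookup, sketched here in Python; the example verbs are those cited in the text, and the feature representation itself is only an assumed illustration.

    # Vendler/Travis event types from the [±process, ±definite] matrix in (3),
    # plus the combined categories proposed for Chinese verbs in this section.
    VENDLER = {
        # (process, definite) -> category
        (False, False): "state",
        (True, False): "activity",
        (False, True): "achievement",
        (True, True): "accomplishment",
    }

    EXAMPLES = {
        "是 shi 'be'": VENDLER[(False, False)],         # permanent state
        "跳 tiao 'dance'": VENDLER[(True, False)],      # plain activity
        "坐 zuo 'sit down, sit'": "achievement-state",  # achievement + resultant state
        "穿 chuan 'put on, wear'": "activity-state",    # activity + resultant state
    }

    if __name__ == "__main__":
        for verb, category in EXAMPLES.items():
            print(f"{verb} -> {category}")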
Forms and functions of 在 zai
Corpus data shows that 在 zai actually never co-occurs with state verbs or achievement-state verbs; the verbs following it have to be activities. As for activity-state verbs like 穿 chuan "put on, wear", when co-occurring with these verbs 在 zai only indicates the progression of the activity. (Table 2 lists such activity-states, e.g. 穿 chuan "put on, wear", 包 bao "pack, hold inside" and 堆 dui "pile up; lie in pile", all of which receive the progression-of-activity reading.) As is shown in Table 2, the form "在 zai + activity" basically conveys the meaning that the activity is in progress.
This disagreement is understandable if we take into consideration all the possibilities when 着 zhe co-occurs with verbs, as presented in the following discussion.
1. "V zhe (+ object)" denotes an action in progression or a state in continuation, but not all state verbs are allowed in this form. Permanent states like 是 shi "is/am/are", 姓 xing "be surnamed" are among the few exceptions.
(4) 人们 跳着, 唱着。
Ren-men tiao-zhe, chang-zhe
people-PL dance-ZHE sing-ZHE
'People are dancing and singing.' (Lü, 1999, p. 666)
Chen (1980) noticed that when 着 zhe co-occurs with activity verbs that are volitional, the clause sounds unfinished. He suggested 着 zhe has a subordinating function and usually serves as background information for the main event in discourse. The self-sufficiency of "V zhe (+ object)" increases as the verb becomes less volitional, and the whole structure is at the same time more state-like than activity-like. So when the verb is an achievement-state or an activity-state, and the subject is not the agent of the verb, the clause purely displays a state, without any activity meaning. Existential sentences (locative inversion) fall under this situation.
top hang-ZHE one-CL painting
'There is a painting hanging on the wall.'
2. "V1 zhe (+object1) +V2 (+ object2)" denotes two events happening at the same time. V1 can be the means of V2 and V2 can be the purpose of V1. The V1 in this form needs to be an activity and "V1 zhe (+object1)" serves as background information.
(8) 说着 说着 不觉 到了 门口
shuo-zhe shuo-zhe bujue dao-le menkou
talk-ZHE talk-ZHE unconsciously arrive-PERF doorway
'arrive at the doorway unconsciously while talking' (Lü, 1999, p. 666)
4. "S + V zhe + AP" denotes the subject displays some kind of property through the experience of the verb. The verb here needs to be a perception/cognition/emotion verb, corresponding to different event types based on the time notions.
A character-based constructional approach
Ever since Langacker (1987) argued that syntactic patterns are form and meaning pairings, but at a more abstract (schematic) level than words, the lexicon-syntax continuum has become a fundamental notion among constructionalists and cognitive linguists. Croft (2003) and Baredal (2011) question the dichotomy between lexical rules and syntactic constructions. This idea was further demonstrated by Bybee (2006), who posits that there is no unitary "grammar" of language but rather a continuum of categories and constructions ranging from low frequency, highly specific, and lexical to high frequency, highly abstract, and general. Boas (2008) also points out the importance of the lexicon-syntax continuum. Langacker (2008) stated clearly that there is no clear border between lexicon and grammar.
Based on the concept of lexicon-syntax continuum, the imperfective aspect markers in Chinese, 在 zai and 着 zhe, are also likely to be derived from some specific lexical items, exhibiting complicated polysemous networks ranging from lexical meanings to grammatical functions.
As for Chinese, characters can provide crucial hints for plotting the polysemous networks of words, considering the special properties of Chinese characters as a writing system. Saussure referred to the Chinese writing system as an "ideographic system" (1983, p. 26). More specifically, Chinese characters have also been described as a "logographic writing system" (Diringer, 1962; Fabar, 1992), "morpho-syllabic" (DeFrancis, 1989), and a "morphemic writing system" (Hill, 1967; Su, 2001). Although these classifications are based on different perspectives, they all acknowledge the semantic functions of Chinese characters. In this sense, each character is a construction, as constructions are pairings of form and meaning/function. 2
Actually it has been noted there are usually some kinds of systematically related and therefore explainable connections between different meanings and functions of the same lexical items (Tyler, 2012, p. 6) and that linguistic units, i.e. lexical items, morphemes and syntactic constructions, can subsume a range of distinct but related meanings organized with respect to a central meaning (Tyler, 2012, p. 22), which means by taking the character-based constructional approach, it will be able to reveal the central meaning/function of the construction represented by the character.
In our study of Chinese imperfective aspect markers, we search for the characters 在 and 着 in the classical Chinese corpus Yuliaoku Zaixian (http://www.cncorpus.org/) to extract their meanings and functions, including imperfective aspect marking, at different times. The data are analyzed for the central meanings of these two characters as constructions.
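For illustration only, the kind of first-pass query described above could be scripted roughly as follows, scanning a plain-text export of a classical Chinese corpus for 在 and 着 and tallying the character that immediately follows each occurrence; the file name classical_corpus.txt is a hypothetical local file, and the Yuliaoku Zaixian corpus itself is consulted through its own web interface rather than this script.

    from collections import Counter

    def following_char_counts(path, target):
        """Count which character immediately follows each occurrence of `target`."""
        counts = Counter()
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                text = line.strip()
                for i, ch in enumerate(text[:-1]):
                    if ch == target:
                        counts[text[i + 1]] += 1
        return counts

    if __name__ == "__main__":
        for marker in ("在", "着"):
            top = following_char_counts("classical_corpus.txt", marker).most_common(10)
            print(marker, top)

A tally of this kind only surfaces raw co-occurrence patterns; the classification of each use into the functions discussed below still has to be done by reading the attestations in context.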
It is not really a novel idea to approach Chinese function words through characters. As early as 1825, the German linguist and philosopher Wilhelm von Humboldt spoke of a threefold isolation in Chinese: "The Chinese writing expresses, by a single sign, each simple word and each integral part of composed words; it suits the grammatical system of the language perfectly. The latter offers . . . a threefold isolation, of ideas (concepts), words, and characters". Wenzel (2010) further shed light on the relationship of Chinese grammar, the phonological system and the writing system: "The Chinese language is basically monosyllabic, has a non-alphabetic script, and offers almost no morphology (no inflections)". In the same vein, Xu (2004) and Pan (2006) proposed a "character-based method" in Chinese linguistic study and Chinese teaching. However, what is innovative about the character-based constructional approach is that it systematically incorporates construction grammar into the character-based method, including the fundamental tenet of the lexicon-syntax continuum as well as the emphasis on a bottom-up, corpus-based research method. Boas (2008) discusses how construction grammar is supposed to deal with the interactions between lexical entries and grammatical constructions, and points out that further research should be done with a bottom-up corpus-based approach. The present study is carried out along this line.
The construction 在 zai
In oracle bone inscriptions at least 3000 years ago, 在 zai first appeared as a verb meaning "be living; exist; be in/at … (some place)". According to the etymological dictionary 说文解字 Shuowenjiezi "The Explanation of Simple Graphs and Analysis of Compound Graphs", compiled by 许慎 Xu Shen (c. 30 AD-124 AD), the seal-script form and the explanation of 在 zai are as follows.
No later than the Han Dynasty (206 BC-220 AD), 在 zai developed the function of a preposition indicating "in/on/at a certain time, location or range". The progression-marking function of 在 was not developed until the Ming Dynasty (1368-1644); the earliest appearance of the "在 + VP" structure detected in the corpus is from Pingyaozhuan (1620). The polysemy network (mainly of functions, in this case) of 在 and the timeline of its development can therefore be represented in Figure 1. It can be seen that the meaning/function of 在 originates in the spatial domain and gradually extended to the temporal domain, which reflects the cognitive feature that the spatial sense is "more central" than the temporal sense (Lakoff, 1987, pp. 416-417), and the general conceptual metaphor which maps spatial notions onto nonspatial domains (Taylor, 2008; Langacker, 1987, 1990; Talmy, 2000; Boroditsky, 2000). At the same time, even though the domains changed, the sense "presence within a certain range" is well preserved. The verbal meaning "to exist" can be interpreted as occupying some space, a range in the spatial domain, and when this range happens to be in the temporal domain, according to the event types presented in section 2, it denotes an activity. Therefore, the form "A 在 B" can easily be understood as a presence construction, meaning the presence of A in the range B. The biggest constraint is that B has to cover a range in a certain domain.
The construction 着 zhe
According to Wang (2004, pp. 357-361), 着 was originally a pure verb meaning "adhere to; come into contact with; reach to". This use can be traced back at least to the Warring States Period (475 BC to 221 BC). It is normally read as zhuo for this meaning in contemporary Chinese. After the Southern and Northern Dynasties (220-589), the verb function of 着 disappeared in Chinese, but it is preserved in Japanese: 着く tsuku still means "reach; arrive at" in Japanese now. Ever since the Tang Dynasty (618-907), there could be an object after 着. Seeming rather similar to an aspect morpheme, 着 is normally pronounced "zhao" or "zhuo" for this function in modern Mandarin and apparently bears some kind of lexical meaning of "come into contact with; adhere to; reach to". The typical progressive or durative aspect marker usage of 着 was first seen in the Song Dynasty (960-1279) and did not become common until the Yuan Dynasty (1271-1368). According to the corpus search results, the state-continuation meaning, as shown in (21), developed slightly earlier than the subordinating activity-progression sense, as shown in (22). So the development of the functions of 着 along the timeline can be summarized in Figure 2, which traces 着 from a verb "adhere to; reach to; come into contact with", to a resultative verb complement "reach to; come into contact with", to particles marking state in continuation and activity in progress (subordinate), along a timeline running roughly from 500 AD to 1000 AD.
Similar to 在, the meaning of 着 also extends from the spatial domain to the temporal domain, reconfirming the "central" role of the spatial perspective (Lakoff, 1987, pp. 416-417) in human cognition. The motivation underlying this extension seems to lie in its original meaning "adhere to; reach to". The earliest form "A 着 B" means "A reaches to/comes into contact with B", from which the form "A V 着 B" was derived, with 着 specifying the result, "reach to/in contact with B". The progression-of-activity meaning occurs when "A V 着 B" is mapped onto the temporal domain, so that B is realized by another activity. So the "agent + activity + 着" form listed in section 2 can better be represented as "agent + activity A + 着 + activity B", denoting that activity A reaches to activity B in the temporal domain, as in examples (6) and (8), repeated here as (23) and (24).
(23) 说着 看了 我 一眼
shuo-zhe kan-le wo yi yan
speak-ZHE look-PERF I a glance
'gave me a glance while speaking' (Lü, 1999, p. 666)
(24) 说着 说着 不觉 到了 门口
shuo-zhe shuo-zhe bujue dao-le menkou
talk-ZHE talk-ZHE unconsciously arrive-PERF doorway
'arrive at the doorway unconsciously while talking' (Lü, 1999, p. 666)
Moreover, if the verb in "A V 着 B" represents a static state rather than an activity, the "reach to" meaning can be realized without the presence of B. To summarize, the central meaning of the character 着 is "reach to; in contact with". This meaning is retained in the different constructions involving 着, including "A 着 B" and "A V 着 (B)".
Revisit Chinese imperfective aspect marking system
Assuming 在 zai and 着 zhe are the two imperfective aspect markers in Chinese, as Li & Thompson stated in 1981, we should be able to claim that, under any circumstances, Chinese imperfective aspect is marked by either 在 or 着. However, there are actually some constraints involved with 在 and 着 respectively. Besides, there are some other plausible imperfective aspect markers in Chinese.
Constraints of aspect marking 在 zai
We have already shown that 在 can co-occur with activity verbs to denote an activity in progress, but an exception arises when the verb assigns a locative as an argument. Generally, all locatives need to be placed between 在 and the verb, appearing as adjunct phrases, probably because 在 is also the preposition commonly used to introduce locatives in Chinese (e.g., "They are staying in New York today.").
Constraints of aspect marking 着 zhe
In the first place, it has already been mentioned that "agent + activity + 着 zhe" is not self-sufficient. The function of 着 here is actually to link one activity to another, and it essentially has nothing to do with aspect.
Another important constraint concerning 着 zhe's aspect-marking function is that it cannot be negated. More precisely, it basically does not appear in negative form. As discussed in section 4, the basic meaning of 着 is "reach to", and this meaning is mapped from the spatial domain to the temporal domain. So if entities, activities or states do not come into contact (either in the spatial domain or in the temporal domain), we simply do not need 着 zhe. The negative form of (25) is displayed in (28).
Other plausible imperfective aspect markers in Chinese
Some other morphemes (characters, according to the character-based constructional approach) besides 在 zai and 着 zhe can also express imperfective aspect independently under certain circumstances, such as 正 and 呢. If we look at 正 from the character-based constructional perspective, its central meaning is "no deviation, right", consistent with its definition in 说文解字 Shuowenjiezi "The Explanation of Simple Graphs and Analysis of Compound Graphs".
In example (29), the "no deviation" meaning is mapped onto the temporal domain, thus indicating that two or more events happen exactly at the same time. The imperfective meaning is conveyed without the presence of 在 zai or 着 zhe.
As for the particle 呢 ne, there are various opinions regarding its functions. Considering the fact that 呢 ne normally occurs in the middle of discourse, this study follows Alleton's (1981) and Shao's (1989) view that the basic function of 呢 ne is "to remind, appealing to the communicators' active participation". So in spoken Chinese, as long as there is proper context, it can denote imperfective aspect independently.
Section summary
From the above analysis, the relationship between 在 zai, 着 zhe and imperfective aspect marking can generally be shown as in Figure 3. Both 在 zai and 着 zhe have their own distinct central senses, which underwent extension from the spatial domain to the temporal domain. In modern Chinese, both can express imperfective aspect conditionally, but many constraints are observed at the same time. Additionally, imperfective aspect can also be expressed by other morphemes/characters in Chinese. Therefore, under the character-based constructional account, the roles of 在 zai and 着 zhe as Chinese imperfective aspect markers are questionable. We can only say that they can indicate imperfective aspect under certain circumstances, just like some other characters such as 正 zheng or 呢 ne.
Conclusion and implication
The character-based constructional approach holds that, in Chinese, each character is a form-meaning pairing. By studying characters through their historical development and with the assumption of the lexicon-syntax continuum, we gain a new perspective on the Chinese lexicon and syntax.
Through this approach, it is discovered that the basic meaning of the character 在 is to indicate presence in a certain range, and that of 着 is "to reach to". Their meanings and functions were originally developed in the spatial domain and were later mapped onto the temporal domain, which reflects a general principle of human cognition. The process of grammaticalization is clearly exhibited here, consistent with Humboldt's hypothesis (1925) about the evolutionary stages of language.
(32) Content word > grammar word > clitic > inflectional affix (Hopper & Traugott, 2003, p. 7)

Hopper and Traugott noted that it is no coincidence that Humboldt's four stages correspond quite closely to a typology of languages that was in the air during the first decades of the nineteenth century (2003, p. 20). Chinese is basically known as an isolating language, corresponding to the "grammar word" stage in this cline. It is therefore unsurprising that Chinese grammar words are polysemous. The semantic network of a single Chinese grammar word is organized around a central sense, which, according to the character-based constructional approach, can be accessed through the corresponding character(s). For this reason, the imperfective aspect-marking use of 在 zai and 着 zhe is also constrained by their respective basic meanings, as is that of some other plausible imperfective aspect markers in Chinese. In other words, since Chinese is an isolating language, its imperfective aspect-marking system is not fully developed.
Hopefully, the character-based constructional approach will provide some novel insights for the study of Chinese linguistics and help explain constructions that remain mysterious under other frameworks, such as the famous 把 ba structure. By blurring the traditional boundary between spoken and written language, this account may also facilitate the study of classical Chinese and of Chinese dialects.
|
2019-04-21T13:13:02.248Z
|
2016-06-29T00:00:00.000
|
{
"year": 2016,
"sha1": "64d9f35a891ca9b62a59e0d62906d94364469f6a",
"oa_license": "CCBYSA",
"oa_url": "https://revije.ff.uni-lj.si/ala/article/download/4156/6343",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "b767082f54f2a6f0e9446dc8b4acc3aff76a6ee7",
"s2fieldsofstudy": [
"Linguistics"
],
"extfieldsofstudy": [
"Mathematics"
]
}
|
8628119
|
pes2o/s2orc
|
v3-fos-license
|
Inflammatory response and pneumocyte apoptosis during lung ischemia–reperfusion injury in an experimental pulmonary thromboembolism model
Lung ischemia–reperfusion injury (LIRI) may occur in the region of the affected lung after reperfusion therapy. The inflammatory response mechanisms related to LIRI in pulmonary thromboembolism (PTE), especially in chronic PTE, need to be studied further. In a PTE model, inflammatory response and apoptosis may occur during LIRI and nitric oxide (NO) inhalation may alleviate the inflammatory response and apoptosis of pneumocytes during LIRI. A PTE canine model was established through blood clot embolism to the right lower lobar pulmonary artery. Two weeks later, we performed embolectomy with reperfusion to examine the LIRI changes among different groups. In particular, the ratio of arterial oxygen partial pressure to fractional inspired oxygen (PaO2/FiO2), serum concentrations of tumor necrosis factor-α (TNF-α), myeloperoxidase concentrations in lung homogenates, alveolar polymorphonuclear neutrophils (PMNs), lobar lung wet to dry ratio (W/D ratio), apoptotic pneumocytes, and lung sample ultrastructure were assessed. The PaO2/FiO2 in the NO inhalation group increased significantly when compared with the reperfusion group 4 and 6 h after reperfusion (368.83 ± 55.29 vs. 287.90 ± 54.84 mmHg, P < 0.05 and 380.63 ± 56.83 vs. 292.83 ± 60.34 mmHg, P < 0.05, respectively). In the NO inhalation group, TNF-α concentrations and alveolar PMN infiltration were significantly decreased as compared with those of the reperfusion group 6 h after reperfusion (7.28 ± 1.49 vs. 8.90 ± 1.43 pg/mL, P < 0.05, and (19 ± 6)/10 high power fields (HPF) vs. (31 ± 11)/10 HPF, P < 0.05, respectively). The amount of apoptotic pneumocytes in the lower lobar lung was negatively correlated with the arterial blood PaO2/FiO2, showed a trend toward a positive correlation with the W/D ratio of the lower lobar lung, and was positively correlated with alveolar PMNs in the reperfusion group and the NO inhalation group. NO provided at 20 ppm for 6 h significantly alleviated LIRI in the PTE model. Our data indicate that, during LIRI, an obvious inflammatory response and apoptosis occur in our PTE model and NO inhalation may be useful in treating LIRI by alleviating the inflammatory response and pneumocyte apoptosis. This potential application warrants further investigation.
Introduction
Currently, pulmonary thromboembolism (PTE) is the third most common cause of death in hospitalized patients [1]. After PTE treatment such as thrombolytic therapy, pulmonary embolectomy, pulmonary suction thrombectomy [2][3][4], or alternative interventional strategy of balloon pulmonary angioplasty for chronic thromboembolic pulmonary hypertension (CTEPH) [5], lung ischemia-reperfusion injury (LIRI) may occur in the region of the affected lung.
Because of the organization and recanalization channels within the chronic thrombus, the ischemic changes of chronic PTE may be reduced when compared to those brought about by deliberate/experimental ligation. In addition, reperfusion pulmonary edema (RPE), which is one of the characteristics of LIRI, often develops within 48 h after surgery such as thromboendarterectomy and lung transplantation, thereby prolonging intubation for mechanical ventilation and contributing to ventilator-associated infections, longer stays in the intensive care unit, and early postoperative mortality [6][7][8].
The inflammatory response plays a pivotal role in the development of LIRI after transplantation [9]. However, the inflammatory response mechanisms related to the LIRI in PTE, especially chronic PTE, are not studied as deeply as those involved in lung transplantation because there is no ideal PTE animal model due to the fact that animals do not develop spontaneous deep vein thrombosis or PTE.
With the guidance of a Swan-Ganz flotation catheter under X-ray fluoroscopy, we have successfully established a reproducible, modified experimental canine PTE model. This model mimics the pathological changes of chronic PTE, and the location of the thrombus is similar to that of the proximal type of CTEPH [10]. The pulmonary lower lobar artery is commonly involved in PTE owing to its more extensive circulation [11]. Therefore, in this study, we aimed at precisely embolizing the right lower pulmonary lobar artery. Two weeks after the selective embolization, we performed embolectomy with reperfusion to examine the LIRI changes, especially the inflammatory response during LIRI, in the canine PTE model.
Materials and methods
Animals and study design

Animal procedures were approved by the Fujian Medical University Institutional Animal Care and Use Committee, and all experiments were conducted in strict accordance with the Guide for the Care and Use of Laboratory Animals.
Twenty-four healthy mongrel dogs (weight 20 ± 1.7 kg) were randomly divided into four groups. In the sham group (Group 1, n = 6), the procedures were the same as those performed in the other groups, except that 0.9 % NaCl was infused into the canine right lower pulmonary lobar artery in place of the autologous cylindrical blood clots. The remaining 18 dogs underwent selective embolization. Twenty milliliters of autologous blood extracted from the dogs' saphenous veins using a 20-mL syringe was rapidly injected into three 7-cm segments of pliable, sterile medical intravenous transfusion polyvinyl chloride (PVC) tubing with an inner diameter of 4 mm (designated tube I) to form cylindrical autologous blood clots at room temperature. Eight hours later, all blood clots were placed into a sterile container with 37°C saline for later use. The right external jugular vein was then dissected and cannulated with a 7F sheath. A Swan-Ganz flotation catheter (Edwards Lifesciences LLC, Irvine, CA, USA) was used to guide another PVC tube, with a length of 40 cm and an inner lumen diameter of 5 mm (tube II), to float selectively into the right lower pulmonary artery under X-ray fluoroscopic monitoring. The Swan-Ganz catheter was then extracted from inside tube II, and the three segmental autologous cylindrical blood clots formed ex vivo were selectively injected into the right lower pulmonary lobar artery through the lumen of PVC tube II. Afterwards, oral enteric-coated indomethacin tablets (0.5 mg/kg, 3 times/day for 3 days) were provided for pain relief and oral tranexamic acid (TXA) (110 mg/kg, every 12 h, for the duration of the experiment) was provided to inhibit endogenous fibrinolysis. Prophylactic penicillin (80,000 U/kg, twice daily for 1 week) was also provided to prevent infection. Two weeks later, the 18 dogs were subdivided into 3 groups. The ischemia group (Group 2, n = 6) underwent the same surgical procedures as the other groups, except the embolectomy, and the dogs were observed for 6 h. The reperfusion group (Group 3, n = 6) underwent embolectomy with reperfusion in the right lower lobar artery and the dogs were observed for 6 h after embolectomy. The NO inhalation group (Group 4, n = 6) underwent a process similar to that of the reperfusion group, but with additional NO inhalation at 20 ppm for 6 h through mechanical ventilation.
Experimental PTE canine models and embolectomy
Preparing the animal model before embolectomy

Two weeks after embolization, the PTE dogs were anesthetized with 5 mL of intravenous propofol and an intraperitoneal injection of 0.5 mL/kg of 3 % sodium pentobarbital. After endotracheal intubation, they were connected to a Servo 900C ventilator (SIEMENS, Bad Neustadt an der Saale, Germany) with volume-controlled ventilation, tidal volume of 15 mL/kg, inspired oxygen concentration of 40 %, respiratory rate of 20 breaths/min, inspiratory time of 25 %, inspiratory pause time of 10 %, and positive end-expiratory pressure of 3 cm H2O. Arterial blood from the left femoral artery was periodically collected for analysis of the ratio of arterial oxygen partial pressure to fractional inspired oxygen (PaO2/FiO2).
Embolectomy with reperfusion and mechanical ventilation
A right thoracotomy was performed through the fifth intercostal space. The right lower pulmonary lobe was mobilized by dividing the pulmonary ligament and the hilar structures were then dissected. By clamping the right lower pulmonary artery hilum, Fogarty arterial embolectomy was performed according to the exact location of the thrombus where we had previously injected the clots through the PVC tube II lumen as described above. Anastomosis of the lower pulmonary artery was performed with non-absorbable 5-0 running sutures. The lower pulmonary artery was then unclamped and reperfusion changes were observed for 6 h.
In the NO inhalation group, an NO delivery device (SensorNOx, SensorMedics Co., Yorba Linda, CA, USA) was introduced downstream of the humidifier through the inspiratory limb of the respiratory circuit after the embolectomy. NO was administered at a concentration of 20 ppm, starting immediately after initiating reperfusion and continuing for 6 h during the reperfusion period. The concentrations of NO and NO2 were determined continuously by the SensorNOx delivery device, using electrochemical cell analysis. NO2 levels did not exceed 3 ppm.
A chest tube was inserted and the thoracotomy closed. Intravenous injection of 100 U/kg heparin was performed after every surgical procedure. Arterial PaO2/FiO2 was measured at baseline (0 h) and at 2, 4, and 6 h after surgical procedures. Each animal was covered during the experimental period to prevent hypothermia.
Treatment of the animals and lung tissues
The lung was removed from each animal for observation. Serum concentrations of tumor necrosis factor-α (TNF-α) were measured at different times. Alveolar polymorphonuclear neutrophils (PMNs) and myeloperoxidase (MPO) concentrations in lung homogenates were measured. The wet to dry ratio (W/D) of small fresh tissue samples from the segment of the right lower lobe distal to the clot was measured and calculated. Lung tissue pathology, apoptotic pneumocytes, and ultrastructure were determined.
Serum TNF-α and MPO concentrations in lung homogenates

Serum TNF-α concentration at different times was measured by using an enzyme-linked immunosorbent assay (ELISA) kit (Medical Science And Technology Co., Ltd, Nuoshi, Beijing, China) according to the manufacturer's instructions. The right lower lobe lung tissues were harvested, immediately weighed, and homogenized on ice in ten times their volume of normal saline. The homogenates were centrifuged at 3,000 rpm for 15 min. MPO levels in the supernatant were measured by using an ELISA kit (Assay Designs Inc., Ann Arbor, MI, USA) according to the manufacturer's instructions.
Alveolar PMNs

Alveolar PMNs in the right lower lobar lung were observed under optical microscopy. Small pieces of the right lower lobe lung tissue were placed in 10 % formalin (Pharmaceutical company, Nai Ming, Shanghai, China), fixed, and paraffin embedded. Paraffin tissue sections were dewaxed, rehydrated, and stained with hematoxylin and eosin (H&E). After H&E staining, PMNs were counted in 10 continuous microscopic fields (magnification 400×) that only contained alveoli.
Lung W/D ratio

The right lower lobe lung tissue was excised, weighed immediately with a weighing scale (precision of 0.001 g), and then dried at 80°C with continuous blowing for 72 h. The residuum was weighed and the W/D ratio was calculated.
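For clarity, the W/D ratio is simply the wet weight of the excised tissue divided by its weight after drying. The minimal Python sketch below illustrates the calculation; the example weights are hypothetical placeholders and are not measurements from this study.

```python
def wet_to_dry_ratio(wet_weight_g: float, dry_weight_g: float) -> float:
    """Compute the lung wet-to-dry (W/D) weight ratio.

    wet_weight_g: weight of the fresh tissue immediately after excision, in grams
    dry_weight_g: weight of the residuum after drying at 80°C for 72 h, in grams
    """
    if wet_weight_g <= 0 or dry_weight_g <= 0:
        raise ValueError("weights must be positive")
    if dry_weight_g > wet_weight_g:
        raise ValueError("dry weight cannot exceed wet weight")
    return wet_weight_g / dry_weight_g

# Hypothetical example (illustrative only): a 3.86 g wet sample drying to 0.50 g
print(round(wet_to_dry_ratio(3.86, 0.50), 2))  # 7.72, in the range reported below for the reperfusion group
```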
H&E staining

The formalin-fixed lung tissues were embedded in paraffin and then cut into 4-μm-thick tissue sections, which were H&E stained.
Apoptotic pneumocytes

Terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) was performed according to the manufacturer's instructions (R&D Systems, Minneapolis, MN, USA) for apoptosis detection. The slides were analyzed by a blinded pulmonary pathologist.
TUNEL-positive pneumocytes were counted in 100 microscopic fields (magnification 400×) per lung lobe. Only the cells lining the alveolar wall with positive nuclear and no cytoplasmic staining were regarded as apoptotic pneumocytes. Those found within the interstitium or in the alveoli were not counted.
Ultrastructure

Fresh lung tissue was cut into small pieces and immersed immediately in universal fixative (1 % glutaraldehyde, 4 % paraformaldehyde, pH 7.4), post-fixed in 2 % osmium tetroxide, dehydrated in graded acetones, and embedded in an Epon-Araldite mixture (Fisher Scientific Corp., Toronto, Canada). Selected blocks were thin-sectioned, mounted on copper grids, and contrasted with uranyl acetate and lead citrate. The grids were examined for pneumocytes using a Philips 208s electron microscope (N.V. Philips, Eindhoven, Netherlands).
Statistical analysis

SPSS 11.0 software (IBM, Chicago, IL, USA) was used for statistical analysis. Numerical parameters with a normal Gaussian distribution (according to the Kolmogorov-Smirnov test) are expressed as mean ± standard deviation. Differences in the measured parameters between the different time points after surgery within the same group were analyzed by repeated-measures analysis of variance (ANOVA), and the differences between groups were assessed by ANOVA. Pearson's correlation coefficient was used to assess the correlation between two variables. P < 0.05 was considered significant.
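As an illustration of the kind of analysis described above, the sketch below uses Python with NumPy and SciPy (rather than SPSS) to run a between-group one-way ANOVA and a Pearson correlation. The group values are hypothetical placeholders, not the raw data of this study, and the snippet is only a minimal analogue of the reported workflow.

```python
import numpy as np
from scipy import stats

# Hypothetical PaO2/FiO2 values (mmHg) at 6 h for three groups (n = 6 each);
# these numbers are illustrative placeholders, not the study's raw data.
sham        = np.array([420, 455, 410, 438, 447, 430])
reperfusion = np.array([250, 310, 275, 330, 290, 302])
no_inhal    = np.array([355, 400, 370, 412, 360, 388])

# Between-group comparison by one-way ANOVA (the study used SPSS; SciPy performs the same test).
f_stat, p_anova = stats.f_oneway(sham, reperfusion, no_inhal)
print(f"ANOVA: F = {f_stat:.2f}, P = {p_anova:.4f}")

# Pearson's correlation, e.g. apoptotic pneumocyte counts vs. PaO2/FiO2 within one group.
apoptotic_counts = np.array([18, 12, 16, 9, 14, 11])  # hypothetical counts per 100 fields
pao2_fio2        = np.array([250, 310, 275, 330, 290, 302])
r, p_corr = stats.pearsonr(apoptotic_counts, pao2_fio2)
print(f"Pearson: r = {r:.2f}, P = {p_corr:.4f}")
```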
A reddish brown thrombus was observed after embolectomy of the lower pulmonary lobar artery
Thrombi taken out from the lower pulmonary lobar artery after embolectomy were completely elongated strips with multiple pink granulation-like protrusions and multiple branches consistent with the pulmonary artery branches (Fig. 1).
PaO2/FiO2 parameters at different time points
In the sham and ischemia groups, no significant difference was observed for the PaO2/FiO2 across the different time points.
Lung sample ultrastructure evaluated by electron microscopy
In the sham group, lung ultrastructural architecture was normal and type II pneumocytes with lamellar bodies were observed (Fig. 4a). Swelling of lamellar bodies in some type II pneumocytes was detected in the reperfusion group (Fig. 4b). Macrophages with long pseudopodia, phagocytosing swollen and vacuolated, degenerated lamellar bodies, were observed in the reperfusion group (Fig. 4c). Dots of high electron density were phagocytosed by macrophages (Fig. 4d). Disintegrated naked nuclei and other amorphous necrotic material were observed in the alveolar cavity in the reperfusion group (Fig. 4e). PMNs were in close contact with alveolar epithelial cells showing vacuolar degeneration in the reperfusion group (Fig. 4f).
W/D ratio of the right lower lobar lung

The right lower lobe W/D ratios in the reperfusion and NO inhalation groups were significantly higher than that of the sham group (reperfusion group: 7.73 ± 2.81 vs. 4.02 ± 1.13, P < 0.05).

H&E staining (400×)

The alveolar structure with some exudation in the right lower lung of the sham group is shown (Fig. 6a). Some collapsed alveolar structures, thickened alveolar septa, and a few exudative cells in the alveolar space were observed in specimens from the ischemia group (Fig. 6b). An incomplete, disrupted alveolar structure with a large number of exudative cells, mainly PMNs, and exudation was detected within the alveolar space in the reperfusion group (Fig. 6c). Similar incomplete, disrupted alveolar structures with exudative cells, mainly PMNs, and exudation were observed in the NO inhalation group, but to a much lower degree than in the reperfusion group (Fig. 6d).
Correlation between the amount of apoptotic pneumocytes and PaO2/FiO2, W/D ratio of the lower lobar lung, and alveolar PMNs

In the reperfusion and NO inhalation groups, the amount of apoptotic pneumocytes in the lower lobar lung was negatively correlated with the arterial blood PaO2/FiO2 (r = -0.74, P < 0.05 and r = -0.80, P < 0.05, respectively), presented a trend toward a positive correlation with the W/D ratio of the lower lobar lung (r = 0.6, P > 0.05 and r = 0.5, P > 0.05, respectively), and was positively correlated with alveolar PMNs (Table 3).
Discussion
A modified canine PTE model

We have successfully established a modified canine PTE model. This model mimics PTE pathological changes. A precise embolization into the intended right lower pulmonary artery enabled us to perform embolectomy to investigate the inflammatory response to LIRI in the PTE model. In the PTE model, thrombi taken out from the lower pulmonary lobar artery after embolectomy were complete, elongated strips with multiple pink granulation-like protrusions and multiple branches consistent with the pulmonary artery branches.
Ischemia-reperfusion injury
In general, vascular response includes at least two phases, ischemia and reperfusion, resulting in ischemia-reperfusion injury in systemic vascular beds. As demonstrated by H&E staining in our ischemia group, LIRI results in a moderate inflammation characterized by a 'chronic' fibroproliferative state, including infiltration and pulmonary remodeling [12]. In the ischemic phase, the unresolved clot and inflammatory cells provide the microenvironment and may stimulate cell proliferation and injury to the cells, which is associated with lack of oxygen and cell damage [13,14].
Once the blood flow is re-established, the injury elicited by reperfusion can be more severe than that caused by ischemia per se. The injury may also be attributed to the combined effects of ischemia and reperfusion [15]. RPE is the most important factor complicating the early postoperative period after pulmonary thromboendarterectomy [16]. In our experimental model, an incomplete, disrupted alveolar structure with a large number of exudative cells and exudation was observed within the alveolar spaces, as revealed by H&E staining after embolectomy of the lobar artery. Therefore, the significant increase in W/D ratio and alveolar PMNs may impair gas exchange (decreased PaO2/FiO2) due to increased edema formation and surfactant inactivation, as often occurs in patients after endarterectomy [17].
Injury mechanisms during LIRI
LIRI is due primarily to mechanisms that damage the alveolar-capillary barrier, especially the alveolar epithelial cells, and that increase pulmonary vascular permeability, which is associated with the formation of reactive oxygen intermediates, endothelial cell injury, cytokine activity, neutrophil activation, complement activation, and the inflammatory response [18,19].
Reactive oxygen species (ROS) generation

It is commonly believed that reperfusion lung injury is primarily based on the generation of ROS upon reperfusion, which may cause cellular damage and apoptosis [20,21]. Under hypoxic conditions, inducible nitric oxide synthase (iNOS) can be enhanced, which may lead to ROS-type cellular injury and increase oxidant radical byproducts, including the peroxynitrite anion [22]. ROS have diverse actions on pulmonary tissue, including cell proliferation, gene transcription, smooth muscle contraction, and interaction with redox enzymes [23]. The array of inflammatory mediators released into the circulation, in turn, governs the chemoattraction of various nonresident leukocytes that initiate the production of ROS [24]. Of the chemoattracted cell types, neutrophils and monocytes possess the greatest ROS-generating potential [25]. Therefore, during LIRI, the damage and apoptosis in the lung may be more serious.
The inflammatory response in our LIRI model

The inflammatory response plays a pivotal role in the development of LIRI after transplantation. This inflammatory process is characterized by the infiltration of PMNs and other inflammatory cells such as macrophages that release inflammatory mediators, including TNF-α.
Roles of macrophages and TNF-α during LIRI in the PTE model

Macrophages play a key role in the response to LIRI [26]. They are known as "the defenders of human health", phagocytosing bacteria, viruses, foreign bodies, damaged cells, and necrotic tissue. In our study, the lung ultrastructural architecture showed that swelling of lamellar bodies occurred in some type II pneumocytes in the reperfusion group. Additionally, macrophages with long pseudopodia phagocytosed the swollen and vacuolated lamellar bodies. Moreover, disintegrated naked nuclei and other amorphous necrotic material were also observed in the alveolar cavity.
Macrophages may also promote the production of inflammatory mediators [26]. Damaged cells spill cytoplasmic and nuclear components into the extracellular milieu, which activate macrophages, leading to the production of pro-inflammatory cytokines and chemokines, including TNF-α [27]. TNF-α is a pro-inflammatory factor that is released by lung macrophages in the early stages and plays an important role in initiating lung inflammation [28]. Another study showed a close correlation between pro-inflammatory factors and LIRI in rabbit models after the left pulmonary artery occlusion was released; the TNF-α level was continuously elevated in the reperfusion group [29]. In our experimental model, serum TNF-α concentrations in the reperfusion group increased significantly 6 h after reperfusion as compared with the baseline value and the value at 2 h after reperfusion. TNF-α concentrations were also much higher than those of the ischemia and sham groups. TNF-α released by macrophages is an objective index for evaluating the severity of the inflammatory response during LIRI in PTE. Therefore, we may be able to attenuate acute LIRI by reducing TNF-α levels [29]. TNF-α produced by macrophages can also lead to the recruitment of PMNs to the injured tissue [27], which may be responsible for serious lung damage during LIRI in PTE.
Roles of PMNs and MPO during LIRI in the PTE model

After ischemia-reperfusion, PMN infiltration into the alveolar cavity increases significantly, and PMNs are responsible for tissue damage [30]. In our experimental model, alveolar PMN infiltration in the reperfusion group was much higher than that of the ischemia group, resulting in the destruction of alveolar structure observed by H&E staining, namely an incomplete, disrupted alveolar structure with a large number of exudative cells and exudation. MPO concentrations in the lung tissue are related to neutrophil activation [31]. In this study, MPO concentrations in lung homogenates from the reperfusion group were much higher than those of the ischemia and sham groups. Meanwhile, the amount of alveolar PMNs in the lower lobar lung was positively correlated with lung MPO, indicating PMN recruitment and progressive activation in the lung tissue. One study demonstrated that PMNs play an important role in LIRI in a model of rat lung transplantation and that the gas exchange of the transplanted lung could be improved by reducing alveolar PMN infiltration [32]. In this study, PMN infiltration was also negatively correlated with the arterial blood PaO2/FiO2. Therefore, it is vital to control PMN activation to avoid excessive tissue damage during PTE reperfusion.
Interaction between PMNs and macrophages during LIRI in the PTE model

The inflammatory mediators produced by macrophages can also stimulate the recruitment of neutrophils to the injured tissue. It has been shown in experimental models of inflammation, and also in clinically relevant models, that macrophage-derived chemokines (TNF-α) promote neutrophil egress from the vasculature [27]. In this study, electron microscopy showed that PMNs were in close contact with alveolar epithelial cells showing vacuolar degeneration in the reperfusion group, and the amount of alveolar PMNs in the lower lobar lung was positively correlated with serum TNF-α concentrations. Thus, PMN recruitment to the injured lung tissue may be due to the increased concentrations of TNF-α produced by macrophages during LIRI in the chronic PTE model.
Apoptotic pneumocytes after ischemia-reperfusion

When a cell is sufficiently injured, cell death occurs either through necrosis or apoptosis. Apoptosis is morphologically characterized by nuclear condensation and shrinkage followed by fragmentation of nuclear chromatin without typical inflammation. Apoptosis was determined by TUNEL in this study. Studies indicated that the clinical manifestations of RPE after interventions for PTE are similar to those of lung transplantation [5][6][7][8]. A significant number of pneumocytes undergo apoptosis after reperfusion in the transplanted rat lung [33]. In our experimental model, the number of apoptotic pneumocytes increased after reperfusion, similarly to our previous study [34]. The amount of apoptotic pneumocytes in the lower lobar lung was also negatively correlated with the arterial blood PaO2/FiO2 in the reperfusion and NO inhalation groups. Therefore, during LIRI in PTE, pneumocyte apoptosis may be attributed to the low PaO2/FiO2, which is similar to apoptosis triggered by exposure to certain environmental conditions such as hypoxia [20], and may result from the increased number of alveolar PMNs after reperfusion.
NO inhalation improved PaO2/FiO2, macrophage numbers and TNF-α levels, PMN numbers and MPO levels

NO inhalation may be useful to treat acute and chronic pulmonary embolism owing to its vasodilatory property [35,36]. In the NO inhalation group, the PaO2/FiO2 decreased significantly 2 and 4 h after reperfusion. However, due to the improvement in the ratio of ventilation to blood flow (V/Q matching) [3,37,38], the PaO2/FiO2 increased gradually back to baseline 6 h after reperfusion. Compared with the reperfusion group, the PaO2/FiO2 increased significantly after inhalation of 20 ppm NO for 4 or 6 h, related to elevated inducible nitric oxide synthase (iNOS) expression and activity [39].
After NO inhalation, LIRI can be effectively blunted by reduction of the macrophage-dependent injury and attenuated by minimizing neutrophil sequestration [40]. In our study, the TNF-α concentration in the NO inhalation group decreased significantly when compared with that of the reperfusion group. Our results are in agreement with data indicating that breathing NO prevented the induction of TNF-α production and that NO inhalation improves outcomes after successful cardiopulmonary resuscitation in mice [41]. Alveolar PMN infiltration and the MPO concentration in lung homogenates in the NO inhalation group decreased significantly as compared with those of the reperfusion group, which may result from the decreased production of macrophage-derived chemokines such as TNF-α. A study showed that a short period (10 min) of NO inhalation preconditioning at a low concentration can alleviate LIRI in mice and that this is associated with the inhibition of toll-like receptors 2/4 in the lung after LIRI [42]. However, in our study, decreased TNF-α concentrations and alveolar PMN infiltration were observed after 4-6 h of NO inhalation when compared with the reperfusion group. Hence, a moderate duration of NO inhalation can alleviate LIRI in PTE.
The therapeutic window for NO applications is narrow because NO inhalation can be either protective or toxic to the lung depending on the dose, timing, duration of NO administration, source of NO, and the local redox environment [43][44][45]. A study showed that the maximum protective effect is achieved with NO concentrations between 10 and 20 ppm [46]. NO inhalation is routinely provided for the first 4 h postoperatively at doses of 15-20 ppm [47].
Limitations and clinical implications
The mechanisms of LIRI are complex and may include neutrophil activation, cytokines, ROS, arachidonic acid derivatives, complement, hemolysis, thromboxane/PGF2, and platelet-activating factor, causing cellular damage and apoptosis. In addition, the inflammatory response includes the infiltration of various inflammatory cells and the release of proinflammatory cytokines. Therefore, further studies should focus intensively on the interactions between inflammatory response factors during LIRI in PTE. The routine use of NO inhalation after lung surgery for PTE should also be further studied.
Conclusions
Our PTE model allowed us to observe an obvious inflammatory response and apoptosis during LIRI.
It seems that the pneumocyte damage caused by the inflammatory response is more serious than apoptosis during LIRI in PTE. Remarkably, physiological improvements are observed when NO inhalation is used as a therapeutic approach to treat LIRI in a canine PTE model. NO inhalation may be useful in treating LIRI resulting from acute or chronic PTE by alleviating the inflammatory response and pneumocyte apoptosis. This potential application warrants further investigation.
|
2016-05-12T22:15:10.714Z
|
2015-02-13T00:00:00.000
|
{
"year": 2015,
"sha1": "82e0b651f7aa3ebc5f8a338a0ffd2f2305375894",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11239-015-1182-x.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "82e0b651f7aa3ebc5f8a338a0ffd2f2305375894",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
218887307
|
pes2o/s2orc
|
v3-fos-license
|
A Rare Association: Ankylosing Spondylitis and a Genetic Disease
Ankylosing spondylitis (AS) is a chronic systemic inflammatory disease that affects the axial skeleton and sometimes the peripheral joints, leading to the development of bone bridges and ankylosis with impaired joint mobility and quality of life. The HLA B27 antigen, which occurs in approximately 97% of patients, is an important risk factor and also a diagnostic element to consider. The typical onset of the disease is in the 3rd-4th decade of life; juvenile onset of AS under 16 years is associated with predominant involvement of the peripheral joints and multiple complications (coxitis, acute anterior uveitis), which influence the evolution of the disease under treatment and are related to a negative prognosis. Noonan syndrome is a genetic disease with autosomal dominant transmission characterized by small stature and other phenotypic features associated with congenital heart defects, especially pulmonary stenosis and atrial septal defect. Multiple genes within the RAS subfamily, involved in various cellular signaling pathways such as signal transmission via mitogen-activated protein kinases, are responsible for the occurrence of the disorder. Different hematological diseases such as myeloproliferative syndrome and neoplastic disease, particularly affecting the lung, may be correlated with Noonan syndrome. We present the case of a young patient with juvenile-onset AS and Crohn's disease who has Noonan syndrome with operated pulmonary stenosis and atrial septal defect, the association of these diseases bringing together cumulative complications that required multiple therapies and surgical interventions with strict monitoring.
INTRODUCTION
Affecting predominantly the sacroiliac and spinal joints, ankylosing spondylitis (AS) is the prototype of the axial spondyloarthritis group, which includes, depending on the radiographic involvement of the joints, two forms of disease: non-radiographic AS, in which the joint lesion is diagnosed using MRI of the sacroiliac joint and spine, and AS, in which the joint lesion is diagnosed using X-ray of the sacroiliac joint. The HLA B27 antigen is positive in over 97% of the patients and it is a disease risk marker and a positive diagnostic element (1,2). Depending on the form of onset of AS, there are two categories: juvenile-onset ankylosing spondylitis (JOAS), with manifestations before the age of 16 years, and adult-onset ankylosing spondylitis (AOAS), with manifestations after the age of 16 years, these distinctions being useful in many respects such as symptomatology, progression and evolution of the disease. In JOAS, the peripheral joints (especially ankles, knees, hips and shoulders) are more commonly affected clinically and radiologically, and these patients are more prone to hip arthroplasty, while in AOAS the disease primarily affects the axial skeleton (3). Because of the different articular structure, in the case of JOAS the diagnosis is delayed, and clinical progression is faster and more severe, with impairment of functionality. At the same time, the prevalence of uveitis is higher in the juvenile form. In AOAS, the patients have a typical clinical picture with involvement of the sacroiliac joints and lumbar spine; the appearance of acute anterior uveitis is less frequent and it can be independent of the activity of the spinal disease (4).
The striking relationship between inflammatory bowel disease (IBD) and AS has been recognized for many years: up to 10% of IBD patients develop AS, and, vice versa, IBD commonly develops in patients primarily diagnosed with AS. As both have an important underlying genetic heritability, it has been suggested that the two diseases could have an overlapping set of predisposing genes such as HLA B27 (5).
Noonan syndrome was first described in 1883, but was named after Jacqueline Noonan, who noticed that children with a rare type of heart defect called pulmonary valve stenosis often had a characteristic physical appearance with short stature, webbed neck, wide-spaced eyes, and low-set ears. She presented her first paper on the subject in 1963, and after several more papers and recognition, the condition was officially named Noonan syndrome in 1971 (6). It is a genetic disease with autosomal dominant transmission that is characterized by specific facies, short stature, developmental delay, cardiac abnormalities and skeletal malformations. Facial features include widely spaced eyes, light-colored eyes, low-set ears, a short neck, and a small lower jaw. Heart problems may include pulmonary valve stenosis. The sternum may be either protruded or sunken, while the spine may be abnormally curved. Intelligence is often normal (7). Noonan syndrome may be caused by a mutation in any of several genes, and can be classified into subtypes based on the responsible gene. It is typically inherited in an autosomal dominant manner, but many cases are due to a new mutation and are not inherited from either parent (8). It is one of several RASopathies, the underlying mechanism of which involves a problem with a cell signaling pathway. The cause of 20-30% of Noonan syndrome cases remains unknown (9).
CASE REPORT
We present the case of a 33-year-old male with known JOAS, admitted for clinical and paraclinical re-assessment in order to continue biological treatment with golimumab (Simponi). Upon presentation, the patient experienced intense pain in the left shoulder, at the muscle insertions on the antero-superior iliac spine and at the lumbar spine, as well as bilateral knee pain requiring non-steroidal anti-inflammatory drugs (NSAIDs).
The rheumatologic illness began in 1999 with pain in the small joints of the hands; later, pain in the lumbar spine, knees and other joints was added. Under the guidance of the local rheumatologist, the HLA B27 antigen was tested and the patient was diagnosed with AS. The recommended treatment at that time included sulfasalazine and NSAIDs, without any favorable response. In addition, after 3 months, the patient presented sanguinolent diarrhea, which is why he interrupted the treatment on his own initiative. Subsequently, after another course of sulfasalazine, methotrexate and meloxicam, he gave up this therapeutic regimen in May 2006 due to the appearance of sanguinolent diarrhea, vomiting, headache, arrhythmias, dyspnea and chills. Since June 2006, the patient has been followed at the Sf. Maria Clinical Hospital, Bucharest, Romania, where he was treated with Solu-Medrol (methylprednisolone sodium succinate) pulse therapy. At this point, for the first time, a biological treatment was considered. From January 2007, adalimumab (Humira) 40 mg, 1 ampoule every 2 weeks, and low-dose methylprednisolone were given, with an initially favorable progression for 4 years (2007-2011). After 4 years, the patient presented secondary non-responsiveness to adalimumab. From 2011, the patient started treatment with etanercept (Enbrel) 50 mg, 1 ampoule per week, and NSAIDs (diclofenac, meloxicam), under which the inflammatory syndrome and painful joint symptoms persisted. In 2013 the patient underwent a left hip prosthesis for coxitis with osteonecrosis. At the same time, under etanercept (Enbrel) treatment, the patient suffered two episodes of uveitis in the same eye, which is why this treatment was switched to the infliximab biosimilar (Remsima). In 2017, 2 infusions of Remsima were given, the second with an allergic reaction. Later, treatment with certolizumab (Cimzia) 200 mg, 1 ampoule every 2 weeks, was indicated for a 3-month period, but his condition did not improve at this time either, leading, in March 2018, to a second intervention, a right hip prosthesis, also for coxitis with osteonecrosis. In 2017, due to multiple episodes of sanguinolent diarrhea and persistent anemia (hemoglobin, 8 g/dl) associated with an intense inflammatory syndrome (ESR 111 mm/h, CRP level over 100 mg/dL), a colonoscopy was performed. This procedure highlighted elements suggestive of Crohn's disease (inflammatory infiltration in the lamina propria, focal segmental cryptitis) and budesonide (Budenofalk) treatment was recommended, currently with a good response. It should be mentioned that during the evolution of the disease the patient presented significant leukocytosis, up to 30,000 leukocytes/mm3, which is why in November 2018 a hematological evaluation was performed, the bone marrow having a histopathological appearance of non-specific myeloid hyperplasia.
Considering the small stature and typical phenotypic traits (characteristic facies with hypertelorism, low-set ears and curly hair, pectus ecvinus, webbed neck) in a patient operated on at 6 years of age for atrial septal defect and pulmonary stenosis, the diagnosis of Noonan syndrome was established at 14 years of age (Fig. 1).
The clinical examination at the current presentation reveals the typical characteristics of Noonan syndrome: small stature (150 cm) and thoracic postoperative scars. The AS evaluation shows an occiput-to-wall distance of 1 cm, tragus-to-shoulder distance of 3 cm, chest expansion (inspiration-expiration difference) of 3 cm, finger-to-floor distance of 40 cm, and a positive Schober maneuver; the peripheral joints show no changes and have normal functionality (Fig. 2). ECG shows a verticalized heart, sinus rhythm and a minor right bundle branch block (Fig. 3).
Chest X-ray shows an enlarged left pulmonary hilum and minimal vascular and interstitial thickening of the left lower lobe, and the pelvic X-ray shows bilateral grade IV sacroiliitis and metallic prostheses of the coxofemoral joints (Fig. 4).
FIGURE 4. Bilateral hip arthroplasty with metallic prostheses
Colonoscopy was performed in 2017 and showed edematous, erythematous mucosa with spontaneous bleeding from the right colic flexure to the cecum and superficial ulcerations under 1 cm in size. The lesions had a segmental character, being interspersed with areas of normal mucosa. Multiple biopsies were taken at that time, confirming Crohn's disease (Fig. 5).
The confirmed diagnosis in this HLA B27-positive patient is axial AS with grade IV bilateral sacroiliitis and bilateral metallic prostheses of the coxofemoral joints, with early onset and progressive evolution, currently under treatment with golimumab (Simponi). The patient also has histologically confirmed Crohn's disease and Noonan syndrome with typical features and an operated congenital cardiac malformation.
DISCUSSION
This case is interesting because the association of AS with a genetic disease such as Noonan syndrome is rare, and both of these diseases can have complications and comorbidities that severely affect quality of life. There are few case reports in the medical literature describing the association of Noonan syndrome with peripheral spondyloarthritis and even with acute rheumatic fever (10,11). This association of Noonan syndrome with rheumatic diseases represents a challenge for clinicians because both can affect the skeleton in different ways.
The JOAS of this patient is a severe form of the disease and has a negative prognosis. The evolution of the rheumatologic disease included early coxofemoral joint damage, for which bilateral total hip arthroplasty with metallic prostheses was performed. Considering that the disease did not respond to non-steroidal anti-inflammatory therapy and that the patient had digestive intolerance to sulfasalazine, he was treated with several successive biological therapies (adalimumab, etanercept, infliximab, certolizumab) and was either non-responsive or had allergic reactions (Fig. 6).

FIGURE 6. Patient's evolution under biological treatment
During the course of the disease, colonoscopy and biopsy of the colonic mucosa were performed, histologically confirming Crohn's disease. IBD is generally reported to be associated with spondyloarthropathies in 5%-15% of cases (12). Studies have shown that microscopic gut inflammation is found in early forms of AS and is associated with age, sex, disease activity and the degree of MRI inflammation of the sacroiliac joints. Although the connection is further supported by overlapping treatment options for AS and IBD, therapeutic outcomes are not always the same. These dissimilarities can be attributed to differences not only in the cytokine pathways and cells involved in disease, tissue localization and environmental factors, but also in pharmacokinetics and biodistribution (13).
Currently, the patient is in treatment with golimumab and budesonide and the short-term progression is favorable.
CONCLUSIONS
JOAS is a severe disease with predominant peripheral joint involvement, earlier coxitis development and a higher predisposition for hip prosthesis. This form of the disease requires multiple non-biological and biological drug treatments to achieve the therapeutic goals. Patients should be closely monitored in the context of comorbidities, disease complications and treatment. The presence of a genetic disease in a patient with JOAS is a challenge in clinical practice. Moreover, the association of JOAS with Crohn's disease is a challenge for both rheumatologists and gastroenterologists. To summarize, the management of this patient with JOAS and Crohn's disease in association with Noonan syndrome requires multidisciplinary health care for a good outcome.
|
2020-04-30T09:06:59.586Z
|
2019-09-30T00:00:00.000
|
{
"year": 2019,
"sha1": "8b6cfadce9a373a94a55efc1158294d026e1adab",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.37897/rjr.2019.3.4",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "8f2e400459decb2ec15b805db49186fc97d71950",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
258939824
|
pes2o/s2orc
|
v3-fos-license
|
Philosophy’s Past: Cognitive Values and the History of Philosophy
Recent authors hold that the role of historical scholarship within contemporary philosophical practice is to question current assumptions, to expose vestiges or to calibrate intuitions. On these views, historical scholarship is dispensable, since these roles can be achieved by nonhistorical methods. And the value of historical scholarship is contingent, since the need for the role depends on the presence of questionable assumptions, vestiges or comparable intuitions. In this paper I draw an analogy between scientific and philosophical practice, in order to float one role for historical scholarship that is nonreplicable and noncontingent. It has long been acknowledged that cognitive values – features of theories that facilitate understanding, such as ontological parsimony, ideological simplicity, computational ease and fecundity – play a key role within science. The role of some of these values within philosophy also has received attention, but left understudied are the values of novelty and conservativeness. These values influence theory choice, the selection of methodology, the setting of research agenda, and the presentation of results; and are best assessed with a historically informed evaluation.
INTRODUCTION
I will start with a question. Why study the history of philosophy? That, at least, is an easy question to answer: the history of philosophy is interesting; historical scholarship has intrinsic value, and historians of philosophy do not need to justify what they are doing. But let me ask a related question. What is the instrumental value of historical scholarship for contemporary philosophy?
To answer this question, I will draw an analogy between scientific practice and philosophical practice. It has long been acknowledged that cognitive values such as ontological parsimony, ideological simplicity, and explanatory power play a role within science in theory choice, selecting a methodology or setting a research agenda. In this paper, I will argue that there are certain cognitive values, playing a role within philosophy, that are best assessed in a historical context. And so scholarship on the history of philosophy makes a distinctive contribution to philosophical practice.
A map of the paper may be helpful to the reader. I will begin with a survey of the discussion on the role of historical scholarship in contemporary scientific and philosophical practice (Section 2). This discussion has tended towards the view that this role is to question current assumptions, expose vestiges or calibrate intuitions. On this view, historical scholarship is dispensable, since these roles can be achieved by nonhistorical methods. And the value of historical scholarship is contingent, since the need for the role depends on the presence of questionable assumptions, vestiges or comparable intuitions. This yields several desiderata for an account of the instrumental value of the history of philosophy. I will argue for a role for historical scholarship which, if not indispensable, at least cannot be achieved by other methods and which is not dependent on these contingencies. To begin this argument, I then will discuss the role of values in scientific and philosophical practice (Section 3). Parameters along which theories, positions, methodologies and research agendas may be assessed include: epistemic values – features that make a theory more likely to be true, such as internal and external consistency, empirical adequacy and predictive competence; cognitive values – features that facilitate understanding, such as ideological simplicity and fecundity; and societal values, tracking the potential benefits and harms arising from a research agenda. I next will turn to the understudied values of novelty and conservativeness (Section 4). These features carry cognitive value, and can influence how contemporary philosophers set a research agenda, choose a problem set, select a methodology and present results. They may also carry epistemic and societal value. I will argue that an adequate assessment of novelty and conservativeness requires a historically informed evaluation. This role for historical scholarship meets the above desiderata. The role is modest but is not replicable by nonhistorical methods, and is not contingent on the presence of questionable assumptions, vestiges or comparable intuitions.
To illustrate this role for the history of philosophy, I briefly will discuss a case study (Section 5). And I will end by responding to a few objections (Section 6).
A SELECTIVE SURVEY OF THE RECENT DISCUSSION
In this section, I will survey some of the recent discussion on the role of historical scholarship in contemporary scientific and philosophical practice. I do not intend to give an exhaustive survey of this literature. Such a survey would not be needed for my present purposes, which is not to show that everyone else is wrong – indeed, I agree with many of the following views. But a selective and admittedly opinionated survey of a few positions will bring out some common tendencies in this literature. And the survey will bring out several desiderata for any new proposal to meet. Before looking at a few authors whose views are more promising, let me lay aside a few possible positions: inspired by Glock (2008), although not following his classification precisely, we might label these positions ahistoricism, extreme historicism and moderate historicism. According to ahistoricists, the history of philosophy is a field of study distinct from philosophy and the study of philosophy's history provides no advantage to the contemporary practice of philosophy. According to extreme historicists, philosophy is to be identified with its history – or a little more carefully, the correct contemporary philosophical methodology is to be identified with a certain kind of historical scholarship. And according to moderate historicists, philosophy and its history are distinct yet the study of the history of philosophy is a necessary component of philosophy. This is a coarse-grained classification, and others make finer-grained distinctions. The above-mentioned Glock, for example, distinguishes extreme historicism, the view that philosophy ought to be historical in both method and conclusions, from mainline historicism, which by contrast holds that philosophy ought to be historical in method but ought to draw nonhistorical conclusions. But a coarse-grained classification serves our present purposes.
I will explain soon why we can lay these positions aside. But it may prove to be useful downstream to linger for a moment on extreme historicism. Let me take for illustration just a few examples from a single volume. These authors tend to view contemporary philosophical positions, debates and methods as essentially negatively characterized by reference to their historical predecessors. For example, Taylor (1984, pp. 17ff.) holds that it is essential to understanding philosophical problems that one understands them genetically, since the reasons for our practices have become partly repressed over time. To make these reasons more perspicuous requires that we recover lost previous articulations. This historical retrieval can liberate us from past but lingering mistakes, or restore fruitful views away from which we have drifted.
To give another example, Rorty (1984) views philosophy as a Geistesgeschichte: unlike doxography, philosophy is not a mere record of what the historical views were; unlike rational reconstruction, philosophy should be concerned with historically accurate representation; yet unlike historical reconstruction, philosophy should go beyond an uncritical study expressed in the vocabulary of the historical figures themselves. The Geistesgeschichte is the formulation of a canon, and justifies our belief that we have made progress on the problem sets deserving of the honorific 'philosophy'. Somewhat similarly, Krüger (1984) holds that philosophy is essentially of a historical nature, since it is constitutive of a successor theory that it can interpret its predecessors and evaluate the limits of their applicability. Krüger (1984, pp. 93ff.) also suggests an argument that, were contemporary philosophical positions not understood by reference to historical views, we would lack warrant in our belief that there is philosophical progress. If a contemporary view is not a corrective to an earlier position, or a contemporary debate centered around a problem set that supplants and improves upon an earlier debate, we could not say there is progress, but only that there is merely a succession of one view replacing another, or one debate fizzling out and another, unrelated debate starting up in its stead.
These arguments support a claim weaker than extreme historicism. As Glock (2008) notes, the history leading up to a contemporary position might be useful when a certain condition is met, namely, when the contemporary position is fruitfully characterized negatively with respect to the historical position. This condition surely sometimes obtains. But there is no reason to think that this condition always obtains or that the condition necessarily obtains. Moreover, even when a contemporary view is helpfully characterized in contradistinction to a predecessor, these authors give us little reason to believe that such a characterization exhaustively defines the contemporary view or uniquely picks it out from other alternatives. Placing a contemporary view within its historical context may be helpful, without contemporary philosophical method and historical study thereby being identical. Similar comments can be made in response to the considerations Krüger puts forward in support of viewing the history of philosophy as providing warrant for our conviction that we are making progress. This at best shows that recognition of a certain kind of progress can be justified partly through historical scholarship; it does not show that this recognition is an essential feature of philosophical practice. For all these reasons, the characterizations of philosophy put forward by these authors fail to support extreme historicism.
Well, that was rather quick and dirty.I do not expect anyone to be persuaded against extreme historicism by these brief remarks.But we need not go further for our purposes.Recall that I said I would set aside three positions: ahistoricism, extreme historicism and moderate historicism.My reason for ignoring ahistoricism and extreme historicism is that on neither position does the question what contribution is made by historical scholarship really arise: philosophy and its past are either wholly unrelated or they are one and the same.On neither view would historical scholarship have merely instrumental value for contemporary philosophy.And I do not aim to defend moderate historicism: I wish to show that historical scholarship is beneficial to, not a requirement for, philosophical practice.
I will begin with the literature on the role of historical scholarship in contemporary scientific practice, and then turn to the analogous question for philosophy.The discussion on the relation between scientific practice and its history has been dominated over the last sixty years by claims made by Kuhn, Feyerabend and others on the incommensurability of theories separated by scientific revolutions or paradigm shifts.Such views were put forward in opposition to the view of scientific development as a continual approximation of truth, with taxonomies, methodologies and goals that remain invariant over time.Instead, theories separated by a scientific revolution might have distinct taxonomic classifications of the same entities, entirely different methods or even different goals.For example, Kuhn (2012, p. 148) argues that, according to the Ptolemaic taxonomy, part of what is meant by 'earth' is a fixed position, so Copernicus' claim that the earth moved was, by Ptolemaic lights, incoherent.
Notice, however, that incommensurability does not entail incomparability. Feyerabend (1962, p. 66) notes that some empirical observations can be seen as refuting a current theory only after an incommensurate alternative has been proffered with which to read the observations. In some cases, incommensurable alternatives can better assess a theory than commensurable alternatives. To continue the Copernican revolution example, Kuhn (2012, p. 68) notes that the discrepancies between predictions made with Ptolemy's system and the available empirical observations were best resolved by rejecting altogether the Ptolemaic taxonomy, on which the earth is by definition immoveable. Much of this discussion concerns forward progress in moving from an antecedent theory to an incommensurable subsequent theory. What about reflection backwards? On the view that some theories are incommensurable, special care must be taken to bring historical information to the assessment of contemporary theories. Antecedent views might have radically different assumptions, taxonomies, methodologies and scientific goals. But careful reflection on historical positions might nonetheless aid assessment. Just as subsequent theories can shed light on the shortcomings of their precedents, so too historical consideration of antecedent theories can throw into sharp relief the assumptions, taxonomies, methodologies and goals characteristic of subsequent theories.
Although the contrasts are likely to be striking when comparing incommensurate theories, similar assessment strategies can be employed when comparing commensurate theories not divided by scientific revolutions or paradigm shifts. For example, Maienschein, Laubichler and Loettgers (2008) draw on episodes in biological research to illustrate how the history of science can clarify assumptions in contemporary models and theories. They (2008, p. 347) write: [i]n general, critical historical evaluations are needed because the crucial assumptions and conceptual constraints, the details of the central experimental systems or original formal models, as well as the supporting data and measurements, are generally not included in current or semiaxiomatic formulations of most (biological) theories. It is, therefore, not surprising that many models and theories in biology are currently used mostly pragmatically: scientists tend to know which ones "work" and tend to modify or adapt them to different data rather than reevaluating their fundamental assumptions. However, in cases in which substantial revisions are required, researchers generally go back to the original literature in order to uncover precisely those assumptions that have constrained the model or theory in the past. In this sense, history is an essential part of the avant-garde of biology.
On this view, the history of science can be useful when there are vestigial assumptions, tacitly operative in contemporary theoretic frameworks, which are inhibiting forward progress.
Now, what of the instrumental value of historical scholarship to contemporary philosophical practice?Philosophy's history is a repository of ideas, positions, debates and methods.So looking to philosophy's past might enrich a contemporary discussion.But does approaching this repository as a history, and not a synchronic list of views, offer advantages?Not unlike Maienschein, Laubichler and Loettgers's (2008) view of the history of science, some authors view historical work as exposing vestiges whose influence on contemporary philosophy might otherwise go unnoticed.For example, Glock (2008) perhaps views the exposure of vestiges as a role for the history of philosophy that could not be provided by a repository of synchronic positions.Wilson (1992) illustrates a similar view by arguing that the contemporary discussion of perception can incorporate unawares inappropriate vestiges from a wholly different historical discussion of perception.Philosophers in the early modern period explain the relation between sense experience and physical reality as a rival to scholasticism, with science and philosophy seamlessly combined.Wilson worries that appropriating aspects of the modern discussion can bring into the contemporary discussion assumptions about the relation between science and philosophy with which we no longer agree.Historical scholarship can correct this misappropriation.Following Wilson as an explicit influence, Domski (2013) worries that appropriativists distort.Embracing the contextualist claim that philosophical practice is contextually defined, and the standards for doing philosophy are supplied by the historical situation, she argues that one role for the historian is to identify how past philosophy is different.One goal for historical scholarship, then, is to gain an enhanced perspective on the questions and concerns that define contemporary philosophy.Domski's (2013, p. 284) aim is that "[b]y looking at Descartes's and Newton's competing arguments for why it is appropriate to mathematize nature in order to understand nature, [she hopes] to show that structural realists could benefit from addressing the extent to which the epistemic and the ontological determine the development of our physical theories."Reflecting on her methodology of 'contextualism', approaching historical figures on their own terms, Domski (2013, pp. 299-300) writes that she is "not claiming that Descartes and Newton provide us the right (or only) answers to questions about what our mathematically formulated physical theories represent about nature.Nor is [her] point that attention to these historical actors can somehow correct our current practice.[Her] point is that a contextualist reading of Descartes and Newton can reorient our current discussions and enhance the terms of the current debate between epistemic and ontic structural realists." 
Others view historical work as useful for overcoming contemporary prejudices, not by exposing historical vestiges but by bringing in historical rivals to challenge contemporary views. For example, Della Rocca (2013) criticises the contemporary use of intuitions in philosophical arguments. How can we break away from this methodology? Della Rocca suggests: look to philosophers working before the 'veil of intuitions'. Della Rocca holds that historical figures typically do not aim to accommodate intuitions: as an example, he notes that Spinoza follows the Principle of Sufficient Reason to counterintuitive consequences. Historical scholarship can in this way offer alternative theories or illustrate alternative methodologies. A similar approach is taken by Schliesser (2013) and Nelson (2013) in the same volume. And Glock might endorse a somewhat similar line, taking the philosopher to critically engage with historical views with which they disagree, with the potential for correction of contemporary beliefs. He (2008, p. 892) writes: "The interpreter is open to the text precisely because she treats it as a philosophical challenge. She allows the text to question both her own understanding of it and her prejudgements about the matter at issue. The dialogue may either necessitate a revision of her interpretation, or of her prejudgements, or it may confirm the original attribution of error." Again, let me emphasize that I do not intend to provide an exhaustive survey of the literature; my goal is to bring out a few common themes, and so to lay out a few desiderata for a positive proposal. But it might be worth taking a moment to note that the view that the history of philosophy challenges contemporary views, assumptions, frameworks, or question sets has been not uncommon within the relatively recent discussion. Some view the history of philosophy as a repository of positions and arguments. For example, Garber (1988, p. 28) characterizes Bennett (1984) as viewing the history of philosophy as "a kind of storehouse of positions and arguments, positions and arguments that we can use as guides or inspirations [for] the positions we should take, or illustrations of dead ends that we should avoid." Gracia (1988, p. 108) views the history of philosophy as unnecessary for philosophy (since he takes philosophy to make claims without reference to time) but useful "for it furnishes diverse formulations of positions and arguments that facilitate the philosopher's task. In many instances it may supply the solution or the seeds of the solution that the philosopher was looking for, or it may show that certain views are oversimplistic, or that certain arguments are unsound." Sorell (2005, p. 6) holds that "older approaches can throw light on current versions of old problems, or produce instructive examples of failed solutions." The view that the history of philosophy is a repository of positions is also held by some detractors; for example, it may be behind Sauer's (2022) contention that historical scholarship lacks instrumental value (since historical views are mostly false). Others stress the exposure of contemporary assumptions. Speaking, like Wilson, of the modern view of the distinction between philosophy and science, Garber (1988, p. 41) writes: "we should be careful about attributing our distinction between philosophy and science to earlier thinkers. There is a philosophical lesson to be learned as well. . . . Why is it that we tend to see such a radical break between philosophy and science, and, more importantly, should we? The question can be raised directly, without the need for history, as Quine has done. But history brings the point home in an especially clear way: it shows us an assumption we take for granted by pointing out that it is not an assumption everyone makes." Hatfield (2005, p. 93) sees the history as allowing us to "gain new perspective on current assumptions, or to question general platitudes." Garber (2005, p. 145) takes history to "show the philosopher alternative ways of conceiving what philosophy is . . . [and so] can help free ourselves from the tyranny of the present." Cottingham (2005, p. 31) views history as "making the familiar seem strange, and vice versa. The sense of strangeness may create a kind of hiatus, making us pause and stand back from the immediate mêlée of contemporary philosophical disputes, leading us to re-evaluate the presuppositions we (often unconsciously) bring to bear on those disputes." Similar views are expressed by Williams (1994) and Garber (1989).
Before moving on, let me briefly discuss an interesting recent outlier. McDaniel (2014) takes a distinctive approach, and one which does not require that there be vestiges or unattractive contemporary views for historical scholarship to be useful to contemporary philosophical practice.
McDaniel views the history of philosophy as calibrating intuitions and confronting groupthink by exposure to philosophical traditions unlike our own. We calibrate our intuitions by coordinating our intuitions with the intuitions of others. McDaniel notes that coordination with historical figures is especially useful: because their philosophical setting is unlike our own, we enrich our philosophical community by widening it.
These are not the only approaches.But they indicate some of the tendencies, and some of the range of views, in the recent literature.Let me make a few comments as takeaways from this brief and partial survey.First, our discussion might mislead the reader.Remember, I do not claim that the history of philosophy has value only if it makes a contribution to contemporary philosophy.In my opinion, the history of philosophy is intrinsically interesting, and scholarship on this history would be worthwhile even if it made no contribution to contemporary philosophy whatsoever.But our concern here is with the instrumental value of historical scholarship.It is not easy to say with precision how the history of philosophy is relevant to contemporary philosophy and, as I will explain momentarily, the views just canvassed leave something to be desired.But I do not deny that exposing vestiges, challenging contemporary assumptions or calibrating intuitions is worthwhile.Identifying vestigial views, or contrasting contemporary assumptions with their historical antecedents, viewed as such, or calibrating intuitions against those of historical figures -that is to say, approaching these tasks by appeal to history -may, as Glock, Wilson, Domski, McDaniel and others suggest, make achieving these goals easier.And so the history of philosophy may contribute to contemporary philosophy in these ways.
But these envisioned roles for historical scholarship are attenuated, in at least two ways. The roles are dispensable, since they can be replaced by nonhistorical considerations. And the roles are often available to be filled only contingently -only if there are indeed vestiges, contemporary assumptions which can be helpfully contrasted with their historical antecedents, or comparable intuitions. This is true of many of the roles canvassed for historical scholarship within scientific practice. Consider again the view that certain scientific theories are incommensurable. On this view, as we mentioned, historical consideration of incommensurable antecedent theories can throw into sharp relief the assumptions, taxonomies, methodologies and goals characteristic of subsequent theories, but these features can be assessed without appeal to historical antecedents.
And the employment of these historical considerations are dependent on the theories being on either side of a scientific revolution.Or consider Maienschein, Laubichler and Loettgers (2008), who hold that historical positions can be used to bring into question fundamental theoretical or methodological assumptions.Notice that on this construal, too, the usefulness of the history of science to contemporary scientific practice is contingent on the need for revision.Furthermore, this is a role that may be played by nonhistorical methods: historical research may be useful for uncovering fundamental assumptions of methods and theories, but careful analysis can achieve the same ends.
Turning to the discussion of philosophical practice, recall that Glock holds that a history offers advantages over a synchronic list of positions, debates and methods, since some historical positions are vestiges, still playing an implicit role in contemporary philosophy.Glock (2008, p. 882) concedes that, although it may be easier to bring them into view through historical scholarship, it is possible to retrieve these implicit features nonhistorically.Moreover, on this view, the utility of the history of philosophy is contingent on the presence of vestigial traces of that history.If there happens to be no such remnants, then the history of philosophy offers nothing that could not be provided by a repository of synchronic positions.Indeed, the space of possible philosophical positions at any time far outstrips the collection of philosophical positions which were at some time or other actually endorsed, so limiting oneself to historical views, when assessing contemporary ones, would be disadvantageous, even if there are vestiges.Recall that Wilson holds that there are anachronistic views concerning the relation between science and philosophy hidden in contemporary discussions of perception.Historical scholarship would be needed to identify these views as vestigial traces of earlier discussions of perception.But exposing these views as inappropriate in the contemporary discussion would not require any historical input.
Domski, recall, holds the view that history can reorient and enhance our current discussions.But Domski recognizes the attenuated role of the history of philosophy on her view.Of this role, Domski (2013, pp. 299-300) writes: Must we turn to history to gain this deeper perspective on our current practices?Here I grant that the answer is no.But the main question at hand is whether we can turn to history for such a perspective, and in this regard, I hope I have said enough to address the worries of revisionism and distortion and to show that, with the proper mediation, contextualist history can illuminate our current philosophical circumstances and possibly even motivate us to reorient our philosophical priorities.
Moreover, if the role is there to be filled, it is still contingent whether a historical approach is advantageous, since this depends on the presence of whatever factors make exposure of vestiges or contemporary assumptions through historical approach easier than exposure through nonhistorical approaches.
Finally, consider McDaniel's proposal that the history of philosophy helps to calibrate contemporary intuitions.The calibration of our intuitions benefits from the expansion of the data set of intuitions.But as McDaniel notes, this expansion is a goal that can be achieved through a variety of nonhistorical methods -for example, by enlarging the demographic representation of those practicing philosophy, through the study of other contemporary philosophical traditions, and by the polling of those not familiar with academic philosophy through the methods of experimental philosophy.The calibration of intuitions is arguably an important role within philosophical practice, and one that can be played by historical scholarship, but it is a role that is replicable by other methods.A merit of McDaniel's view is that the instrumental value of historical scholarship is not dependent on the presence of vestiges or unattractive contemporary assumptions.The calibration of any given set of intuitions is beneficial regardless whether there is local agreement or disagreement between contemporary philosophers and historical figures.However, calibration arguably requires historical cases sufficiently similar to contemporary cases to elicit comparable intuitions.The particular views might be dissimilar, but calibration requires that the problem sets and methodologies be commensurable.The value of historical scholarship for the calibration of contemporary intuitions is contingent on the presence of comparable intuitions.
Our survey of views has yielded several desiderata. I will argue for a modest role for historical scholarship, but one that cannot be played by nonhistorical considerations, and one that is not dependent on contingencies such as the presence of vestiges, unattractive contemporary assumptions or comparable intuitions.
VALUES IN SCIENTIFIC AND PHILOSOPHICAL PRACTICE
To begin this argument, I will discuss the role of values in scientific practice.The discussion of values in this context arose in response to the observation that evidence underdetermines theory choice.To adjudicate among rival scientific theories with equivalent conformity to empirical evidence and predictive power, philosophers of science have appealed to a wide variety of values.For example, Kuhn (2012) cites accuracy, simplicity, internal and external consistency, breadth of scope, and fruitfulness.Quine and Ullian (1978) list conservatism, modesty, simplicity, generality, and refutability.Longino (1990) cites among traditional virtues empirical adequacy or accuracy, simplicity, and explanatory power or breadth of scope.Douglas (2009, 89) lists predictive accuracy, explanatory power, scope, and simplicity or economy.However, a host of alternatives to these traditional values also have been proposed.Laudan (1984) includes prediction of surprising results, and variety of evidence among virtues.Longino (1996) cites novelty and ontological heterogeneity.And Douglas (2009) includes concern for human life, reduction of suffering, and promotion of political freedoms.
Values allow evaluation and preferential ranking among theories, positions, hypotheses, methodologies, frameworks, problem sets, and research agendas. Evaluations and rankings can be in terms of different goals and so values are of different kinds. Some evaluations are in terms of epistemic goals. Features such as internal consistency, empirical adequacy and predictive competence might assess the likelihood of truth for theories, positions and hypotheses, and they might also assess the likelihood of producing results for methodologies, frameworks, and problem sets. Let me flag that not all theorists view such traits as values. Douglas (2009) views these features instead as baseline requirements. Unlike values, which are features for which to strive but need not be fully present in all cases, these traits are necessities. And unlike values, which allow us to rank acceptable theories, these traits operate negatively to exclude a theory that does not instantiate them. Internal consistency is such a feature, since a self-contradictory theory is a non-starter, a failed theory that wears its falsity on its sleeve. Empirical adequacy is such a feature, since a theory that does not conform to the world in its broad strokes falls short of minimal requirements. And predictive competence is such a feature: as Douglas notes, predictive competence is not the same thing as predictive precision or accuracy, since a theory can be competent with neither precision nor great accuracy; but a theory that is not close enough to get by is not a contender. In the next section, I will suggest that there are truth conducive features that are not just minimal requirements.
Other values aid cognition.These evaluations are in terms of ease of understanding the positions, assessing the arguments for the positions or otherwise following the reasoning, agreeing with the intuitions or other data supporting the theory, appreciating the significance of the issue for other areas of research, and so on.For example, all else being equal, an ideologically simpler theory facilitates understanding.It is easier to grasp a theory with fewer primitive concepts than one with more, easier to follow the reasoning from those primitives to derived theorems, easier to apply the concepts so to classify the data, easier thereby to assess the classified data to confirm or disconfirm hypotheses, and so easier to appreciate the results of the theory.Similar comments could be made for ontological parsimony.Other cognitive values facilitate our appreciation of a theory, the significance of its results, and its relation to other areas of research.For example, Kuhn (2012) and Longino (1996) discuss explanatory power or breadth of scope.A theory that exhibits explanatory power can explain a wide range of phenomena.A theory that exhibits breadth of scope yields consequences that extend beyond those the theory was originally developed to explain.I will discuss several other cognitive values below.
Finally, some values are social, political or ethical.These evaluations are in terms of societal benefits and harms, or advantages and disadvantages to individuals.Scientific theory choice is sensitive to considerations such as applicability to current human needs, concern with the effects of adopting a theory, and the risks and potential for harm.Typically, these values are related to public standards, to regulative ideals shaping the normative discourse in a scientific community, and to criteria guiding the formulation, acceptance, and praise or disparagement of theories.
It will prove helpful later to discuss these various kinds of values -epistemic, cognitive and societal -in just a bit more detail.These classes of traits are not disjoint.For example, ontological parsimony may be both an epistemic and a cognitive value.If Ockham's Razor is true, then a theory with fewer kinds of entities is more likely to be true than a theory with more kinds of entities.But an ontologically parsimonious theory also might facilitate cognition.On the other hand, ontological homogeneity may be in certain contexts a societal or political vice.Longino (1996) argues that ontologically homogeneous theories place pressure on theorists to reduce differences by privileging one class of entity and viewing the rest as dependent, deviant, incomplete, or failed.Where ontologically homogeneous theories reduce individual differences to as few categories as possible, ontologically heterogeneous theories tend towards treating individual differences as important and not to be elided in abstractions.Longino holds that, where ontologically homogeneous scientific theories tend to support political hierarchies, and perpetuate gender oppression by reducing gender visibility, the promotion of ontologically heterogeneous theories is connected to the rejection of theories of inferiority.
To what extent do values typically viewed as nonepistemic influence theory choice? It is relatively uncontroversial that scientific inquiry is value-laden and that cognitive and societal values influence aspects of discovery such as research agenda, the allocation of financial support and other resources, and the presentation and application of results. But it is controversial that cognitive and societal values influence scientific method, theory choice and the justification of results. Some authors hold that this influence is minimal. For example, Douglas (2009) holds that there is a legitimate direct role for social and ethical values in the initial stages of a scientific project, contributing to the decision which projects to undertake, and what methodological approach to adopt. This role is direct, since the values provide reasons to pursue one project or one method over another. Once the study is underway, however, any direct role of values, in Douglas' view, must be highly constrained. Our desire to promote ethical or political goals should not influence what we take to be true. Rather, values can continue to play only an indirect role in directing scientific practice. Douglas views the indirect role for social and ethical values to lie in assessing the consequences of error. For example, the evidential support required to confirm a hypothesis in medical research should be higher when being wrong could harm individuals.
What of features typically taken to be cognitive values?Douglas holds that these do not provide reasons to accept a theory.Explanatory power does not entail reliability: for example, just-so stories can provide explanations despite being false.And similar comments can be made for many other cognitive values: a simple theory might be wishful thinking in a complex world; diverse phenomena might fall under the confines of one theory and so breadth may not track truth; a fruitful theory may prove false over time, even if it spurs on research; and even precise theories may not be accurate.However, if my earlier observation that it is possible for a feature to be both a cognitive value and an epistemic value is correct, these features discussed by Douglas still might provide epistemic reasons for choosing one theory over another.Such reasons are not necessarily conclusive, and must be weighed against other considerations.Douglas is right to point out that explanatory power, simplicity, breadth of scope and precision need not track truth.But the same might be said for certain canonical epistemic virtues.To give just an example or two, internally consistent theories need not be true.And a theory might competently predict results without accurately describing the events that led up to those results.Some features commonly taken to be cognitive values may well be conducive to truth, although under a ceteris paribus condition.Ockham's Razor is not well thought of as the principle that the more ontologically parsimonious theory is more likely to be true, regardless of other considerations.Rather, the Razor is the principle that a theory being more parsimonious is a reason for believing the theory -a reason that must be weighed against other considerations, some of which might pull in the other direction.And if Ockham's Razor is correct, and a theory committed to fewer kinds of entities is indeed, all else being equal, more likely to be true, then analogous reasoning might apply to many other cognitive values.A theory being ideologically simple is a reason for believing it true.A theory with explanatory power need not be reliable but, all else being equal, it is reasonable to believe that the theory with greater explanatory power is more likely to be true.And so on.I will not say more to defend these observations here; we will touch on it again in what follows; but the main points I want to make about the instrumental value of the history of philosophy will not depend on it.
Let us now draw an analogy between the roles of various values governing theory choice in the sciences, and the role of similar values in philosophical practice.There are some obvious disanalogies.Philosophy does not make use of empirical data in the same way that the sciences do.However, philosophical theories are sensitive to evaluation along epistemic, cognitive and social parameters.Research agenda, methodology selection and theory choice are all influenced by values in philosophical practice no less than in science.One might prefer one theory over another on the grounds that the former theory is more ontologically parsimonious, and this choice might be partly due to the conviction that the one theory being more parsimonious gives a reason to believe that this theory is, all else being equal, more likely to be true.Or one might endorse the ideologically simpler theory on the grounds that such an approach will aid cognition.Or one might choose to undertake a certain research project in the hope that the results will have salutary societal benefits.In the next section, I will discuss the role of certain understudied values within philosophical inquiry.
NOVELTY AND CONSERVATIVENESS
I turn to the values of novelty and conservativeness. These values are paradigmatically used in assessments of proposed theories relative to current alternatives: conservative proposals are consistent with presently accepted theories; novel proposals, inconsistent. But these evaluations can reflect conformity with, or difference from, contemporary views in a variety of ways. Novel theories, for example, can deviate from present theories by postulating different entities and processes, adopting different principles of explanation, incorporating alternative metaphors, or by attempting to explain phenomena not previously the subject of investigation. Within the philosophy of science literature, novelty is suggested by Harding's (1986) call for successor science and is encouraged by Longino (1996) and others. But values such as novelty and conservativeness are also not wholly incidental to philosophical progress. It might be easy to miss this. The novelty or conservativeness of a position would seem to lend that position little direct support. We care whether a view is true or false, not whether our predecessors held it. True, gestures towards a tradition are sometimes made. Consider for example the not uncommon trope, usually found when a view is first introduced in a paper, of name-dropping a predecessor from the canon with a vaguely similar view. The move, however, is typically a superficial matter of presentation, softening up the readers for what is coming by reminding them of a half-forgotten view likely encountered by the author and their readers during their respective but not dissimilar educations. Citing a precedent is neither intended by the author, nor taken by the reader, as supporting the truth of the view.
But values such as novelty and conservativeness also can lend support for a position.Much of this support is cognitive.A view that is conservative gains cognitive accessibility from familiarity, at least to those working in the tradition.It may well prove easier to appreciate tweaks on established views than to appreciate unusual new proposals.A theory that is novel, on the other hand, might be less immediately accessible but may prove fecund, and yield the benefit of new insights.Moreover, values such as novelty and conservativeness contribute to philosophical progress in other ways.These values attach not only to positions but also to frameworks, sets of problems, sets of assumptions, the bases on which we weigh some considerations over others, the methodological proclivities of practitioners, the divisions by which we carve up a field into areas of specialization, and the déformation professionnelle that influences our views on the place of philosophy within society.Locating these features within a historical context, and evaluating their novelty or conservativeness, moves philosophical inquiry forward.Indeed, it is here that these values perhaps play their most prominent role.For example, the influence of a traditional set of questions, orthodox way of framing these questions, or received way of going about answering the questions, can last long after the initial contenders for answers have fallen by the wayside.
There is no straightforward application of these features in assessments. Novelty and conservativeness are not all-or-nothing affairs. A theory may be orthodox in some respects, and radical in others. For example, to return to Kuhn's (2012) discussion, Copernicus rejected the Ptolemaic identification of the earth with a fixed point, while retaining other aspects of Ptolemy's theory, such as the assumption that the other planets travel at a uniform speed. And so the decision whether a theory is, all things considered, novel or conservative may not be a simple calculation. Moreover, these values pull in different directions. For example, conservativeness and novelty are in tension. Should we prefer the more conservative theory or the more novel? There is no one-size-fits-all answer. Generally perhaps, an optimal theory might exhibit a balance among these values. But where the equipoise lies will vary by case, and overall assessment requires careful judgement.
I have noted that the features of novelty and conservativeness are, in part, cognitive values. And much of the trade-off in assessment involves comparison of the respective costs and benefits novelty and conservativeness bring to the promise of a problem set, the accessibility of a methodology, the manner of presentation of theoretic results, and so on. But novelty and conservativeness also can carry societal or ethical value. For example, as Longino (1996) suggests, novelty can be a societal virtue, and conservativeness a societal vice, in a context where the orthodox scientific theories support oppressive political hierarchies. On the other hand, in contexts where research traditions have had salutary ethical or political benefits, conservativeness might be a societal virtue and novelty a societal vice. As Douglas (2009) argues, when medical researchers choose a research agenda, the process requires careful calculation of the likelihood of harms being incurred through error; in such situations, deviation from established patterns of research should be undertaken with caution. Although philosophical research seldom has similarly direct risks to public health, in adjudicating on the relative merits of novelty and conservativeness, we of course should not ignore the broader context within which we practice philosophical inquiry.
Might values such as novelty and conservativeness have a more robust influence on philosophical practice? As we saw earlier, it is uncontroversial that scientific inquiry is value-laden and that such values influence discovery, presentation and application, but controversial that values influence scientific method, theory choice and the justification of results. We discussed in the previous section authors such as Douglas, who resist allowing cognitive and societal values to influence theory choice. Other authors, by contrast, make more strident claims for the role of such values within inquiry. Authors such as Longino hold that features that carry societal value can also influence what we take to be true. And some support, lent to philosophical positions from values such as novelty and conservativeness, may be epistemic or truth conducive. For example, a conservative position arguably is more likely to be true, since it carries the weight of a prolonged period of assessment, at least for those elements which are shared with its historical precedents. The likelihood of a conservative view being true for these reasons is perhaps not dissimilar to the likelihood of a parsimonious theory being true. And as with our discussion above of parsimony, conservativeness would be truth conducive only as a ceteris paribus value: a theory being conservative provides at best a defeasible reason for believing it likely to be true. These are perhaps reasons to view conservativeness as an epistemic virtue and novelty as an epistemic vice. In other contexts, however, the epistemic values may be reversed. For example, if a methodology long has been fruitless, just about any new approach may be more promising, and in such situations conservativeness may be an epistemic vice and novelty an epistemic virtue.
There are still other ways in which novelty and conservativeness may influence explanatory projects. Anderson (1995) argues that certain values can influence what we take to be significant. Theoretic inquiry aims at an organized body of truths. But not every set of truths about a phenomenon constitutes an acceptable theory of that phenomenon. Some sets offer a biased representation of the whole, despite containing no falsehoods, through choices over what to emphasize and what to de-emphasize. Other sets are cluttered with trivial or irrelevant truths. Anderson argues that what constitutes an unbiased representation of the whole is relative to our values, interests and aims. Such considerations influence what truths are germane to our explanatory goals, what truths are not, and so what subset of truths will suffice for satisfying these goals. The values of novelty and conservativeness may also influence standards of significance and completeness. Conservative approaches may tend to accept established choices of significant truths; novel approaches may tend to adopt new subsets of truths as playing a prominent role within our explanatory projects.
To sum up, novelty and conservativeness may carry epistemic value. In one context, the more conservative theory, all else being equal, may be more likely to be true, or the more conservative methodology more likely to be effective. In another context, novelty instead may be truth conducive. Novelty and conservativeness also might influence what subset of the truths we take to be significant and, taken together, sufficient to meet our explanatory goals. And novelty and conservativeness also might carry societal or ethical value in our assessments of the benefits and harms a research agenda might incur. Going forward, however, I will not rely on novelty or conservativeness being conducive to truth, as influencing significance and completeness, or as having societal value. Even if we view novelty and conservativeness as merely cognitive values, the assessment of such values gives historical scholarship a role within philosophical practice that meets our desiderata from Section 2.

Now on to the assessment of novelty and conservativeness, and the role of historical scholarship within this assessment. Notice that conservativeness and novelty are relative to a contrast class. Longino (1996) takes the appropriate contrast set to comprise current theories. But the contrast classes also must include historical theories, in order that assessments of conservativeness and novelty meet minimal standards of adequacy. Consider an assessment of novelty that looked to a contrast class of merely concurrent theories. If the proposed theory were relevantly different from these contemporary theories, but identical to a traditional theory that was, until recently, widely held, few would conclude that the proposed theory is novel.
So the assessment of novelty and conservativeness must include historical positions.But not all historical positions are equally relevant to assessing conservativeness and novelty.We would not view a theory as conservative on the basis of its similarity to an esoteric position, advocated in the remote past, and lacking subsequent influence.What is needed is not a mere list of historical positions, but something that includes a diachronic chronology.This is not to say that there is an easy correlation between time and relevance.A more distal theory might be less relevant for assessing conservativeness and novelty.But on the other hand, a more distal position might exhibit greater influence on delineating the orthodoxy than a more proximate position.For these reasons, a mere chronology is insufficient.
The history should play a role as history: we need not a mere list of positions, nor even a chronology, but a historical narrative, tracing the context in which debates played out, positions were put forward, objections raised, and retorts retorted.Historical scholarship aims to provide a story of development, identifying the influences that help to produce a position, the stated commitments of a view, the reasons given in its support and the criticism a view received.We need, moreover, not a mere doxographical description of what was said, but an assessment of the explicit reasons given for or against a position.We ought to track the implicit reasons for and tacit commitments of a view.We ought to debate the correctness or incorrectness of a position and of its historical criticism.
It may be useful to compare the narrative of the history of philosophy with the causal history leading to any event.When stating the causal history of an event, we choose certain causal factors as salient to explaining why the event occurred, in the context of the act of explanation.We relegate other causal factors to background conditions.Typically, as Mackie (1974, pp. 34-7), Hart and Honoré (1985, pp. 32-44) and others note, these selection effects reflect what is taken, in the context of stating the causal history, to be ordinary and what is taken to be extraordinary.In ordinary contexts, the presence of oxygen is not cited as a cause of a house fire; but there are other contexts where the presence of oxygen is noteworthy and so is viewed as a cause within the causal history leading up to a fire -in a setting where a closed system is intended to be a vacuum, say.
As White (1965) observed, perhaps it is unavoidable that historical scholarship is also selective, and perhaps it is even desirable that it be so. The ideal of interest-free history may be misguided. Selection effects might contribute to the intrinsic value of historical research. An exhaustive study of the history of philosophy is not feasible, and some selection is unavoidable. Historians need to take care to avoid the imposition of anachronism. But the research agenda for historical scholarship also can be influenced by contemporary interests, and this influence need not be pernicious. For example, what is extraordinary in a position and what is ordinary might be easier to distinguish in hindsight. Selection effects in articulating a historical narrative also contribute to the instrumental value of historical scholarship. Contemporary interests can guide research agenda within historical scholarship. The resulting emphasis of certain positions, debates, methodologies, problem sets or traditions, over others, may facilitate the use of a historical narrative for assessing the novelty or conservativeness of a contemporary position.
It might be tempting to try to state these observations with more precision.For example, we might try to distinguish different strengths of conservativeness in the following way.A weakly conservative view conforms to each member of a contrast class, where each member is a contemporary or recent view (and similarly for positions, theories, methodologies, problem sets, and so on).Conformity might be cashed out in terms of resemblance, or consistency.A strongly conservative view, on the other hand, conforms to each member of a contrast class, where each member is a contemporary or historical view; the more inclusive the contrast class -the more views, but also the longer period of time from which views are drawn -the stronger the conservativeness.Different strengths of novelty might be characterized similarly.
The assessment of values, however, resists quantification. And novelty and conservativeness are not easily assessed in terms of difference from, or conformity to, other views. A new proposal might be similar or dissimilar to others in a wide variety of ways. Judgements on whether a proposal is, all things considered, novel or conservative are delicate and highly context sensitive. And indeed, the judgement whether a proposal is, all things considered, novel or conservative may be neither possible nor desirable to make. The application of assessments of novelty or conservativeness need not proceed by first forming an all things considered judgement. It is better to have a nuanced view of the various ways in which a contemporary position is novel or conservative relative to a history.
So to sum up the story so far. A historical narrative contributes to the assessment of conservativeness and novelty. This may subsequently contribute to epistemic and societal evaluations of positions and theories. But it is especially relevant to cognitive evaluations -setting a research agenda, choosing a problem set, selecting a methodology and presenting results. We have wanted a role for scholarship on the history of philosophy within contemporary philosophical practice that meets several desiderata. Although the contribution of a historical narrative is not indispensable to philosophical progress, it is distinctive, and not replicable by nonhistorical considerations. It is a contribution which cannot be delivered by just a list of the contemporary rivals, by a synchronic repository of possible positions or even by a chronology. Moreover, the contribution of a historical narrative is not dependent on contingencies such as the presence of vestiges, questionable contemporary assumptions, or comparable intuitions.
A CASE STUDY
In this penultimate section, I will discuss a case study.The discussion will flesh out the role of values in directing research agenda, and so illustrate one instrumental benefit scholarship on the history of philosophy offers to the contemporary practice of philosophy.I have discussed the specific issues raised in the case study at length in Corkum (2020), and since my aim is to illustrate certain historically informed cognitive evaluations, and not to argue in detail for or against specific claims, I will be brief here.I will lay out the philosophical issue, and then turn to its history.Grounding, a noncausal relation of dependence, has received much recent attention.Adopting Wilson's (2014) terminology, there are a variety of small-g grounding relations that already had been heavily discussed in the literature.This list includes mereological composition, under which the arrangement of parts determines the whole; material constitution, under which the material builds up the hylomorphic compound; set formation, under which a set is determined solely by its members; realization, under which for example a physicalist would hold that a phenomenological state is nothing over and above its corresponding physical state; and microbased determination, under which the microphysical facts exhaustively explain the macrophysical facts.
Is there a relation unifying these small-g relations?And if so, is this relation of grounding a single relation, a genus under which the small-g relations fall as species, a determinable of which the small-g relations are determinate relata, a natural resemblance class or a mere family resemblance class?There is reason to doubt that grounding is just a single particular relation.For example, Bennett (2017) argues against the view that there is a single relation of grounding (or in her preferred terminology, building) operative in all cases of small-g grounding.Since two components, a and b, can build both the mereological sum a+b and the set {a, b}, there must be some difference between mereological composition and set formation which a single relation of grounding would be unable to capture.
But grounding sceptics such as Wilson (2014) and Koslicki (2015, 2020) doubt that there is any relation at all unifying the small-g relations. Grounding enthusiasts looking for unity might appeal to features shared among the small-g relations. For example, grounding is often characterized as a strict partial ordering. But simply being a strict partial ordering is insufficient to be a grounding relation: the < relation is a strict partial ordering on the natural numbers but is not thereby a grounding relation. Moreover, the various small-g relations fail to exhibit shared features. The part relation is arguably transitive, but set formation is not. And the characterization of grounding as a strict partial ordering has become controversial. Talk of grounding then risks appearing to be too coarse grained to be useful.
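For readers who want the formal features just mentioned spelled out, the following is a minimal statement of what being a strict partial order requires; the notation and the set-membership illustration are mine, added only to make the point vivid, and nothing in the argument turns on this particular formulation.

\[
\begin{aligned}
&\text{Irreflexivity:} && \forall x\, \neg(x \prec x)\\
&\text{Transitivity:} && \forall x\, \forall y\, \forall z\, \big((x \prec y \wedge y \prec z) \rightarrow x \prec z\big)\\
&\text{Asymmetry (derivable from the two above):} && \forall x\, \forall y\, \big(x \prec y \rightarrow \neg(y \prec x)\big)
\end{aligned}
\]

The < relation on the natural numbers satisfies these conditions, which is why satisfying them cannot be what makes a relation a grounding relation. And membership, the relation by which members determine a set, fails transitivity: a is a member of {a}, and {a} is a member of {{a}}, yet a is not a member of {{a}}. So the small-g relations do not even share these formal features.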
On the other hand, there are reasons to pursue further talk of grounding.Some theorists hold that there are cases of noncausal determination not easily subsumed under one of the established small-g relations.Grounding also may be a useful umbrella term for discussing generally a family of noncausal determination relations, even if there are not formal features common to all members of that family, even if there is not a single relation operative in every case, even if there is not a genus of which the small-g relations are species, and so on.
Is there grounding in the history of philosophy?To restrict our attention, consider one example from ancient philosophy.In the Euthyphro, Plato has the character of Socrates ask the question, "is that which is pious loved by the gods because it is pious, or is it pious because it is loved by the gods?" (Euthyphro 10a, my translation, based on Cooper in Hamilton and Cairns (1961)).The Euthyphro question is commonly used when illustrating grounding: for some recent examples, see Raven (2013), Bliss and Trogdon (2016), Schaffer (2016) and Maurin (2019), among others.These authors seem to view the Euthyphro question as seeking to distinguish, from among facts about which things are pious and facts about which things are god-loved, the relatively fundamental from the relatively derivative.
Plato views the question somewhat differently.Socrates' interlocutor, the character of Euthyphro, canvasses the answer that "it is because it is pious that it is loved; it is not pious because it is loved" (Euthyphro 10d) but Socrates rejects this answer: "Euthyphro, it looks as if you had not given me my answer-as if when you were asked to tell the nature of the pious, you did not wish to explain the essence of it.You merely tell an attribute of it, namely, that it appertains to piety to be loved by all the gods.What it is, as yet you have not said" (Euthyphro 11a).Socrates seeks a definitional account of piety.Being god-loved is an attribute of the pious, not the essence of what it is to be pious, and so is ill-suited for supplying a definiens.When further attempts in the dialogue to define piety prove fruitless, and merely circle back to the attribute of being god-loved, Euthyphro makes a quick exit, and the dialogue suddenly ends.
Plato's concern in the Euthyphro, then, is to specify criteria for being a definiens, and arguably does not concern a more robust notion of grounding or noncausal determination.Is there other evidence for grounding in the Euthyphro?Correia and Schnieder (2013, pp. 2-4), in a brief but engaging discussion of the Euthyphro, note that Plato's argument draws on features associated with grounding.For example, Euthyphro endorses the claim that if something is pious, it is pious because it is god-beloved, and Socrates concludes that it follows that if something is pious, it is pious because the gods love it.This inference is licensed by the explicitly made claim that if something is god-beloved, it is so because the gods love it, and by the tacit assumption that 'because'-clauses chain; as Correia and Schnieder put it, Plato assumes that grounding is transitive.But this and the other features discussed by Correia and Schnieder are logical characteristics of 'because'-clauses, and so are features shared by grounding and other explanatory notions such as causation.They do not on their own provide conclusive evidence for there being grounding in the Euthyphro.
It is then not obvious that Plato recognized anything like grounding. If we view grounding thinly, as merely the correlate of the in virtue of relation, then it would be fairly uncontroversial that many historical figures tacitly canvass ground. But most contemporary authors view grounding more thickly. For example, many grounding theorists view grounding as a relation among facts. If the ascription of grounding commits one to an ontology of facts, then ascribing grounding to Plato might saddle him with anachronistic commitments.
It may strike the reader that the Euthyphro question is a narrow topic, and that the question whether there is grounding in the Euthyphro is an artifact of the discussion within contemporary metaphysics rather than a question that arises from the interests and methodologies of historians. (Thanks to an anonymous reader for expressing the worry.) But the more general question whether there is grounding in ancient philosophy has received recent attention. A partial survey over the last ten years or so of ancient scholars canvassing or criticizing the ascription of grounding to ancient authors includes discussions of grounding and kindred notions in Plato (Thomas 2014), Aristotle's De Anima (Cohoe 2016), Plotinus (Cohoe 2017), and Aristotelian demonstration (Malink 2020, Sandstad forthcoming). Furthermore, the question whether grounding in ancient philosophy lends the contemporary discussion novel or conservative value cannot be separated from an appreciation of the question of grounding's presence or absence from the long history of philosophy that connects us to antiquity. And historians working on periods after ancient philosophy discuss grounding in medieval philosophy (Cameron 2020, Ward forthcoming), modern philosophy (Embry 2017, Schechtman 2018, Cameron 2020, Puryear 2020), Bolzano (Roski 2017, Roski 2019, Roski and Schnieder 2019, Roski 2020) and Austro-German phenomenology (Mulligan 2020). Notice, however, that the potential influence of historical scholarship on the contemporary discussion of grounding is not limited to historical work that explicitly references grounding. Grounding relations concern dependence and connect the relatively fundamental with the relatively derivative. And so contemporary discussion of grounding intersects with topics on ontological dependence, fundamentality, metaphysical foundationalism, substance ontology, the principle of sufficient reason, and so on. Needless to say, the history of philosophy contains a wealth of discussion on these topics.
A historically informed evaluation of the novelty or conservativeness of grounding may influence decisions regarding the need to critically assess the role of grounding in metaphysics and to respond to grounding scepticism. I do not claim that such historical considerations replace other considerations. But if it can be shown that grounding has a long history in metaphysics, then that goes some way towards mitigating concerns over the intelligibility and applicability of the notion of grounding. Or rather, the burden of proof shifts somewhat towards the grounding sceptic needing to make a case in order to throw shade on a unitary notion of grounding. Or at the very least, scepticism then would be best pitched as exposing a long-standing and perhaps deeply entrenched error, rather than as questioning a recent and untested innovation. On the other hand, if grounding is a novel notion, this might encourage further study, but also make pressing the need to defend the initial plausibility and utility of the notion. Of course, the arguments made by the grounding sceptic might well succeed, and trump the historical considerations, regardless of whether the history shows that grounding is a novel or a conservative notion. But the assessment of novelty and conservativeness influences sociological aspects of contemporary philosophical practice, partly guiding research agenda, argumentative strategy, and the presentation of results. In weighing the worthiness of a research agenda in grounding, and the urgency of responding to grounding scepticism, a historically informed evaluation of the novelty or conservativeness of grounding is one of the several considerations in play.
OBJECTIONS AND REPLIES
To bring the paper to a conclusion, I will briefly respond to a few objections. First, one might object that the proposal that historical scholarship has instrumental value for contemporary philosophy, since it can contribute to the assessment of certain cognitive values, is too narrow; historical scholarship can make a contribution in a wide variety of ways. In response, although I have noted that the role that historical scholarship can play in the assessment of certain cognitive values is understudied and worth closer study, I do not claim that the instrumental value of historical scholarship lies solely in its contribution to the assessment of novelty and conservativeness. Recall, I do not deny that historical scholarship can contribute in other ways. Historical research can indeed expose vestiges, provide alternatives to contemporary assumptions, and calibrate intuitions. Such roles, however, fail to meet certain reasonable desiderata: they are replicable by other methods; they are reliant on contingencies such as the presence of vestiges, questionable assumptions and comparable intuitions; and they arguably fail to explain why historical scholarship as such - the articulation of a historical narrative, rather than a mere list or chronology of positions - is valuable to contemporary philosophical practice. For what it is worth, the assessment of novelty and conservativeness meets these criteria. Notice that I also do not deny that there may be other roles for historical scholarship within contemporary philosophy that meet these desiderata. I do not know what kind of argument could show that it is only through the assessment of novelty and conservativeness that historical scholarship has instrumental value that is nonreplicable, noncontingent and legitimately historical.
Next, a tu quoque objection. I object to the contingency of identifying vestiges, providing alternatives to questionable contemporary assumptions, or drawing on comparable intuitions. But vestiges, challengeable contemporary assumptions and comparable intuitions are pervasive. So why care that history is merely contingently useful, if it is very often useful in these ways? Moreover, the objection might continue, the novelty or conservativeness of a contemporary position, relative to historical views, is also highly contingent.
In response, let me repeat that the exposure of vestiges, the provision of alternatives to questionable contemporary assumptions, the gauging of intuitions, and so on, are useful and commonly available roles for historical scholarship. Moreover, historical scholarship plays a role in the determination whether there are vestiges, challengeable assumptions or comparable intuitions, even when the answer is no. But if it turns out that there are no vestiges, say, historical scholarship plays no further role. Determining whether there are vestiges does not in itself contribute to contemporary philosophical practice; rather, it determines whether a contribution can be made. By contrast, although it is contingent which of novelty or conservativeness a contemporary position exhibits, and to what degree, it is not contingent that it has value along these parameters.
Finally, a weak tea objection: the role for historical scholarship canvassed in this paper is excessively modest, contributing only to the assessment of novelty and conservativeness, and so merely influencing the sociology of philosophy.
In response, recall that the values of novelty and conservativeness may not be solely cognitive values. As discussed above, in the right context, these values may be truth conducive. And so the impact of historical scholarship might extend beyond sociological factors such as the choice of research agendas, methodologies and problem sets. But moreover, let us not underestimate the impact sociological factors have on philosophical practice. Readers might be inclined to view such considerations as peripheral to philosophy. We are concerned with what positions are true, what arguments are sound, what objections are pressing - we are less concerned with the process by which true positions, sound arguments, or pressing objections are developed. One of my aims in this paper has been to draw attention to the sociology of philosophy. Progress is often made through the setting and revising of research agendas, methodologies and problem sets, through the choice of argumentative strategy, and through decisions concerning the presentation of results. Our case study in the previous section illustrated the influence questions of novelty and conservativeness can have on these factors. So even were historical scholarship of instrumental value solely due to these aspects of philosophical practice, it would not be of merely peripheral value.
ACKNOWLEDGEMENTS
Thanks to audiences at the University of Calgary, Uppsala University and the American Philosophical Association Central Meeting, including John Baker, Susan Brower-Toland, Ekrem Cetinski, Richard Cross, Matti Eklund, Jeremy Fantl, Maarten Heenhagen, Noa Latham, Fr. Raphael Mary, Tobias Olsson, Robert Pasnau, Anne Siebels Peterson, Christian Pfeiffer, Jonathan Shaheen, C. Kenneth Waters, Ron Wilburn, and Nicole Wyatt; and thanks especially to Caleb Cahoe, David Liebesman, Howard Nye, and Pauliina Remes.
|
2023-05-28T15:06:33.486Z
|
2023-05-25T00:00:00.000
|
{
"year": 2023,
"sha1": "abd0ced52cb2d6c51873ed8416f590b5b0dc51f0",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/phpr.12990",
"oa_status": "CLOSED",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "0a484892e390586dff621f34ab1ce5597284d253",
"s2fieldsofstudy": [
"Philosophy"
],
"extfieldsofstudy": []
}
|
214364042
|
pes2o/s2orc
|
v3-fos-license
|
Morphological characteristics and classification of soils formed from acidic sedimentary rocks in North Kalimantan
The morphology of acidic mineral soils reflects the evolution that occurs in the soil body during soil formation, which is predominantly influenced by the parent material and climate. Determination of the diagnostic epipedon and endopedon horizons through description and interpretation of soil profile properties is the basis of soil classification, which provides knowledge of soil properties, capability, and utilization. This research was a descriptive study carried out in North Kalimantan using a survey method at two locations differing in rock type, namely Location 1 (clay sand sedimentation) and Location 2 (sandstone). The description and interpretation of soil morphology were carried out by observing the properties of the soil profile and then determining the diagnostic horizons and soil classification with reference to the Field Observation Guidelines and the Key to Soil Taxonomy. The results showed that, based on its morphological characteristics, the soil in Location 1 had brighter colors, lower organic C, higher Al solubility, and thicker horizons than the soil in Location 2, while the texture, structure, pH, and consistency were the same. In both soil profiles, an umbric epipedon and a cambic endopedon were found; therefore, the soil was classified as Inceptisols. At the sub-order level, Location 1 was classified as Udepts and Location 2 as Aquepts. At the great group level, the soil in Location 1 was classified as Hapludepts and the soil in Location 2 as Endoaquepts. This type of soil has low natural fertility; therefore, efforts are needed to improve its capability and utilization through amelioration and fertilization technology.
Introduction
Dryland in the Kalimantan region is dominated by mineral soils formed from acidic sedimentary rocks. Their extent is estimated to reach an area of 30.15 million ha, or 57.22% of the total area of Kalimantan Island (Puslittanak, 2000; Suharta, 2010). The characteristics of acidic sedimentary rocks vary due to their formation process, which depends on the characteristics of the component materials, the process or model of deposition, and the environmental conditions of the deposition area. Physiographically, mineral soils from acidic sedimentary rocks are spread across tectonic landforms, which are formed as a result of geomorphic processes from inside (endogenous/hypogenous) or from outside (exogenous/epigenous), such as in the formation of forces, folds, faults, and/or a combination thereof (Marsoedi et al., 1997). The relief or slope formed in this landform is closely related to geomorphic processes and its lithological (structural) properties.
Soil morphology comprises the soil properties observed and studied in the field (Hardjowigeno, 1993). The study of soil morphology is very important for obtaining a picture of the changes, or evolution, that occur in the soil body, through the description and interpretation of soil profile properties in the form of diagnostic epipedons and endopedons, which provide the initial information for soil classification. Soil classification is important for providing information on soil properties and productivity. Land use based on soil characteristics and capability will be more productive and can minimize wasteful land use, which would otherwise reduce the sustainability of land resources.
The natural fertility of acidic mineral soils depends strongly on the mineral composition of the parent material, or the soil nutrient reserves. The higher the soil nutrient reserves, the higher the level of soil fertility. Nutrients in the soil depend strongly on the composition, amount, and type of minerals. Marginal soils from acidic sedimentary rocks have low mineral or nutrient reserves. Marginal soils in Kalimantan are characterized by soil textures ranging from sand to clay, because the acidic sedimentary rocks are formed from two types of parent material, namely coarse-textured sandstone and fine-textured claystone or siltstone (Suharta, 2010).
Each soil has unique characteristics; consequently, data on the morphology and on the diagnostic epipedon and endopedon are needed to determine the overall development of the soil in balance with its environment. This study aimed to determine the morphological characteristics and classification of soils formed from acidic sedimentary rocks in North Kalimantan.
The condition of soil and environment
This research was a descriptive exploratory study, carried out using a survey method in the field and supported with data from laboratory analysis. The research was conducted at two locations classified on the basis of differences in soil parent material. The first location was East Tarakan District, located at 3°23'28" N and 117°32'56" E, with soils composed of clay sand sedimentation. The second location was West Tarakan District, located at 3°19'4" N and 117°37'45" E, with soils composed of sandstones (Figure 1). The soil and environmental variables observed were the average annual rainfall, average monthly evapotranspiration, dominant vegetation, altitude (asl), slope, and soil moisture regime.
Soil morphology observation
Observation of soil morphology in each profile was based on the Field Observation Guidelines and the Key to Soil Taxonomy (1999). Profile pits measuring 1.5 m x 1.5 m were made perpendicular to the soil body down to the groundwater table or parent material. Observation of soil morphological characteristics in each layer/horizon of each profile included solum thickness, layer boundary, color, texture, structure, consistency, and roots. These data were used to determine the diagnostic horizons (epipedon and endopedon) as the basis for soil classification. Soil classification followed the Key to Soil Taxonomy (1999) down to the great group level.
Laboratory analysis
Soil samples were taken from each horizon of the two soil profiles to analyze the soil chemical and physical properties. Soil samples were dried and sieved with a 2 mm diameter sieve. Texture was measured using the pipette method (Black et al., 1965), bulk density using the wax method (Blake and Hartge, 1986), particle density using the pycnometer method (Blake and Hartge, 1986), pH H2O (1:2.5) using a pH meter (McLean, 1982), organic C using the Walkley-Black method (Nelson and Sommers, 1982), cation exchange capacity (CEC) and available K, Ca, Mg, and Na using NH4Cl extraction (Hanudin, 2000), total N using the Kjeldahl/titration method (Balittanah, 2009), and total P and K content using 25% HCl extraction (Balittanah, 2009).
Environmental condition of the research site
The average annual rainfall and average monthly evapotranspiration are 3900 mm yr-1 and 41.65 mm month-1, respectively. The soil moisture regime is categorized as udic, since rainfall exceeds evapotranspiration. The soils at the study site are formed from different parent materials, namely clay sand and sandstone sediments, classified as acidic sedimentary rocks (Table 1). The terrain of the research site is undulating to hilly. The effective depth of the soil is shallow to deep: the soil in lowland areas is generally deep, while on sloping areas it is shallow. High rainfall causes water movement on steep slopes and the leaching of soil particles, including organic matter, nutrients, and other soil components (Arsyad, 2000).
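The udic determination above rests on a simple comparison of water supply and demand. As a minimal illustration (not the full Soil Taxonomy key, which also involves dry-period lengths and soil temperature), the reported figures can be checked with a short routine; the function name and the annualization of the monthly evapotranspiration value are assumptions made only for this sketch.

# Minimal sketch: compare annual rainfall with annualized evapotranspiration to flag a
# udic-like moisture regime, following the simplified criterion used in the text above.
def moisture_regime(annual_rainfall_mm: float, monthly_evapotranspiration_mm: float) -> str:
    """Return a label based on whether rainfall exceeds annualized evapotranspiration."""
    annual_et_mm = 12 * monthly_evapotranspiration_mm
    if annual_rainfall_mm > annual_et_mm:
        return "udic (rainfall exceeds evapotranspiration)"
    return "not udic by this simplified test"

# Values reported for the study site: 3900 mm yr-1 rainfall, 41.65 mm month-1 evapotranspiration.
print(moisture_regime(3900, 41.65))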
Soil morphology
The morphological characteristics of the two soil profiles are presented in Table 2. The soil thickness differed between the two profiles. The thickness of Profile 1 was more than 112 cm; the parent material was found at this depth. The thickness of Profile 2 was more than 59 cm, and the water table was found at this depth. There was no organic horizon in either profile. The A horizons tended to be dark (dull yellowish-brown, 10YR 4/3, and brownish-black, 10YR 3/2). In moist conditions, the colors of the BW horizons were dull yellowish-brown (10YR 6/4) and yellowish-brown (10YR 6/4), gradually becoming brighter with depth (Figures 1 and 2). The soil texture was dominated by sand in the topsoil; the BW1 horizon was silt loam, becoming coarser (loamy sand) in the C horizon. The dominance of the sand fraction was caused by intensive leaching, driven by the high annual rainfall of around 3900 mm. Although clay increased in the BW horizon, this did not indicate an argillic horizon, because the requirements for an argillic horizon were not met. Suharta (2010) stated that coarse soil texture (sand dominance) leads to a low ability of the soil to retain water and nutrients, and the soil becomes prone to drought and sensitive to erosion. In general, the soil structure of the A and BW horizons (BW1 and BW2) was crumb, and that of the B/C and C horizons was granular. The soil consistency in moist conditions was loose in the BW1, BW2, B/C, and C horizons. Roots in the two profiles were few and were absent in the deeper horizons. The scarcity of roots indicated that the shrub vegetation above the ground was sparse.
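The texture classes named in the profile descriptions (sand, loamy sand, silt loam) follow the USDA texture triangle. A partial sketch of that lookup is given below for orientation only: it encodes just these three classes, the example sand/silt/clay percentages are hypothetical, and the full triangle has further classes that this sketch does not handle.

# Partial sketch of USDA texture classes mentioned in the profile descriptions.
# Percentages should sum to ~100; classes outside this subset return None.
def partial_usda_texture(sand: float, silt: float, clay: float):
    if sand >= 85 and (silt + 1.5 * clay) < 15:
        return "sand"
    if 70 <= sand <= 91 and (silt + 1.5 * clay) >= 15 and (silt + 2 * clay) < 30:
        return "loamy sand"
    if (silt >= 50 and 12 <= clay <= 27) or (50 <= silt <= 80 and clay < 12):
        return "silt loam"
    return None  # other texture classes are outside this sketch

# Hypothetical horizon compositions, for illustration only:
print(partial_usda_texture(sand=90, silt=5, clay=5))    # sand
print(partial_usda_texture(sand=80, silt=12, clay=8))   # loamy sand
print(partial_usda_texture(sand=25, silt=60, clay=15))  # silt loam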
Chemical and physical properties of the soil
The chemical and physical properties of the soil are presented in Tables 3 and 4. The soil reaction was acid to very acid. The P and K contents were low to very low, while Al saturation was high to very high. Available Fe was also high to very high.
The low soil pH was caused by the acidic sedimentary parent materials. The acidity of the soil caused the very high contents of exchangeable Al and Fe (Al-dd and Fe-dd), which affected the P content in the soil. Suharta (2010) stated that mineral soils formed from acidic sedimentary rocks in the Kalimantan region have an acid to very acid reaction, which contributes to a high Al content and reduces available P (Yatno et al., 2000). The level of soil salinity was low to moderate; therefore, salinity was not a problem.
The total N and organic C contents at the two sites varied from low to very low. The N and organic C contents tended to decrease with depth. This is the common situation, in which the organic C content in the upper layer is higher than in the lower layers (Suharta, 2010). The soil nitrogen content was in line with the soil organic C content, because part of the soil nitrogen is derived from organic matter. The presence of organic matter in acidic mineral soils is largely determined by the degree of soil weathering, the organic matter content, and the types of minerals in the soil. The very low organic C content was caused by a very rapid decomposition process and the dominance of quartz minerals (about 98%). This was also stated by Suharta & Prasetyo (2008) for acidic mineral soils in Riau, which were dominated by kaolinite, goethite, and quartz.
The coarse soil texture at the two research sites results in a low ability of the soil to retain water and nutrients, making it prone to drought and erosion. There is a close relationship between coarse soil texture and soil chemical fertility. The higher the sand fraction content in acidic sedimentary mineral soils in Kalimantan, the lower the contents of organic C, N, P, and K, while Al is higher (Suharta, 2007). Conversely, the higher the clay fraction content, the higher the contents of P and K, exchangeable bases (Ca, Mg, and Na), soil CEC, and base saturation. Low organic matter content in the tropics is common due to the rapid mineralization process.
The exchangeable bases of the soil at both locations were low to very low, the cation exchange capacity (CEC) was low, and the base saturation was very low. According to Suharta (2010), the exchangeable bases (Ca, Mg, K, and Na) in acidic mineral soils are classified as low to very low, due to excessive leaching and/or parent materials with poor basic cation content. The exchangeable bases in the upper horizon were higher than in the lower horizon, indicating the accumulation of plant residues in the upper horizon (Suharta and Prasetyo, 2008), which decompose into organic matter (Quideau et al., 1999).
The CEC value is influenced by the types of minerals and the organic matter in the soil. Prasetyo et al. (2001) stated that soils developed from acidic sedimentary rocks are dominated by kaolinite, which naturally has a low CEC value. The presence of organic matter is therefore very important in increasing the CEC value. The base saturation of acidic mineral soils in Kalimantan is relatively low to very low, whereas the Al saturation is classified as very high and increases with solum depth (Suharta, 2010).
Soil classification
Referring to the Soil Taxonomy classification system (USDA, 1999), soil classification was carried out based on the diagnostic epipedon and endopedon. Based on the interpretation of the environmental conditions and soil morphology, supported by the results of laboratory analysis, the upper diagnostic horizon (epipedon) of the two soil profiles was classified as an umbric epipedon, with the following characteristics:
a. Color value was 3 or less in moist condition and 5 or less in dry condition;
b. Color chroma was 3 or less;
c. Chroma in the C horizon was at least 1 unit lower, or a minimum of 2 units lower, than in the umbric horizon;
d. Base saturation was less than 50% in part or all of the epipedon;
e. Soil organic carbon content was 0.6% higher than in the C horizon.
The subsurface diagnostic horizon (endopedon) was classified as a cambic horizon, with the following characteristics:
a. Aquic conditions within 50 cm of the soil surface, or artificial drainage, with a soil structure, colors that do not change when exposed to air, and a dominant moist color on ped surfaces or in the matrix with a value of 4 or more and a chroma of 1 or less;
b. No aquic conditions within 50 cm of the soil surface (or artificially drained), with a soil structure and higher chroma, higher color value, and higher clay content than the upper horizon;
c. Properties that do not meet the requirements for other diagnostic horizons.
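As a rough illustration of how the umbric criteria listed above can be screened against profile data, the following sketch encodes only the numeric thresholds stated in the text (color value and chroma, base saturation, and the organic carbon contrast with the C horizon). The function name and the example values are hypothetical, and the complete Soil Taxonomy key contains additional requirements not reproduced here.

# Minimal sketch of the umbric-epipedon screening described in the text above.
def looks_umbric(value_moist, value_dry, chroma, base_saturation_pct,
                 oc_epipedon_pct, oc_c_horizon_pct):
    dark_enough = value_moist <= 3 and value_dry <= 5 and chroma <= 3   # criteria a and b
    low_bases = base_saturation_pct < 50                                 # criterion d
    oc_contrast = (oc_epipedon_pct - oc_c_horizon_pct) >= 0.6            # criterion e
    return dark_enough and low_bases and oc_contrast

# Hypothetical horizon data (illustrative values, not taken from the profiles above):
print(looks_umbric(value_moist=3, value_dry=4, chroma=2,
                   base_saturation_pct=20, oc_epipedon_pct=1.5, oc_c_horizon_pct=0.4))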
Based on the surface (epipedon) and subsurface (endopedon) diagnostic horizons, the soil at both locations was classified as Inceptisols. At the sub-order level, the soil in Location 1 was classified as Udepts due to its udic soil moisture regime. Meanwhile, the soil in Location 2 was classified as Aquepts because, at a depth of 50 cm from the surface, the mineral soil had aquic conditions during normal years (or was artificially drained). At the great group level, the soil in Location 1 was classified as Hapludepts because of the limited development of the
|
2020-01-02T21:45:13.430Z
|
2019-12-30T00:00:00.000
|
{
"year": 2019,
"sha1": "ad82b98ae7ec75810eca1b32c339deedbc52324c",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1755-1315/393/1/012083/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "7c2e89ddb4cab2fda03d181eb13c35f39cc80f7e",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Geology"
]
}
|
247776627
|
pes2o/s2orc
|
v3-fos-license
|
Incidence, Clinical Characteristics, and Predictors of Cardiovascular Immune-Related Adverse Events Associated with Immune Checkpoint Inhibitors
This article evaluates the incidence, clinical characteristics, and predictors of cardiovascular immune-related adverse events (CV-irAEs), focusing on the feasibility of serial cardiac monitoring using a combination of B-type natriuretic peptide, cardiac troponin T, and electrocardiogram for the prediction of future symptomatic (grade ≥2) CV-irAEs
Introduction
Immune-related adverse events (irAEs) associated with immune checkpoint inhibitors (ICIs) are side effects related to enhanced immune system activity caused by ICIs, which can affect multiple organs, including the skin, gastrointestinal tract, liver, lungs, endocrine system, renal system, musculoskeletal system, and cardiovascular system. 1 The recently published clinical practice guideline of the American Society of Clinical Oncology (ASCO) defined irAEs and their grading, and indicated that cardiovascular immune-related adverse events (CV-irAEs) include not only myocarditis but also pericarditis, arrhythmia, heart failure, and vasculitis. 2 Moreover, CV-irAEs defined by the ASCO guidelines included abnormal cardiac biomarker results, such as cardiac troponin and abnormal electrocardiogram (ECG) findings, without any concurrent cardiovascular symptoms (asymptomatic CV-irAEs, grade 1). However, considering that most previous reports on CV-irAEs were retrospective and focused mainly on symptomatic ICI-related acute myocarditis, thus possibly overlooking asymptomatic cases, CV-irAEs may have been underdiagnosed and underreported. 3,4 As such, the actual incidence of CV-irAEs, including asymptomatic cases, remains unclear. Therefore, a pilot registry was established to prospectively evaluate the incidence, clinical characteristics, and predictors of CV-irAEs, including asymptomatic ones, in real-world patients. Furthermore, this registry aimed to determine the feasibility of serial monitoring (prospective screening) using a combination of B-type natriuretic peptide (BNP), cardiac troponin T, and ECG for the early detection of future symptomatic CV-irAEs.
Patient Selection
This single-center prospective observational study included patients with non-small-cell lung cancer (NSCLC) who underwent ICI monotherapy. A total of 129 consecutive patients with NSCLC who received ICI monotherapy at the Sendai Kousei Hospital between May 2018 and December 2019 were enrolled. There were no patients who failed to receive serial cardiac monitoring or refused to participate during the study period. Patients underwent the following ICI monotherapy every 2-3 weeks until documented disease progression, intolerable toxicity levels, or termination by the physician: nivolumab (3 mg/kg or 240 mg every 2 weeks), pembrolizumab (200 mg every 3 weeks), atezolizumab (1200 mg every 3 weeks), and durvalumab (10 mg/kg every 2 weeks). No patients received concurrent chemotherapy or radiotherapy during serial cardiac monitoring. The study protocol was approved by the Institutional Review Board of the Sendai Kousei Hospital (approval number, 30-8; approval date, May 16, 2018), and all patients provided written informed consent prior to their enrollment. All procedures performed in this study were in accordance with the ethical standards of the institutional research committee and the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards. The study was registered at the University Hospital Medical Information Network Clinical Trials Registry, as accepted by the International Committee of Medical Journal Editors (No. UMIN000032729).
Endpoint Definition
The baseline laboratory data, including BNP and cardiac troponin T levels (qualitative measurement), were collected before ICI treatment. The cardiac troponin T level was measured using a rapid bedside assay with a cutoff value of 0.1 ng/ mL (TropT, Roche Diagnostics, Mannheim, Germany). The ECG and transthoracic echocardiography data before ICI treatment were also obtained. Serial cardiac monitoring included BNP, cardiac troponin T, and ECG, all of which were performed at baseline (before ICI treatment) and once every 4-6 weeks depending on the timing of ICI administration. Moreover, additional measurements of BNP, cardiac troponin T, or ECG were conducted other than the scheduled ones at the discretion of the attending pulmonary physicians. Each ECG finding was reviewed by both an attending pulmonary physician and a dedicated cardiologist. When BNP elevation (≥200 pg/mL), cardiac troponin T conversion, or new-onset morphological ECG abnormalities were documented during ICI treatment, cardiology consultation was always considered and, if necessary, transthoracic echocardiography and cardiac catheterization, including coronary angiography and endomyocardial biopsy, were performed. On the basis of the definition by the ASCO (Supplementary Table S1), grade 1 CV-irAEs (asymptomatic CV-irAEs) were characterized as a composite of BNP elevation ≥200 pg/mL, cardiac troponin T conversion, or new-onset morphological ECG abnormalities without any cardiovascular symptoms. Grades 2 and 3 CV-irAEs were characterized as a composite of BNP elevation to ≥200 pg/mL, cardiac troponin T conversion, or new-onset morphological ECG abnormalities with mild (grade 2) or more severe (grade 3) cardiovascular symptoms. Furthermore, grade 4 CV-irAEs were characterized as a composite of BNP elevation to ≥200 pg/mL, cardiac troponin T conversion, or new-onset morphological ECG abnormalities with life-threatening conditions or severe conditions requiring immediate IV medication or intervention. The difference between grades 2 and 3 events as defined by the ASCO guidelines is ambiguous because both definitions are based on subjective judgment. Therefore, to avoid ambiguity, we did not distinguish between grade 2 and 3 CV-irAEs. Instead, we merged "grade 2," "grade 3," and "grade 4" events into the "grade ≥2" events category. BNP elevation ≥200 pg/mL was defined as the BNP level that is lower than 200 pg/mL at baseline, which subsequently increased to 200 pg/mL or more after ICI treatment initiation. This was intended to exclude cancerrelated BNP elevation as much as possible because plasma BNP levels have been reported to be elevated, although usually not exceeding 200 pg/mL, due to cancer-related inflammation in advanced cancer patients. 5 Cardiac troponin T conversion was defined as a negative troponin T-test result at baseline that subsequently turned positive at least once after ICI treatment initiation. New-onset morphological ECG abnormalities included any morphological changes with respect to the baseline ECG results. Patients on ICI treatment with concurrent diseases that could potentially explain the laboratory abnormalities, including myocardial oxygen supply/ demand mismatch in the setting of sepsis, anemia, and hypoxia, and patients with concurrent diseases that could cause false-positive troponin T results (eg, rhabdomyolysis) were not counted as developing CV-irAEs. 
Moreover, patients with BNP levels elevated to ≥200 pg/mL or positive cardiac troponin T-test results prior to beginning ICI treatment were also not counted as developing CV-irAEs, even if they had either positive troponin T-test results or BNP levels elevated to ≥200 pg/mL during ICI treatment. Additionally, if the patients on ICI treatment developed the same ECG abnormalities as those already documented before ICI treatment, eg, paroxysmal atrial fibrillation and negative T-wave, they were not counted as developing CV-irAEs. Symptomatic (grade ≥2) CV-irAEs included (i) biopsy-proven acute myocarditis; (ii) acute decompensated heart failure, the cardiogenic shock of unknown etiology, or heart failure deterioration in at least one New York Heart Association functional class; (iii) life-threatening arrhythmias, including advanced or complete atrioventricular block and ventricular tachycardia or fibrillation; (iv) non-lethal arrhythmias causing cardiovascular symptoms, including atrial fibrillation or flutter; and (v) cardiac death, new-onset acute coronary syndromes, or any coronary revascularization procedure. irAEs, except for CV-irAEs, were assessed by an attending pulmonary physician and a dedicated nurse specialist every 2-3 weeks throughout the course of ICI treatment and were graded according to the ASCO definition. 2
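The grading logic described above can be summarized as a simple decision rule: a new post-treatment abnormality (BNP rising from below to at least 200 pg/mL, troponin T conversion, or a new morphological ECG change) counts as a CV-irAE only when no confounding condition explains it, and the grade then depends on whether cardiovascular symptoms are present. The sketch below is only an illustration of that rule; the field names and data structures are assumptions, not the registry's actual data model.

# Minimal sketch of the CV-irAE grading rule described in the endpoint definition above.
def cv_irae_grade(baseline, follow_up, has_cv_symptoms, has_confounder):
    """Return 0 (no event), 1 (asymptomatic CV-irAE), or 2 (symptomatic, i.e. grade >= 2)."""
    bnp_event = baseline["bnp_pg_ml"] < 200 and follow_up["bnp_pg_ml"] >= 200
    troponin_event = (not baseline["troponin_t_positive"]) and follow_up["troponin_t_positive"]
    ecg_event = follow_up["new_ecg_abnormality"] and not baseline["ecg_abnormality_preexisting"]

    if has_confounder:  # e.g. sepsis, anemia, hypoxia, rhabdomyolysis explaining the findings
        return 0
    if not (bnp_event or troponin_event or ecg_event):
        return 0
    return 2 if has_cv_symptoms else 1

# Example: troponin T conversion without symptoms -> asymptomatic (grade 1) event.
baseline = {"bnp_pg_ml": 80, "troponin_t_positive": False, "ecg_abnormality_preexisting": False}
follow_up = {"bnp_pg_ml": 120, "troponin_t_positive": True, "new_ecg_abnormality": False}
print(cv_irae_grade(baseline, follow_up, has_cv_symptoms=False, has_confounder=False))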
Statistical Analysis
Continuous variables were presented as medians [interquartile ranges (IQRs)], while categorical variables were presented as frequencies (percentages). Univariate and multivariate Fine and Gray regression models for grade ≥1 CV-irAEs were performed to evaluate the associations of baseline clinical, laboratory, and treatment characteristics with grade ≥1 CV-irAEs, considering non-cardiovascular death as a competing event. Predictors of grade ≥1 CV-irAEs were initially screened using a univariate Fine and Gray regression model. The level of significance for the univariate screening regressions was set at a P-value of <.05. Thereafter, a multivariate Fine and Gray regression model using variables with a P-value of <.05 in the univariate analysis was established to estimate hazard ratios (HRs) and 95% CIs. Similarly, univariate and multivariate Fine and Gray regression analyses were also performed to assess the factors related to grade ≥2 CV-irAEs. All statistical analyses were performed using the EZR software (version 1.53; Saitama Medical Center, Jichi Medical University; http://www.jichi.ac.jp/saitama-sct/SaitamaHP.files/statmedEN.html), a graphical user interface for R (The R Foundation for Statistical Computing, Vienna, Austria), with a two-sided P-value of <.05 indicating statistical significance.
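For orientation, the competing-risks quantity underlying this analysis (the cumulative incidence of CV-irAEs with non-cardiovascular death as a competing event, of the kind plotted in Fig. 1) can be estimated nonparametrically with a few lines of code. The sketch below is not the Fine and Gray regression itself, which requires a dedicated implementation such as the R routines used via EZR, and the toy data are invented purely for illustration.

# Minimal sketch of a nonparametric cumulative-incidence estimate with a competing event.
# Event codes: 0 = censored, 1 = CV-irAE (event of interest), 2 = competing non-cardiovascular death.
import numpy as np

def cumulative_incidence(times, events, cause=1):
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    event_times = np.unique(times[events > 0])
    surv = 1.0   # overall Kaplan-Meier survival just before the current time
    cif = 0.0
    out = []
    for t in event_times:
        at_risk = np.sum(times >= t)
        d_cause = np.sum((times == t) & (events == cause))
        d_all = np.sum((times == t) & (events > 0))
        cif += surv * d_cause / at_risk   # cause-specific hazard weighted by overall survival
        surv *= 1.0 - d_all / at_risk     # update overall survival after time t
        out.append((t, cif))
    return out

# Illustrative toy data (days, event code); not the registry data.
times = [44, 72, 90, 134, 216, 255, 300, 386]
events = [1, 1, 2, 0, 1, 0, 2, 0]
for t, ci in cumulative_incidence(times, events):
    print(f"day {t:.0f}: cumulative incidence = {ci:.2f}")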
The observed clinical events and abnormal laboratory findings after ICI treatment initiation among the 129 patients analyzed herein are presented in Table 1. BNP elevation ≥200 pg/mL only after ICI treatment initiation, cardiac troponin T conversion, and any new-onset morphological ECG abnormalities were observed in 12%, 10%, and 11% of the patients, respectively. A total of 41 patients had BNP elevation (≥200 pg/mL), cardiac troponin T conversion, or new-onset morphological ECG abnormalities during ICI treatment. Among them, the six patients who had concurrent diseases that could potentially cause abnormal increases in BNP level, cardiac troponin T conversion, or new-onset ECG abnormalities were identified during ICI treatment and were excluded from the grade ≥1 CV-irAE group. Of the six patients excluded, two patients with cardiac troponin T conversion experienced sepsis and rhabdomyolysis, respectively, three patients with paroxysmal atrial fibrillation experienced hypoxia secondary to advanced lung cancer, and one patient with BNP elevation developed severe anemia (possible anemia-induced heart failure). Consequently, 35 (27%) patients developed any grade ≥1 CV-irAEs with a median time of onset of 72 (IQR 44-216) days after ICI treatment initiation over a median follow-up duration of 255 days (IQR 134-386) (Fig. 1A). Additionally, 13 (10%) patients developed any grade ≥2 CV-irAEs with a median time of onset of 141 (IQR 69-234) days after ICI treatment initiation (Fig. 1B). Among patients with CV-irAEs, no cardiac death occurred during the follow-up period. Table 2 reveals the univariate associations among the baseline clinical, laboratory, and treatment characteristics between NSCLC patients with and without grade ≥1 CV-irAEs who underwent ICI treatment based on Fine and Gray competing risk analysis. Accordingly, significantly more patients with grade ≥1 CV-irAEs had prior histories of acute coronary syndrome and heart failure hospitalization compared to patients without CV-irAEs (Table 3). Negative T-wave, spirometry-defined chronic obstructive pulmonary disease or emphysema on CT, and performance status ≥2 were not independently associated with grade ≥1 CV-irAEs. By contrast, prior acute coronary syndrome (adjusted HR Tables S3 and S4).
Discussion
To the best of our knowledge, this is the first prospective observational study that evaluated the incidence, clinical characteristics, and predictors of CV-irAEs in patients with NSCLC receiving ICIs. The lower risk of reporting bias compared to retrospective studies and the availability of sufficient information regarding cardiovascular background can be considered strengths of the present study. The main findings presented herein are as follows: (i) grade ≥1 CV-irAEs as defined by the ASCO guidelines were more common than previously recognized; (ii) patients with prior acute coronary syndrome and prior heart failure hospitalization were significantly associated with grade ≥1 CV-irAEs, and patients who achieved disease control were also significantly associated with grade ≥1 CV-irAEs; and (iii) patients with preceding grade 1 CV-irAEs had an approximately six times higher risk of subsequent onset of grade ≥2 CV-irAEs compared to patients without preceding grade 1 CV-irAEs.
First, our prospective observational study showed that grade ≥1 CV-irAEs as defined by the ASCO guidelines were more common than appreciated. Previous studies have reported that the incidence rate of symptomatic ICI-related myocarditis is in the range of 0.06%-3.30%. 3,4 However, considering the retrospective nature of these studies, and that there were several case reports discussing the occurrence of asymptomatic (subclinical) myocarditis following ICIs, 6-8 certain CV-irAEs may have remained unidentified or unreported. In the context of underdiagnosis of CV-irAEs, detecting CV-irAEs, including asymptomatic ones, is imperative in the preparation for a potential rechallenge with ICIs. This is especially true considering that certain recent studies revealed a 28.8%-55.0% recurrence rate of the same or different grade irAE following rechallenge. 9,10 Furthermore, one case report showed that rechallenge with ICIs induced a treatment-refractory exacerbation of myocarditis. 8 Thus, physicians should be sufficiently vigilant to avoid overlooking mild or even asymptomatic CV-irAEs.
Another key aspect of the ASCO definition of CV-irAEs is its inclusion of heart failure. Indeed, our study revealed that 5% of the patients receiving ICIs exhibited acute decompensated heart failure or heart failure deterioration in at least one New York Heart Association functional class. Although the present study considers the lack of cardiovascular magnetic resonance to have caused undiagnosed heart failure due to ICI-related myocarditis, patients with heart failure receiving ICIs should be afforded a high index of clinical suspicion for CV-irAEs, considering one case report suggesting that nivolumab promoted dilated cardiomyopathy without inflammatory change. 11 Of note, CV-irAE was the most frequent adverse event (27%) among the observed irAEs. This result is contradictory to our previous study, where skin reaction was the most frequent adverse event, 12 and this can be explained by the fact that serial cardiac monitoring was not performed in our previous study and a dedicated cardiologist was not involved in the diagnosis of CV-irAEs; consequently, CV-irAEs were less recognized in our previous study. Moreover, underdiagnosis of skin or gastrointestinal irAEs may also have occurred in the present study because the respective diagnoses were based mainly on clinical symptoms.
Second, patients with prior acute coronary syndrome and prior heart failure hospitalization were significantly associated with grade ≥1 CV-irAEs, and patients who achieved disease control were also significantly associated with grade ≥1 CV-irAEs. Regarding the predictors associated with ICI-related myocarditis alone, dual ICI therapy and diabetes were more common in ICI-related myocarditis cases. 3 Moreover, female sex and age ≥75 years were also associated with an increased risk of ICI-related myocarditis. 13 However, no reports have indicated the predictors that might affect the incidence of CV-irAEs as defined by the ASCO guidelines, which include not only myocarditis but also pericarditis, arrhythmia, heart failure, and vasculitis. 2 Thus, our study provides the first evidence that there are several predictors for CV-irAEs that could improve the risk stratification of CV-irAEs. NSCLC patients with such pre-existing cardiac diseases could be exposed to a higher risk of developing CV-irAEs upon ICI treatment and may benefit more from serial cardiac monitoring. Additionally, our study suggested that ICIs cause more symptomatic cardiovascular toxicity in patients with NSCLC who had a PS ≥ 2 than in those who had a PS ≤ 1. Moreover, PS at the beginning of treatment retained prognostic significance in patients treated with immunotherapy for NSCLC. 14 Considering these aspects, clinicians should be more careful about ICI use in patients with NSCLC who have a PS ≥ 2.
[Table 2. Univariate associations among baseline clinical, laboratory, and treatment characteristics and grade ≥1 CV-irAEs in patients with NSCLC undergoing ICI treatment, based on Fine and Gray competing risk analysis considering non-cardiovascular death as a competing event. Columns: variables; with CV-irAEs (grade ≥1), n = 35; without CV-irAEs, n = 94; HR (95% CI); P-value.]
Third, the present study showed that patients with preceding grade 1 CV-irAEs had an approximately six times higher risk of subsequent onset of grade ≥2 CV-irAEs compared to patients without preceding grade 1 CV-irAEs. Thus, asymptomatic CV-irAEs (grade 1), which were detected by serial cardiac monitoring, may indicate the future onset of symptomatic CV-irAEs (grade ≥2). The advantage of serial cardiac monitoring using a combination of BNP, cardiac troponin T, and ECG is that the findings are relatively easy for physicians not specializing in cardiology to interpret, considering that differences in the aforementioned factors from baseline are easily recognized after the initiation of ICI treatment. In contrast, serial quantitative troponin I levels were previously investigated for early detection of nivolumab cardiotoxicity in advanced NSCLC. 15 However, the authors might have underdiagnosed CV-irAEs. The main reason is that they focused mainly on ICI-related myocarditis. Second, complementary measurements of BNP level and ECG were not performed during ICI treatment. Finally, cardiologists were not involved in the diagnosis of ICI-related myocarditis. Although quantitative troponin I measurement would provide more information on the increased risk for major adverse cardiovascular events, 16 our current study emphasizes the simplicity of interpreting test results (positive or negative results alone for qualitative cardiac troponin T). Thus, our serial cardiac monitoring strategy, including qualitative cardiac troponin T, can become a simple and useful tool for predicting potential grade ≥2 CV-irAEs, as long as cardiovascular evaluation before ICI treatment is performed to correctly interpret abnormalities of the variables documented in the serial cardiac monitoring. However, it should be noted that isolated positive biomarker test results without any cardiovascular symptoms should not prompt the discontinuation of ICI treatment, considering that ICIs are key drugs in the treatment of patients with advanced NSCLC, with long-term survival benefits. Even when the patients in our study developed grade ≥2 CV-irAEs, ICI treatment was usually continued because most of the CV-irAEs could be properly managed by dedicated cardiologists without treatment interruption. Only one patient, who developed acute heart failure due to biopsy-proven acute myocarditis, had to discontinue ICI treatment. Therefore, early detection of ICI-related acute myocarditis may be critical in determining whether ICI should be continued. Considering that the sensitivity of cardiac magnetic resonance imaging is not high enough to exclude ICI-related myocarditis, 17 its negative findings should not eliminate endomyocardial biopsy whenever clinical manifestations involve severe heart failure or life-threatening arrhythmias and are suggestive of ICI-related myocarditis.
Future Direction
A prospective study of serial cardiac monitoring during dual ICI therapy will be needed because patients who received dual ICI therapy appeared to have more frequent and severe myocarditis than patients who received ICI monotherapy alone. 4 Additionally, our data revealed that CV-irAE may be a marker of tumor response. The same is true with skin reactions and pneumonitis. 18,19 However, the reasons why the development of any irAE is associated with tumor response remain unknown. Further research is warranted to uncover the reason for this association.
Study Limitations
Several limitations of the current study are worth noting. First, considering that our study population was limited to patients with NSCLC, these results cannot be generalized to patients with other types of cancer. Second, quantitative troponin T or I data were not obtained, which could have caused overreporting of CV-irAEs because, with qualitative troponin T data alone, the severity and cause of troponin abnormalities would be more difficult to discern. Third, incidental fluctuations in laboratory test results (ie, the risk of false positives) could not have been completely excluded from grade 1 CV-irAEs, because grade 1 events were defined as laboratory abnormalities without any cardiovascular symptoms. Therefore, overreporting of grade 1 CV-irAEs remains a possibility. Fourth, laboratory data and ECG data were evaluated at every other ICI treatment cycle (every 4-6 weeks), and thus certain events might have been missed, particularly those that occurred within the first six weeks of ICI treatment. This is especially true considering the findings of a previous study showing that the majority of ICI-associated myocarditis cases occurred within the first six weeks of ICI treatment. 20 Fifth, clinical validation of early detection of CV-irAEs in light of improved overall survival in patients with NSCLC was not clarified. Thus, it remains uncertain whether routine serial testing of BNP, cardiac troponin T, and ECG in unselected patients who underwent ICI treatment should be recommended or not. Sixth, advanced diagnostic examinations, including coronary angiography and endomyocardial biopsy, were performed only in selected patients with grade ≥2 CV-irAEs for further diagnostic workup. As a result, ICI-related myocarditis may have been underdiagnosed. Finally, having no control group was a limitation of our study.
Conclusions
The current study clearly showed that grade ≥1 CV-irAEs were more common than appreciated. Prior acute coronary syndrome, prior heart failure hospitalization, and achievement of disease control were significantly associated with grade ≥1 CV-irAEs. Furthermore, our results revealed that preceding grade 1 CV-irAEs may confer a significantly higher risk of subsequent onset of grade ≥2 CV-irAEs and that serial cardiac monitoring was feasible for the prediction of future grade ≥2 CV-irAEs. We believe that our study contributes toward increasing awareness of this new clinical entity among all specialists involved in ICI treatment.
|
2022-03-30T06:17:48.955Z
|
2022-03-28T00:00:00.000
|
{
"year": 2022,
"sha1": "c727dd3a369f6dd951ee2d9716ecbabc4be3971b",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "1efa95adcd76e48159d98f477f33f95954b310a0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
119279745
|
pes2o/s2orc
|
v3-fos-license
|
Getting Science Beyond the Research Community: Examples of Education and Outreach from the IceCube Project
The IceCube collaboration has built an in-ice neutrino telescope and a surface detector array, IceTop, at the South Pole. Over 5000 digital optical modules have been deployed in a cubic kilometer of ice between 1450 and 2450 m below the surface. The novel observatory provides a new window to explore the universe. The combination of cutting-edge discovery science and the exotic Antarctic environment is an ideal vehicle to excite and engage a wide audience. Examples of how the international IceCube Collaboration has brought the Universe to a broader audience via the South Pole are described.
Introduction
The international IceCube collaboration has recently completed a multipurpose neutrino and cosmic ray observatory located at the Amundsen Scott Base at the South Pole. 2 After six seasons of construction, the biggest science project ever attempted in Antarctica and one of the largest detectors in the world is providing a new window to view the Universe. The allure of cutting-edge discovery science combined with the exotic Antarctic environment and international partners provides multiple opportunities to excite and engage a wide audience. This proceeding describes examples of the IceCube collaboration's education and outreach efforts targeted at students, teachers and the general public.
IceCube
The motivation for the IceCube project was to realize the dream of building a cubic-kilometer scale neutrino telescope to explore the Universe with neutrino messengers.
The Antarctic Muon and Neutrino Detector Array (AMANDA) established the feasibility of using the nearly 3000 m thick ice at the South Pole as a medium for detecting neutrinos. A hot water drill was used to melt holes 60 cm in diameter and up to 2450 m deep. Photomultipliers embedded in the ice detected the light resulting from charged particles produced from neutrino interactions in and near the instrumented volume. The novelty of the idea was recognized in 1999 by Scientific American when AMANDA was named the weirdest of the seven wonders of Modern Astronomy [1].
Construction on the IceCube detector began during the 2003-2004 austral summer season. The short construction season annually brings a pulse of activity that offers unique opportunities for real and virtual participation. The Amundsen Scott Station opens around the end of October each year, and the last return flight of the season leaves in mid-February. During construction, IceCube personnel were a significant fraction of the South Pole population of approximately 150 to 200. Typically there were about thirty drillers, with a significant fraction of the drill team returning for most of the six seasons. In addition, there were about 20 other IceCube personnel at the South Pole including scientists, engineers, IT personnel, graduate students and postdoctoral researchers, and occasionally, undergraduate students and high school teachers. Two or three IceCube winterovers remain at the South Pole Station to maintain the IceCube observatory during the cold, dark winter months.
Collaboration
The IceCube Collaboration consists of over 250 scientists from 39 institutions, about half of whom are in the United States with the balance from Australia, Barbados, Belgium, Canada, Germany, Great Britain, Japan, New Zealand, Sweden, and Switzerland. A map of collaborating institutions and a list of the funding agencies are shown in Figure 1. The large international representation, together with the remote location of the observatory, provides opportunities for real and virtual participation on a variety of levels at locations around the world. Examples of these experiences will be provided below.
Observatory
The IceCube observatory consists of an in-ice array of Digital Optical Modules (DOMs) that are deployed in holes drilled with hot water. A DOM consists of a 10" photomultiplier tube housed inside a glass pressure vessel with a data acquisition system and light emitting diodes (LEDs) for calibration purposes (see Figure 2b). A cable with sixty DOMs is lowered into the water-filled hole, which freezes, locking the DOMs in place. The DOMs plus the cable in a given hole are referred to as a string. There are 78 strings on a triangular grid with a 125 m separation, with DOMs deployed between 1450 and 2450 m below the surface. A sub-array of 8 strings at the center of the detector, known as DeepCore, has more closely spaced strings and DOMs deployed in the deepest, clearest ice. DeepCore lowers the energy threshold for detecting neutrinos and, together with the outer strings, makes it possible to extend the solid angle of detection for low energy neutrinos to 4π.
There is also a surface array known as IceTop that contains 81 stations on the same 125 m triangular grid as the in-ice array. Each station consists of a pair of ice Cherenkov tanks approximately 1.6 m in diameter and 0.9 m deep, each of which is monitored by two DOMs, one configured for high gain and the other for low gain to extend the dynamic range of the tanks. The distance between tanks in a station is 10 meters. A diagram of the IceCube Observatory is shown in Figure 2a.
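To visualize the nominal geometry described above, an idealized triangular lattice with 125 m spacing can be generated with a short script. This is purely illustrative: the actual IceCube and IceTop footprints form an irregular hexagon, deployed positions deviate from the ideal grid, and the row and column counts below are arbitrary choices for this sketch.

# Illustrative sketch only: points of an idealized triangular (hexagonal) lattice
# with 125 m spacing, like the nominal string/station grid described above.
import math

def triangular_grid(rows: int, cols: int, spacing: float = 125.0):
    """Return (x, y) positions on a triangular lattice; odd rows are offset by half a spacing."""
    points = []
    row_height = spacing * math.sqrt(3) / 2  # vertical distance between adjacent rows
    for r in range(rows):
        for c in range(cols):
            x = c * spacing + (spacing / 2 if r % 2 else 0.0)
            y = r * row_height
            points.append((x, y))
    return points

grid = triangular_grid(rows=9, cols=9)
print(len(grid), "lattice points; first three:", grid[:3])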
Science
The primary focus of the IceCube project, as originally proposed, was to study the universe using high energy neutrinos. The discovery potential of new facilities is one of the captivating aspects of science for the general public. Another facet is the possibility of shedding light on long-standing mysteries such as the origin and acceleration mechanism of the highest energy cosmic rays [2], the source of gamma ray bursts [3], and the composition of dark matter [4]. IceCube is poised to make contributions to solving these puzzles and others as the full capabilities of the observatory are developed. For example, a dust logger devised to characterize the instrumented ice has enabled IceCube researchers to study glacial dust layers and explore their connections to past climatological events [5]. By comparing dust logger results from multiple drill holes, it may be possible to determine wind speeds tens of thousands of years ago [6]. IceTop, originally designed for calibration and veto purposes for the neutrino telescope and for cosmic ray studies above 10^15 eV, now also operates in a mode capable of detecting low energy particles produced during solar storms [7]. Examples of how IceCube brings this science to the broader community are described in the next section.
Education and Outreach
The goal of the IceCube education and outreach efforts is to get science beyond the research community. The fascination with the extreme Antarctic location and the novelty of the new observational approach provide hooks to excite and engage the public, students, and teachers. The result is opportunities to demonstrate that science is an on-going pursuit to understand the Universe rather than the organized collection of established results, as may typically be presented. Significant education and outreach efforts are on-going throughout the IceCube collaboration. Specific examples of efforts in a few general categories based on the depth, duration, and intended audience will be discussed next.
One-time events
IceCube collaboration members have given hundreds of presentations to public and school groups. In addition to the compelling science, the fascination with living and working conditions in the extreme Antarctic environment is a big draw. A number of successful displays have also been developed to illustrate the operation of the observatory. This is possible because the observatory is relatively simple, especially when compared to traditional particle physics detectors. There is really only one active component, the DOM. The in-ice neutrino telescope consists of a grid of DOMs embedded in the ice. To convey the essence of the IceCube neutrino telescope, DOMs can be suspended in space to illustrate the structure (Figure 3a).
In addition to traditional general interest talks, IceCube collaborators have used targeted approaches to reach broader audiences. A micro-scale hot water drilling activity (Figure 3b) has proven popular, especially with elementary school age children. The inaugural Nuclear Science Day at Lawrence Berkeley National Laboratory drew over 150 boy and girl scouts. Among other activities, the scouts heard a talk about cosmic rays from an IceCube scientist and received career information. IceCube collaborators have also shown a willingness to engage older learners by presenting informal public talks at local taverns. The University of Wisconsin-River Falls (UWRF) visiting professor program has supported an IceCube scientist on two occasions. During their two days on the UWRF campus, the visitors provided guest lectures on their work, delivered an evening public talk, and were generally available to meet with interested students one-on-one. One visit included a highly popular exhibit of the visiting professor's artwork, providing another way to attract an audience that would not typically attend science events.
Field Experiences for students and teachers
The extreme Antarctic environment at the South Pole is one of the main draws for interest in the IceCube project. The IceCube collaboration continues to build on the tradition established by the AMANDA collaboration to offer opportunities for undergraduate students and teachers to participate in IceCube research. Many IceCube institutions have ongoing undergraduate student projects that have made significant contributions. In this section, a few examples of field experiences for undergraduates and teachers will be described.
The AMANDA and IceCube projects have had five teachers work at the South Pole in the last decade. One teacher, Matts Petterson, was from Sweden; the others, Jason Petula, Eric Muhs, Casey O'Hara, and Katey Shirey, were from the USA. Because of population constraints, these teachers needed to fully contribute while at the South Pole. In addition, the goal was to communicate their experiences as broadly as possible. Training for the on-ice research responsibilities was handled by the AMANDA and IceCube collaborations. Partnerships with NSF-supported programs that pair polar researchers with teachers---Teachers Experiencing Antarctica and the Arctic (TEA) for the first three teachers, and PolarTREC for the last two---have significantly enhanced the impact of their experiences. Casey and Katey's experience also profited from their association with and support from the Knowles Science Teaching Foundation.
Students have also benefited from international opportunities within the collaboration. Five UWRF undergraduate students and one student from the two-year college UW-Marathon County have spent the summer doing research at Stockholm University in Sweden over the course of three summers. UWRF, the University of Delaware, and the University of Uppsala also worked on a project to calibrate the response of an IceTop tank to low energy particles. An IceTop tank was constructed in a freezer container in Sweden and placed on the icebreaker Oden as it cruised from Sweden to Antarctica and back. The known latitude dependence of the geomagnetic cutoff, the minimum primary cosmic ray energy needed to reach sea level, was used to perform an absolute energy calibration. Three undergraduates traveled on the Oden. Drew Anderson, a UWRF physics major, traveled from Sweden to Uruguay. In Uruguay, Samantha Jakel, who started at the two-year college UW-Rock County, boarded the icebreaker to complete the trip to Antarctica. She is now studying electrical engineering at UW-Madison. Kyle Jero, a UWRF physics major, took the last leg from Antarctica to Chile. Their results and experience were presented at a variety of venues, including the Committee on Undergraduate's Poster Session at the Capitol Hill symposium in Washington, DC and the 24th National Council of Undergraduate Research Meeting in Montana.
Classroom Enrichment
The University of Maryland, Pennsylvania State University, and UWRF have all run multiple courses for high school teachers and/or students on IceCube science. UWRF has had IceCube programs for high school teachers and/or students for the last dozen summers. For example, the Oden cruise was the theme for the eight day residential summer 2010 science and math program for the UWRF Upward Bound (UB) educational program. Approximately two dozen low-income 9-12 grade students from underrepresented groups were engaged in an innovative, inquiry-based learning experience.
The curriculum was developed and taught by teachers with polar research experience----Eric Muhs, Katey Shirey, and Steve Stevenoski. The UB students started by learning how to represent data with contour maps as a lead into understanding the geomagnetic cutoff data. After spending time creating and mapping their own miniature landforms, they moved on to using GPS units to map out the UWRF outdoor amphitheater. To further connect to the work on the Oden, the UB students' week culminated with a cruise in kayaks down the Kinnickinnic River in River Falls. They collected a multidimensional data set including water temperature, depth, flow rate, and atmospheric pressure at multiple locations that were time and position stamped with GPS units.
Social Media and Site Broadcasts
To expand and diversify the IceCube Education and Outreach efforts, the IceCube project has a presence on Facebook, Twitter and in multiple blogs. According to a Nielsen Company study from August 2010, U.S. internet users spent over twenty percent of their time on social networking sites [8]. These tools are increasingly becoming the default tools for information exchange and it is important to utilize applications with which the target audience is comfortable.
Social media also has additional capabilities that make it more powerful than traditional static web pages offering one-way communication. Social media allows for discussion and feedback in real time and provides a convenient venue to archive postings. During construction of the neutrino detector, updates and photos of the progress were posted, bringing this phase to life for viewers. The social media tools allow connections to be made between interested people and IceCube personnel around the world. Even the winterovers at the South Pole engage in the conversation from time to time, sending photos and updates from the South Pole. The Facebook page lists articles of interest, and is a platform to post photos, advertise events, and communicate with our audience.
In each construction season, there have been multiple site broadcasts over the internet from the South Pole. These interactive sessions give the public the chance to hear first-hand from IceCube personnel---scientists, drillers, students, teachers, and support personnel---while they are on the "ice". Due to limited bandwidth and satellite access that restricts the hours available each day, coordinating broadcasts for convenient times around the world is sometimes a challenge. However, the broadcasts have proven popular and remain an essential part of the IceCube activities each Antarctic season.
Summary
The construction and operation of the IceCube observatory at the South Pole provides unique opportunities for education and outreach. The IceCube collaboration seeks to engage groups beyond the traditional research community and to allow participation on a variety of levels, both virtually and in person. These efforts help explain the process of science, which ultimately helps ensure continued public support while also motivating the next generation of scientists.
Clinical and Hematological Evaluation of Patients with Sickle Cell Anemia before and after Four Years of Using Hydroxyurea
Method: A retrospective cohort study with a quantitative, descriptive and analytical approach, developed in two public teaching hospitals located in the Central-West region of Brazil, from November 2010 to October 2011. Data collection was performed through the medical records of 32 patients with SCA to assess clinical and hematological parameters before and after HU treatment. The study was approved by the UFMS Ethics Committee under protocol number 1890/2010.

Conclusion: Treatment with hydroxyurea showed a significant increase in fetal hemoglobin levels, increased hemoglobin, hematocrit and mean corpuscular hemoglobin concentration, with reduced episodes of pain, infection and acute chest syndrome, in such a way as to reaffirm its efficiency in treating these patients.
Introduction
Sickle cell anemia (SCA) results from an autosomal recessive genetic disorder in which a single amino acid is substituted at position 6 of the β-globin chain, with glutamic acid (GAG) exchanged for valine (GTG), resulting in a structural change in the peptide chain [1]. It is characterized by the presence of hemoglobin S in homozygosity (HbSS), in double heterozygosity with other abnormal hemoglobins (HbSC, HbSD, among others), or in interaction with thalassemias (alpha or beta) [2].
Hemoglobin S (HbS) undergoes physicochemical changes upon deoxygenation, polymerizing and assuming a sickle shape. Decreased deformability makes blood flow slow and, depending on HbS concentration and the degree of deoxygenation, the aggregation of these molecules can lead to vaso-occlusion. This phenomenon is responsible for acute and chronic manifestations that can affect organs and systems from the first year of life and last throughout the life of affected individuals [3].
Episodes resulting from vaso-occlusion, such as painful crises, acute thoracic syndrome (ATS) and stroke/cerebrovascular accident (CVA), among others, cause suffering to the patient and to caregivers/family members [4]. Seeking to mitigate this condition, the therapeutic approach for SCA favors attempts to replace HbS production with fetal hemoglobin (HbF), with the aim of altering marrow proliferation in order to favor the production kinetics of F cells. Fetal hemoglobin (HbF) is predominant in fetal cells and is produced from proerythroblast clones derived from immature erythrocyte precursors, which uniquely activate genes and increase HbF levels [5-7].
The effect of hydroxyurea on the painful episodes of SCA was verified by the Multicenter Study of Hydroxyurea (MSH) in Sickle Cell Anemia, a randomized double-blinded clinical trial conducted with 299 participants. The MSH reported that the therapy reduces painful crises and the number of transfusions required [8]. Moreover, hydroxyurea (HU) therapy demonstrated effectiveness in improving hematological parameters in patients [9].
A long-term MSH follow-up study of 17.5 years verified the survival of SCA patients treated with HU and found that long-term use of the drug was safe and could reduce mortality [10]. Another retrospective study in adult patients evaluated the effect of the HU dosage on the HbF response, organ damage and survival of patients using the drug, and found that patients should be treated with the maximum tolerated dose before organ damage occurs [11]. Recommendations for health professionals, including in relation to HU treatment, were presented by a panel of experts [12].
Considering the published evidence and reports of SCA patients undergoing HU treatment, we saw the need to investigate whether there is a positive relationship between exposure to hydroxyurea and increased fetal hemoglobin, as well as whether there is a positive relationship between the use of hydroxyurea and improvement of hematological parameters. Thus, the following was adopted as the guiding question for this study: what are the effects of using hydroxyurea in relation to the clinical and hematological parameters of patients with sickle cell anemia?
In this perspective, this study aimed to evaluate the clinical and hematological parameters of patients with sickle cell anemia before and after four years of using hydroxyurea.
Method
This is a descriptive and analytical retrospective cohort study with a quantitative approach, developed in two public teaching hospitals located in the center-west region of Brazil from November 2010 to October 2011.Data collection was conducted from the medical records of 32 patients with SCA in order to evaluate the clinical and hematological parameters before and after HU treatment.
A time interval of thirteen years was established for including participants to be evaluated, corresponding to the years between 1998 and 2010. This period was established since it corresponded to the period in which the studied services registered patients treated with HU and provided the necessary information for evaluating the use of this medicine.
Patients with a confirmed electrophoretic profile for HbSS, with a medical indication for HU therapy, a medical diagnosis confirmed by ICD-10 codes D57.0 and D57.1 (which correspond to cases of sickle cell anemia with and without crisis, respectively), and who had been undergoing HU treatment for a minimum period of four years were included in the study. Medical records of patients diagnosed with other hemoglobinopathies and of patients undergoing HU for a period of less than four years were excluded.
A data collection instrument was developed specifically to systematize the extraction of the information necessary for this study. The variables adopted (Table 1) were adapted from Ministry of Health of Brazil Ordinance MS/SAS number 55/2010 [13], which establishes the Clinical Protocol and the therapeutic guidelines for sickle cell disease in the country.
In order to evaluate the effect of HU treatment in patients with sickle cell disease, the studied variables were organized into two moments, before and after therapy. The "before" refers to data recorded in the period one year prior to using the drug, and the "after" corresponds to the fourth year of use. The four-year period was stipulated in this study as a safety margin, considering that MS/SAS Ordinance number 55/2010 requires a minimum interval of two years for evaluating HU treatment [13].

Table 1. Data collected from the charts of patients with sickle cell anemia to perform the study. Campo Grande/MS, Brazil, 2016. The variables were grouped as sociodemographic (e.g., date of birth), clinical, and hematological (e.g., platelets - PLT (µL) and neutrophils - ANC (%)).

*Each patient's weight and the prescribed HU dose in the medical record were used to obtain the individual mg/kg ratio of the HU dosage. The rule of three was used for the calculation, obtaining the initial dose/kg/day per patient and the final dose in the fourth year of using HU. The means of the initial and fourth-year dosages were calculated from the individual dosages.

Data were collected in the first semester of 2012. The events were recorded in an Excel® 2010 spreadsheet according to the diagnosis established in the medical records. EpiInfo version 3.4.1 and BioStat 4.0 [14] were used for data analysis, with descriptive analysis and tabular representation of the results.
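The per-patient dose normalization described in the Table 1 footnote amounts to dividing the prescribed daily dose by body weight and then averaging over patients; the sketch below illustrates that arithmetic. The weights and doses shown are hypothetical placeholders, not study data.

```python
# Illustrative sketch of the per-patient HU dose calculation: prescribed daily
# dose (mg) divided by body weight (kg) gives mg/kg/day, and the cohort means
# are taken over the individual values. Numbers are made-up examples.
records = [
    {"weight_kg": 52.0, "initial_dose_mg": 750,  "year4_dose_mg": 1000},
    {"weight_kg": 68.0, "initial_dose_mg": 1000, "year4_dose_mg": 1500},
    {"weight_kg": 45.0, "initial_dose_mg": 500,  "year4_dose_mg": 1000},
]

initial_mg_per_kg = [r["initial_dose_mg"] / r["weight_kg"] for r in records]
year4_mg_per_kg = [r["year4_dose_mg"] / r["weight_kg"] for r in records]

mean_initial = sum(initial_mg_per_kg) / len(initial_mg_per_kg)
mean_year4 = sum(year4_mg_per_kg) / len(year4_mg_per_kg)
print(f"Mean initial dose: {mean_initial:.2f} mg/kg/day")
print(f"Mean dose in 4th year: {mean_year4:.2f} mg/kg/day")
```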
Student's t-test for paired samples and the Wilcoxon test were used to compare the means of the quantitative variables at the two moments evaluated. Both tests were applied after checking the distribution normality with the Kolmogorov-Smirnov test. A significance level of 5% was adopted.
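A minimal sketch of this comparison strategy, assuming SciPy is available: the paired differences are first checked for normality with a Kolmogorov-Smirnov test, then compared with either the paired t-test or the Wilcoxon signed-rank test at the 5% level. The arrays are placeholders, not the study data.

```python
import numpy as np
from scipy import stats

alpha = 0.05

# Placeholder paired measurements (e.g., HbF %) before and after four years of HU.
before = np.array([6.1, 8.4, 7.0, 9.2, 5.5, 7.8])
after = np.array([13.0, 15.2, 12.4, 16.8, 11.1, 14.7])

# Normality check on the paired differences (Kolmogorov-Smirnov against a
# normal distribution with the sample mean and standard deviation).
diff = after - before
ks_stat, ks_p = stats.kstest(diff, "norm", args=(diff.mean(), diff.std(ddof=1)))

if ks_p > alpha:
    # Differences look approximately normal: paired Student's t-test.
    stat, p_value = stats.ttest_rel(before, after)
    test_used = "paired t-test"
else:
    # Otherwise fall back to the non-parametric Wilcoxon signed-rank test.
    stat, p_value = stats.wilcoxon(before, after)
    test_used = "Wilcoxon signed-rank"

print(f"{test_used}: statistic={stat:.3f}, p={p_value:.4f}, "
      f"significant={p_value < alpha}")
```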
This study was approved by the Research Ethics Committee of the Federal University of Mato Grosso do Sul under Protocol number 1890/2010, in accordance with the current national legislation for human research.
Results
The mean age of the 32 patients at the start of their HU treatment was 19.72±7.58 years. The mean initial dose of HU was 15.59±4.27 mg/kg/day, and 22.48±5.35 mg/kg/day in the fourth year.
In comparing the number of episodes resulting from vaso-occlusion before and after the use of HU, the following results were observed: crises (102 to 72), infection (48 to 15), pneumonia (33 to 16) and ATS (acute thoracic syndrome) (12 to 2). The comparison between the number of blood transfusions at the two moments showed a reduction from 10.0±8.0 to 7.0±6.0 procedures (Table 3).
Discussion
The patients' mean age at the time of HU treatment initiation was 19.72±7.58 years, with varying doses: the mean in the first year was 15.59±4.27 mg/kg/day and 22.48±5.35 mg/kg/day in the fourth year. These data are similar to those found in a prospective cohort study on the effects of HU conducted with SCA patients over 18 years of age, administered a mean dose of 21 mg/kg/day, ranging from 10 to 35 mg/kg [15]. The gradual scaling regimen recommended by Ordinance MS/SAS number 55/2010 of the Ministry of Health of Brazil is initially a single dose of 15 mg/kg/day, with a gradual increase not exceeding the maximum tolerated dose of 35 mg/kg/day [12]. The maximum tolerated dose approved by the Food and Drug Administration (FDA) for moderate and severe adult patients is up to 35 mg/kg/day [16]. A daily oral dose of 25 to 30 mg/kg was tested and well tolerated by the majority of children aged 5 to 15 years participating in a study conducted in the United States [17].
The HbF values presented were significantly higher in the fourth year of HU treatment (14.49%) compared to the mean of the year before administration of the drug (7.59%). An observational study investigating 32 patients regarding HU dose scaling, of whom 26 were treated with the maximum tolerated dose (ranging from 10 to 35 mg/kg) for two to 38 weeks, showed an increase in HbF (4.0±2% to 15±6%) among the analyzed laboratory parameters [15].
In response to the drug's effects, another study revealed that two patients treated with HU had an increase in young fetal cells between 48 and 72 hours, which resulted in a high level of HbF [18]. A multicenter American study evaluated the safety of HU therapy in children aged 5 to 15 years; as a result, 69 children achieved the highest tolerated dose of HU (25.6±6.2 mg/kg/day), resulting in a higher production of HbF (17.8±7.2%) compared to the initial treatment parameter (7.3±4.9%) [17].
In this study, a significant increase in red blood cell count (2.54 × 10^12/L, p<0.001) and hematocrit (25.30%, p<0.001) was found in the fourth year of HU use in comparison to the year before instituting the drug. These results are similar to those described in an English study, which showed that erythrocyte alterations due to the effect of HU cause an elevation of Hb and Hct, in addition to hydrating erythrocytes and improving their survival [9]. In another study, an increase in MCV and HbF was observed, however with reductions in leukocyte, platelet, reticulocyte, neutrophil and total bilirubin counts [17].
MCHC ranged from 32.88 g/dL in the year prior to instituting therapy to 33.21 g/dL in the fourth year of HU treatment. The same occurred with MCV, which initially presented a result of 99.51 fL and reached 101.79 fL in the fourth year; however, neither showed a significant relation when compared to the mean of the year prior to instituting the drug, though both remained within the reference values of the present study. These data do not corroborate published evidence suggesting that MCV is directly related to the increase in HbF [9].
When MCV and HbF were analyzed together with the HU dosage, the results of this study showed that MCV levels in the fourth year did not have the expected increase. Scientific evidence regarding HU therapy emphasizes that there may be failures during treatment, including the lack of monitoring not only of medication use but also of the systematic requesting of laboratory results, the age at the time of indication and/or institution of the drug, the ideal dosage, among others [19]. The studies in which the maximum tolerated dose was reached probably registered the best results [9].
In the fourth year, leukocytes were reduced to 11.12 × 10^9/L (±3.76 × 10^9/L; p=0.080), but remained within the reference values. In a retrospective study conducted in the US with 383 adult patients with sickle cell anemia between 2001 and 2010, the absolute neutrophil count (ANC) was lower at the last consultation for the group receiving HU (4.9%; p<0.001) [11]. The authors concluded that adults should be treated with the maximum tolerated dose of HU, preferably before organ damage occurs.
In the fourth year, neutrophil (7,729.25; p=0.030) and hemoglobin (9.22 g/dL, p<0.001) values showed a significant relationship when compared to the mean of the year prior to instituting the drug. In an observational study also conducted in the United States, neutropenia (73%), reticulocytopenia (22%) and a decrease in Hb concentration (1%) were detected at the moment of HU dose adjustment [15].
To monitor dose adjustment, the neutrophil count, platelet count, reticulocyte count, and hemoglobin level may not be less than 2,000/mm³, 80,000/mm³, 80,000/mm³, and 4.5 g/dL, respectively [5,13]. Bone marrow suppression may occur, but it is reversible and transient with discontinuation of treatment; specifically for the granulocyte series, it represents a common adverse effect in patients treated with HU. The most frequent adverse events include neutropenia and thrombocytopenia, as well as anemia to a lesser extent [5,20].
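A minimal sketch of these monitoring cut-offs as a simple rule check; the threshold values are those quoted above [5,13], and the function is purely illustrative, not clinical guidance.

```python
# Minimal sketch of the monitoring thresholds quoted above for HU dose adjustment;
# values below the cut-offs would call for interrupting or reducing the dose.
def below_safety_thresholds(neutrophils_mm3, platelets_mm3, reticulocytes_mm3, hb_g_dl):
    limits = {
        "neutrophils": (neutrophils_mm3, 2000),
        "platelets": (platelets_mm3, 80000),
        "reticulocytes": (reticulocytes_mm3, 80000),
        "hemoglobin": (hb_g_dl, 4.5),
    }
    return [name for name, (value, cutoff) in limits.items() if value < cutoff]

print(below_safety_thresholds(1800, 150000, 90000, 8.0))   # ['neutrophils']
```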
Platelets remained within the reference values when compared to the mean of the year prior to instituting the drug (327,029 to 290,333/µL, p=0.002). An observational follow-up study originating from the MSH evaluated the effects of HU on mortality and morbidity in 233 adults with sickle cell disease between 1996 and 2001. In this study, a subgroup of 63 patients with lower reticulocyte counts (<250,000/mm³) and Hb levels of <9 g/dL also presented platelets within the reference parameter after two years of HU treatment (401,700±146,900/µL) [21].
Pain episodes due to vaso-occlusion were reduced after HU treatment (102 to 72). It should be mentioned that the key finding of the MSH study was a significant reduction of pain episodes [8]. Improvement in erythrocyte deformability makes erythrocytes more spherical, and morphological changes in conjunction with rheology can represent a potential benefit to patients with sickle cell disease [16]. HU acts on proerythroblast precursors, favoring red cell production with a high level of HbF [4], and in this way partially inhibits polymerization and avoids sickling under deoxy-HbS conditions, with a consequent reduction in painful crises [22].
After four years of HU treatment, lung infections (specifically pneumonia) were reduced from 33 to 16 episodes. Splenic function, which is decreased in SCA, is one of the factors that contribute to greater susceptibility to infections. Over time, infections in SCA patients can affect organs in already damaged systems, such as lungs damaged by recurrent pneumonia, kidneys afflicted by urinary infections and bones with osteomyelitis. Bacterial, viral and parasitic infections cause greater morbidity and are difficult to control [3].
ATS and CVA episodes resulting from vaso-occlusion were reduced from 12 to two cases and from four to one, respectively, after HU use. The MSH study demonstrated the efficacy of this drug in reducing pain episodes with HU therapy over placebo [8].
The mean number of blood transfusions in the fourth year of HU use was reduced from 10 to 7 in comparison to the year before treatment. In the MSH study of 299 severely affected adults with three or more pain episodes per year, the HU group had fewer transfusions (73 to 48, p=0.001) [8].
Scientific evidence has demonstrated the efficacy of HU in reducing the frequency of painful episodes [8] and improving hematological parameters [13], with toxicity but no serious side effects [17]. Therefore, the results of this study demonstrate that HU therapy remains an option to be offered in order to benefit a larger number of patients with sickle cell disease worldwide.
Reticulocytes were not collected in this study because of the lack of systematic registration in the medical records. Thus, hemolysis markers were not analyzed, representing a limiting factor of this study, in addition to the data collection from secondary and retrospective sources.
Conclusion
The effects of hydroxyurea on hematological parameters demonstrate a significant increase in the level of HbF, improving Hb, RBC, Hct and MCHC, and reducing absolute neutrophils. The use of this therapy decreases painful crises, infection (pneumonia) and ATS.
This study considerably contributes to SCA knowledge, as it provides evidence of improved treatment conditions which may lead to effective interventions for patients with sickle cell disease in health services.
It is up to health systems to ensure the use of this drug as a treatment option for patients with sickle cell disease.We suggest future investigations that evaluate the efficacy of HU and the ideal dosage through experimental and epidemiological studies involving large populations.
Table 2. Hematological parameters of patients with sickle cell anemia before and after four years of using hydroxyurea. Campo Grande/MS, Brazil, 2016 (n = 32).

Table 3. Complications due to vaso-occlusion in patients with sickle cell anemia before and after four years of using hydroxyurea. Campo Grande/MS, Brazil, 2016 (n = 32).
Gamma rays from cloud penetration at the base of AGN jets
Dense and cold clouds seem to populate the broad line region surrounding the central black hole in AGNs. These clouds could interact with the AGN jet base and this could have observational consequences. We want to study the gamma-ray emission produced by these jet-cloud interactions, and explore under which conditions this radiation would be detectable. We investigate the hydrodynamical properties of jet-cloud interactions and the resulting shocks, and develop a model to compute the spectral energy distribution of the emission generated by the particles accelerated in these shocks. We discuss our model in the context of radio-loud AGNs, with applications to two representative cases, the low-luminous Centaurus A, and the powerful 3C 273. Some fraction of the jet power can be channelled to gamma-rays, which would be likely dominated by synchrotron self-Compton radiation, and show typical variability timescales similar to the cloud lifetime within the jet, which is longer than several hours. Many clouds can interact with the jet simultaneously leading to fluxes significantly higher than in one interaction, but then variability will be smoothed out. Jet-cloud interactions may produce detectable gamma-rays in non-blazar AGNs, of transient nature in nearby low-luminous sources like Cen A, and steady in the case of powerful objects of FR II type.
Introduction
Active galactic nuclei (AGN) consist of an accreting supermassive black hole (SMBH) in the center of a galaxy and sometimes present powerful radio-emitting jets (Begelman et al. 1984). Radio-loud AGNs have continuum emission along the whole electromagnetic spectrum, from radio to gamma rays (e.g. Boettcher 2007). This radiation basically comes from the accretion disc and bipolar relativistic jets originated close to the central SMBH. Radiation of accretion origin can be produced by the thermal plasma of either an optically-thick geometrically-thin disc under efficient cooling (Shakura & Sunyaev 1973), or an optically-thin geometrically-thick corona (e.g. Liang & Thompson 1979). The emission from the jets is non-thermal and generated by a population of relativistic particles likely accelerated in strong shocks, although other mechanisms are also possible (Rieger et al. 2007). This non-thermal emission is thought to be produced through synchrotron and inverse Compton (IC) processes (e.g. Ghisellini et al. 1985), although hadronic models have also been considered to explain gamma-ray detections (e.g. Mannheim 1993, Mücke & Protheroe 2001, Aharonian 2002). In addition to continuum radiation, AGNs also present optical and ultra-violet lines. Some of these lines are broad, emitted by gas moving with velocities v_g > 1000 km s^-1 and located in a small region close to the SMBH, the so-called broad line region (BLR). The structure of this region is not well known, but some models assume that the material in the BLR could be formed by dense clouds confined by a hot (T ~ 10^8 K) external medium (Krolik et al. 1981) or by magnetic fields (Rees 1987). These clouds would be ionized by photons from the accretion disc, producing the observed emission lines, which are broad because of the cloud motion within the SMBH potential well. An alternative model proposes that the broad lines are produced in the chromosphere of evolved stars (Penston 1988) present in the nuclear region of AGNs.
The presence of material surrounding the base of the jets in AGNs makes jet-medium interactions likely. For instance, the interaction of BLR clouds with a jet in AGNs was already suggested by Blandford & Königl (1979) as a mechanism for knot formation in the radio galaxy M87. Also, the gamma-ray production due to the interaction of a cloud from the BLR with a proton beam or a massive star with a jet were studied in the context of AGNs by Dar & Laor (1997) and Bednarek & Protheroe (1997), respectively.
In this work, we study the interaction of BLR clouds with the innermost jet in an AGN and its observable consequences at high energies. The approach adopted is similar to that followed in Araudo et al. (2009) for high-mass microquasars (for a general comparison between these sources and AGNs see Bosch-Ramon 2008), where the interaction of stellar wind clumps of the companion star with the microquasar jet was studied. Under magnetic fields below equipartition with the jet kinetic energy (i.e. the jet should be matter dominated), cloud penetration will lead to the formation of a relativistic bow shock in the jet and a slow shock inside the cloud. Electrons and protons can be efficiently accelerated in the bow shock and produce non-thermal emission, in situ via the synchrotron and synchrotron self-Compton (SSC) mechanisms, and in the cloud through proton-proton (pp) collisions. For magnetic fields well below equipartition, the SSC component becomes the dominant electron cooling channel, which leads to significant gamma-ray production. Since the bow shock downstream is almost at rest in the laboratory reference frame (RF), this emission will not be significantly boosted. The resulting spectrum and the achieved luminosities in one jet-cloud interaction depend strongly on the magnetic field, the location of the interaction region, the cloud size, and the jet luminosity. However, many clouds could be inside the jet simultaneously, and then the BLR global properties, like size and total number of clouds, would also be relevant. Depending on whether one cloud or many of them penetrate into the jet, the lightcurve will be flare-like or rather steady, respectively.
In order to explore the radiative outcomes of jet-cloud interactions in AGNs, we apply our model to both Fanaroff-Riley galaxies of type I (FR I) and type II (FR II). In particular, we consider Centaurus A (Cen A) and 3C 273, the nearest FR I and a close and very bright flat-spectrum radio quasar (with FR II as the parent population), as illustrative cases. Although in FR I the BLR is not well detected, clouds with characteristics similar to those found in FR II galaxies may surround the SMBH (Wang et al. 1986, Risaliti 2009). Cen A has been detected at high-energy (HE) (Hartman et al. 1999; Abdo et al. 2010) and very high-energy (VHE) gamma rays, whereas 3C 273 has been detected so far only at HE gamma rays (Hartman et al. 1999; Abdo et al. 2010). We have computed the contribution of jet-cloud interactions to the gamma-ray emission in these sources, and estimated the gamma-ray luminosity in a wide range of cases. We find that gamma rays from jet-cloud interactions could be detectable by present and future instrumentation in nearby low-luminous AGNs at HE and VHE, and for powerful and nearby quasars only at HE, since the VHE radiation is absorbed by the dense nuclear photon fields. In the case of sources showing boosted gamma rays (blazars), the isotropic radiation from jet-cloud interactions will be masked by the jet beamed emission, which will not be the case in non-blazar sources.
The paper is organized as follows: in Sect. 2, the dynamics of jet-cloud interactions is described; in Sect. 3, a model for particle acceleration and emission is presented for one interaction, whereas in Sect. 4 the case of many clouds interacting with the jet is considered; in Sects. 5 and 6, the model is applied to FR I and FR II galaxies, focusing on the sources Cen A and 3C 273; finally, in Sect. 7, the results of this work are summarized and discussed. We adopt cgs units throughout the paper.
The jet-cloud interaction
Under certain combinations of the jet ram pressure and the cloud size and density, cloud-jet penetration is expected to occur. The details of the penetration process itself are complex. Here we do not treat them in detail, but just assume that penetration occurs if certain conditions are fulfilled. For low magnetic fields, a cloud inside the jet may represent a hydrodynamic situation in which a supersonic flow interacts with a body of approximately spherical shape at rest. The cloud, as long as it has not been accelerated by the jet ram pressure up to the jet speed (v_j), produces a perturbation in the jet medium in the form of a steady bow shock roughly at rest in the laboratory RF, with a velocity with respect to the jet RF approximately equal to v_j. Since the cloud is not rigid, a wave propagates also through it. Since the cloud temperature is much lower, and the density much higher, than in the jet, this wave will still be supersonic but much slower than the bow shock. The jet pressure exerts a force on the cloud leading to cloud acceleration along the axis, hydrodynamical instabilities and, eventually, cloud fragmentation. In the following, the jet-cloud interaction is described. Further discussion, and a proper account of the literature, can be found in Araudo et al. (2009). Sketches of the jet-cloud interaction and the jet-BLR scenario are shown in Fig. 1.

Fig. 1. Sketch, not to scale, of an AGN at the spatial scales of the BLR region. In the top part of the figure, the interaction between a cloud and the jet is also shown.

Table 1. Values assumed in this work for BLR clouds and jets.

Description             Value
Cloud size              R_c = 10^13 cm
Cloud density           n_c = 10^10 cm^-3
Cloud velocity          v_c = 10^9 cm s^-1
Cloud temperature       T_c = 2 x 10^4 K
Jet Lorentz factor      Γ = 10
Jet half-opening angle  φ ≈ 6°
We adopt here clouds with typical density n_c = 10^10 cm^-3 and size R_c = 10^13 cm (Risaliti 2009). The velocity of the cloud is taken as v_c = 10^9 cm s^-1 (Peterson 2006). The jet Lorentz factor is fixed to Γ = 10, implying v_j ≈ c, with a half-opening angle φ ≈ 6°, i.e. the jet radius/height relation is fixed to R_j = tan(φ) z = 0.1 z. All these parameters are summarized in Table 1 and will not change along the paper; from them, the jet density n_j in the laboratory RF can be estimated as n_j = L_j / [σ_j (Γ - 1) m_p c^3], where σ_j = πR_j^2 and L_j is the kinetic power of the matter-dominated jet.
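As an illustration, the sketch below evaluates this density estimate for the Table 1 parameters and a Cen A-like jet power; the prefactor follows the expression reconstructed above and should be read as an order-of-magnitude estimate.

```python
import math

m_p = 1.67e-24      # proton mass [g]
c = 3.0e10          # speed of light [cm/s]

Gamma = 10.0        # jet Lorentz factor
L_j = 1.0e44        # jet kinetic power [erg/s]
z = 5.0e15          # height along the jet [cm]

R_j = 0.1 * z                      # jet radius for a ~6 deg half-opening angle
sigma_j = math.pi * R_j**2         # jet cross section [cm^2]

# Lab-frame jet density implied by the kinetic power of a matter-dominated jet.
n_j = L_j / (sigma_j * (Gamma - 1.0) * m_p * c**3)
print(f"n_j ~ {n_j:.2e} cm^-3 at z = {z:.1e} cm")
```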
The jet ram pressure should not destroy the cloud before it has fully entered into the jet. This means that the time required by the cloud to penetrate into the jet, t_c ~ 2R_c/v_c, should be shorter than the cloud lifetime inside the jet. To estimate this cloud lifetime, let us first compute the time required by the shock in the cloud to cross it (t_cs). The velocity of this shock, v_cs, can be derived by equating the jet and the cloud shock ram pressures, (Γ - 1) n_j m_p c^2 = n_c m_p v_cs^2, valid as long as v_cs << c. Then v_cs ≈ c/√χ, where χ is the cloud-to-jet density ratio, χ = n_c / [n_j (Γ - 1)]. This yields a cloud shocking time t_cs ~ 2R_c/v_cs ≈ 2√χ R_c/c. Therefore, for a penetration time (t_c) at least as short as ~ t_cs, the cloud will remain an effective obstacle for the jet flow. Setting t_c ~ t_cs, we obtain a minimum value for χ and hence for z. Hydrodynamical instabilities produced by the interaction with the jet material will affect the cloud. First of all, the jet exerts a force on the cloud through the contact discontinuity. The acceleration applied to the cloud can be estimated from the jet ram pressure P_j, the cloud section σ_c ~ π R_c^2 and the cloud mass M_c ~ (4/3) πR_c^3 n_c m_p, as g ~ σ_c P_j / M_c. Given the acceleration exerted by the jet on the cloud, Rayleigh-Taylor (RT) instabilities will develop in the cloud, at the jet contact discontinuity, with timescale t_RT ~ (l/g)^(1/2), where the instability length l is the spatial scale of the perturbation. For perturbations of the size of the cloud, l ~ R_c, which are those associated with significant cloud disruption, one gets t_RT ~ t_cs.
In addition to RT instabilities, Kelvin-Helmholtz (KH) instabilities also grow on the cloud walls in contact with the shocked jet material that surrounds the cloud. Accounting for the high relative velocity, v_rel ≲ v_j, one obtains t_KH ~ (l/g_rel)^(1/2), where g_rel ~ c^2/(χ l). For l ~ R_c, we obtain t_KH ≳ t_cs. In the previous estimates of t_RT and t_KH we have not taken into account the effect of the magnetic field (e.g. Blake 1972), since we assume that it is dynamically negligible. We note that, given g, the time to accelerate the cloud up to the shock velocity v_cs is ~ t_cs. However, the timescale to accelerate the cloud up to v_j is >> t_cs provided that v_j >> v_cs, and before that the cloud will likely fragment. Finally, there are two additional timescales relevant for our study: the bow-shock formation time, t_bs, and the time required by the cloud to cross the jet, t_j. The timescale t_bs can be roughly estimated assuming that the shock downstream has a cylindrical shape with one of the bases being the bow shock, relativistic shock jump conditions, equal particle injection and escape rates, and an escape velocity similar to the sound speed ~ c/√3 (for a relativistic plasma). This yields a shock-cloud separation distance of Z ~ 0.3 R_c, which implies t_bs ~ √3 Z/c ~ 0.5 R_c/c. Since in general t_bs << t_cs, we can assume that the bow shock is in the steady regime. The jet crossing time can be characterized by t_j ~ 2R_j/v_c = 0.2 z/v_c. Note that if the cloud lifetime is < t_j, the number of clouds inside the jet will be smaller than expected just from the BLR properties. In order to summarize the discussion of the dynamics of the jet-clump interaction, we plot in Fig. 2 the timescales t_cs (for different L_j), t_j, t_c, and t_bs as a function of z. As shown in the figure, for some values of z and L_j the cloud could be destroyed by the jet before full penetration, i.e. t_cs < t_c. This is a constraint on the height z of the jet at which the cloud can penetrate into it. Note also that, in general, t_bs is much shorter than any other timescale.
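For concreteness, the sketch below evaluates these timescales for the Table 1 parameters at a Cen A-like interaction height; the order-unity prefactors (e.g. t_c ≈ 2R_c/v_c and t_bs ≈ √3 Z/c) are assumptions of this sketch rather than exact values from the original derivation.

```python
import math

m_p, c = 1.67e-24, 3.0e10            # cgs units

# Fiducial cloud and jet parameters (Table 1) and a Cen A-like case.
R_c, n_c, v_c = 1.0e13, 1.0e10, 1.0e9
Gamma, L_j, z = 10.0, 1.0e44, 5.0e15

sigma_j = math.pi * (0.1 * z)**2
n_j = L_j / (sigma_j * (Gamma - 1.0) * m_p * c**3)   # lab-frame jet density

chi = n_c / ((Gamma - 1.0) * n_j)    # cloud-to-jet density contrast
v_cs = c / math.sqrt(chi)            # shock speed driven into the cloud

t_c = 2.0 * R_c / v_c                # cloud penetration time
t_cs = 2.0 * R_c / v_cs              # cloud shocking time
Z = 0.3 * R_c                        # bow-shock stand-off distance
t_bs = math.sqrt(3.0) * Z / c        # bow-shock formation time (escape at ~c/sqrt(3))
t_j = 2.0 * (0.1 * z) / v_c          # time for the cloud to cross the jet

for name, t in [("t_c", t_c), ("t_cs", t_cs), ("t_bs", t_bs), ("t_j", t_j)]:
    print(f"{name} ~ {t:.2e} s")
```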
The interaction height
The cloud can fully penetrate into the jet if the cloud lifetime after jet impact is longer than the penetration time (the weaker condition that the jet lateral pressure is < n_c m_p v_c^2 is then automatically satisfied). This determines the minimum interaction height, z_int, needed to avoid cloud disruption before full penetration. Also, this interaction cannot occur below the jet formation region, z_0 ~ 100 R_g ≈ 1.5 × 10^15 (M_bh/10^8 M_sun) cm (Junor et al. 1999). For BLR-jet interaction and cloud penetration to occur, the size of the BLR should be R_blr > z_0 and z_int.
The lifetime of the cloud depends on the fragmentation time, which is strongly linked to, but longer than, t_cs.

Fig. 2. Dynamical timescales of the jet-cloud interaction as a function of z, for the parameters in Table 1; the time t_cs is plotted for L_j = 10^44, 10^46 and 10^48 erg s^-1.
The value of z_int can then be estimated by setting t_c ≲ t_cs, since the cloud should enter the jet before being significantly distorted by the impact of the latter. Once shocked, the cloud can suffer lateral expansion and conduction heating, which can speed up fragmentation due to instabilities.
In this work, we choose z_int by fixing t_cs = 2 t_c, which yields z_int ≈ 5 × 10^15 (L_j/10^44 erg s^-1)^(1/2) (n_c/10^10 cm^-3)^(-1/2) (v_c/10^9 cm s^-1)^(-1) cm. We note that the available power in the bow shock is L_bs ~ (σ_c/σ_j) L_j ∝ z^-2. Therefore, the most luminous individual jet-cloud interaction will take place at z ~ z_int. The BLR size can be estimated through an empirical relation obtained from sources with a well-established BLR, i.e. FR II radio galaxies. This relation is in general of the type R_blr ∝ L_blr^α, where L_blr is the luminosity of the BLR and α ~ 0.5-0.7 (e.g. Kaspi et al. 2005, 2007; Peterson et al. 2005; Bentz et al. 2006). In this paper we use relations of the approximate form R_blr ~ 2.5 × 10^16 (L_blr/10^44 erg s^-1)^α cm, with α ≈ 0.55 and 0.7 from Kaspi et al. (2005, 2007), respectively. In Fig. 3 we show the relation of z_int and R_blr with L_j, assuming that L_blr is 10% of the disc luminosity, which is taken here equal to L_j. As seen in the figure, for reasonable parameters, the condition z_int < R_blr is fulfilled for a wide range of L_j. Figure 3 also shows the relation between z_0 and M_bh, which indicates that for M_bh ≳ 10^9 M_sun the jet might not even be (fully) formed at BLR scales for the lowest L_j values.
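The z_int scaling can be checked numerically: the sketch below evaluates the condition t_cs = 2 t_c using the definitions above and reproduces the interaction heights quoted later for Cen A-like and 3C 273-like jet powers. The closed-form prefactor is a reconstruction consistent with those numbers rather than a verbatim formula.

```python
import math

m_p, c = 1.67e-24, 3.0e10
n_c, v_c = 1.0e10, 1.0e9    # cloud density [cm^-3] and velocity [cm/s]

def z_int(L_j):
    """Height where t_cs = 2 t_c, i.e. chi = (2c/v_c)^2, for a jet radius R_j = 0.1 z."""
    # From chi = n_c * sigma_j * m_p * c^3 / L_j with sigma_j = pi (0.1 z)^2.
    return 20.0 * math.sqrt(L_j / (math.pi * n_c * m_p * c * v_c**2))

for L_j in (1.0e44, 4.0e47):
    print(f"L_j = {L_j:.1e} erg/s -> z_int ~ {z_int(L_j):.1e} cm")
# ~5e15 cm for 1e44 erg/s and ~3e17 cm for 4e47 erg/s, as quoted in Sects. 5 and 6.
```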
Non-thermal particles and their emission
In the bow and cloud shocks, particles can be accelerated through diffusive shock acceleration (Bell 1978). However, the bow shock should be more efficient at accelerating particles than the shock in the cloud because v_bs >> v_cs. In addition, the cloud shock luminosity is smaller than the bow-shock luminosity by ~ 1/(2χ^(1/2)). For these reasons, we focus here on the particle acceleration in the bow shock. In this section, we briefly describe the injection and evolution of particles, and their emission, remarking on those aspects that are specific to AGNs. The details of the emitting processes considered here (synchrotron, IC and pp interactions) can be found in Araudo et al. (2009) and references therein. First, one can estimate the non-thermal luminosity, L_nt, injected at z_int in the bow shock in the form of relativistic electrons or protons: L_nt = η_nt L_bs ~ η_nt (σ_c/σ_j) L_j, which at z_int scales linearly with L_j/10^44 erg s^-1.
Then, the accelerator/emitter magnetic field in the bow-shock RF (B) can be determined by relating U_B = η_B U_nt, where U_B = B^2/8π and U_nt = L_nt/(σ_c c) are the magnetic and the non-thermal energy densities, respectively. For leptonic emission, and to avoid suppression of the IC channel, high gamma-ray outputs require η_B well below 1. In this context, B can be parametrized in terms of the cloud properties: at z_int, U_nt ~ (η_nt/4) n_c m_p v_c^2, so B ~ (2π η_B η_nt n_c m_p)^(1/2) v_c, which scales as η_B^(1/2) (v_c/10^9 cm s^-1) (n_c/10^10 cm^-3)^(1/2) and is independent of L_j.
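A short numerical sketch of this estimate, assuming illustrative values η_nt = 0.1 and η_B = 0.01; the L_j-independent expression for U_nt at z_int follows from the definitions above, and the resulting field should be read as an order-of-magnitude value.

```python
import math

m_p = 1.67e-24
n_c, v_c = 1.0e10, 1.0e9

eta_nt = 0.1     # fraction of the bow-shock luminosity in non-thermal particles (assumed)
eta_B = 0.01     # magnetic-to-non-thermal energy density ratio (as in the text)

# At z_int, sigma_j = 4 L_j / (n_c m_p c v_c^2), so U_nt = eta_nt L_j / (sigma_j c)
# reduces to the L_j-independent expression below.
U_nt = eta_nt * n_c * m_p * v_c**2 / 4.0      # erg cm^-3
B = math.sqrt(8.0 * math.pi * eta_B * U_nt)   # Gauss, from B^2/(8 pi) = eta_B * U_nt

print(f"U_nt ~ {U_nt:.2e} erg/cm^3, B ~ {B:.1f} G")
```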
Regarding the acceleration mechanism, since the bow shock is relativistic and the treatment of such shocks is complex (see Achterberg et al. 2001), we adopt a prescription for the acceleration rate similar to that in the relativistic termination shock of the Crab pulsar wind (de Jager et al. 1996).
Particles suffer different losses that balance the energy gain from acceleration. The electron loss mechanisms are escape downstream, relativistic bremsstrahlung, synchrotron emission, and external Compton (EC) and SSC scattering. We note that B, L_nt and the accelerator/emitter size at z_int are constant for different L_j and fixed η_B and η_nt, and only L_blr and L_d are expected to change with L_j. Therefore, as long as the external photon fields are negligible, the maximum electron energy at z_int does not change for different jet powers.
In Fig. 4, the leptonic cooling timescales are plotted together with the escape time and the acceleration time for a bow shock located at z_int. A value of η_B equal to 0.01 has been adopted. The SSC cooling timescale is plotted for the steady state. The escape time downstream of the relativistic bow shock is taken as the time to be advected out of the bow-shock region. Synchrotron and EC/SSC are the dominant processes in the high-energy part of the electron population, relativistic bremsstrahlung is negligible at any energy, and electron escape is relevant in the low-energy part. This yields a break in the electron energy distribution at the energy at which the synchrotron/IC time and the escape time are equal. The Thomson to KN transition is clearly seen in the EC cooling curves, but is much smoother in the SSC case. The maximum electron energies are around several TeV. Given the similar cooling timescales for electrons via relativistic bremsstrahlung and for protons through pp collisions (t_brems/pp ~ 10^15/n s, where n is the target density), protons will not cool efficiently in the bow shock. Photomeson production can also be discarded as a relevant proton cooling mechanism in the bow shock due to the relatively low achievable proton energies and photon densities. The maximum proton energy is constrained by equating the acceleration time and the time needed to diffuse out of the bow shock. Assuming Bohm diffusion, t_diff = 3Z^2/(2 r_g c) (where r_g is the particle gyroradius), the resulting maximum proton energy is in the TeV range. Electrons are injected in the bow-shock region following a power law in energy of index 2.2 (Achterberg et al. 2001) with an exponential cutoff at the maximum electron energy. The injection luminosity is L_nt. To first order, the electron evolution can be computed assuming homogeneous conditions, following therefore a one-zone approximation with all the mentioned cooling and escape processes. The formulae for all the relevant radiative mechanisms, as well as the solved electron evolution differential equation, can be found in Araudo et al. (2009). In some cases, SSC is the dominant cooling channel at high energies. In that case, the calculations have to be done numerically, splitting the evolution into time steps. In each step, the radiation field is updated accounting for the synchrotron emission produced in the previous step, until the steady state is reached. The duration of each step should be shorter in the earlier phases of the evolution to account properly for the rise of the synchrotron energy density in the emitter. Once the steady-state electron distribution in the bow shock is computed, the spectral energy distribution (SED) of the non-thermal radiation can be calculated. The synchrotron self-absorption effect has to be taken into account, and it affects the low-energy band of the synchrotron emission. At gamma-ray energies, photon-photon absorption due to the disc and the BLR radiation is to be considered, the internal absorption due to synchrotron radiation being negligible. Given the typical BLR and disc photon energies, ~ 10 eV and ~ 1 keV, respectively, gamma rays beyond 1 GeV and 100 GeV can be strongly affected by photon-photon absorption. On the other hand, in most cases photons with energies between 100 MeV and 1 GeV will escape the dense disc photon field.
Although proton cooling is negligible in the bow-shock region, it may be significant in the cloud. Protons can penetrate into the cloud if t_esc > t_diff, which yields a minimum energy to reach the cloud of E_p ~ 0.4 E_p^max. These protons will radiate in the form of gamma rays only a fraction ~ 3 × 10^-4 (R_c/10^13 cm)(t_pp/10^5 s)^-1 of their energy, which makes the process rather inefficient. The reason is that these protons cannot be efficiently confined and cross the cloud at a velocity ~ c. For further details of the proton energy distribution in the cloud, see Araudo et al. (2009).
Many clouds inside the jet
Clouds fill the BLR, and many of them can simultaneously be inside the jet at different z, each of them producing non-thermal radiation. Therefore, the total luminosity can be much larger than that produced by just one interaction, which is ~ L_nt. The number of clouds within the jets, at z ≤ R_blr, can be computed from the jet (V_j) and cloud (V_c) volumes as N_c^j ~ 2 f V_j/V_c, where the factor 2 accounts for the two jets and f ~ 10^-6 is the filling factor of clouds in the whole BLR (Dietrich et al. 1999). Actually, N_c^j is correct only if one neglects that the cloud disrupts and fragments, and eventually dilutes inside the jet. For instance, Klein et al. (1994) estimated a shocked cloud lifetime of several t_cs, and Shin et al. (2008) found that even a weak magnetic field in the cloud can significantly increase its lifetime. Finally, even under cloud fragmentation, strong bow shocks can form around the cloud fragments before these have accelerated close to v_j. All this makes the real number of interacting clouds inside the jet hard to estimate, but it should be between (t_cs/t_j) N_c^j and N_c^j.
In Fig. 5, we show estimates for the gamma-ray luminosity when many clouds interact simultanously with the jet. For this, we have followed a simple approach assuming that most of the non-thermal luminosity goes to gamma rays. This will be the case as long as the escape and synchrotron cooling time are longer than the IC cooling time (EC+SSC) for the highest electron energies. Given the little information for the BLR in the case of FR I sources, we do not specifically consider these sources here.
In the two next sections, we present more detailed calculations applying the model presented in Sect. 3 to two characteristic sources, Cen A (FR I, one interaction) and 3C 273 (FR II, many interactions).
Application to FR I galaxies: Cen A
Cen A is the closest AGN, at a distance d ≈ 3.7 Mpc (Israel 1998). It has been classified as an FR I radio galaxy and as a Seyfert 2 optical object. The mass of the black hole is ≈ 6 × 10^7 M_sun (Marconi et al. 2000). The angle between the jets and the line of sight is large, > 50° (Tingay et al. 1998), thus the jet radiation towards the observer should not suffer strong beaming. The jets of Cen A are disrupted at kpc scales, forming two giant radio lobes that extend ~ 10° in the southern sky. At optical wavelengths,
A BLR has not been detected so far in Cen A (Alexander et al. 1999), although this could be a consequence of the optical obscuration produced by the dust lane. One can still assume that clouds surround the SMBH in the nuclear region (Wang et al. 1986, Risaliti et al. 2002 but, as a consequence of the low luminosities of the accretion disc, it is not expected that the photoionization of these clouds will be efficient enough to produce lines. Since no emission from these clouds is assumed, we only consider the EC scattering with photons from the accretion flow.
We adopt here a jet power for Cen A of L j = 10 44 erg s −1 . From this value, and the values given in Sect. 2 for the remaining parameteres of the jet and the cloud, z int results in ≈ 5 × 10 15 cm. At this jet height, the emission produced by the interaction between one cloud and the jet is calculated assuming a η B = 0.01, and the corresponding SED is presented in Fig. 6. As mentined in Sect. 3, the low- Fig. 6. Computed SED for one jet-cloud interaction at z int in Cen A. We show also the SEDs of the detected emission by Fermi and HESS, as well as the sensitivity curves of these instruments. energy band of the synchrotron spectrum is self-absorbed at energies below ∼ 10 −4 eV. At gamma-ray energies, photonphoton absorption is negligible due to the weak ambient photon fields (e.g. Rieger & Aharonian 2009, Araudo et al. 2009, 2010a. At high energies, SSC dominates the radiative output, with the computed luminosity above 100 MeV being ∼ 2 × 10 39 erg s −1 , and above 100 GeV about 10 times less. These luminosities are below the sensitivity of Fermi and HESS and one order of magnitude smaller than the observed ones. Note however that L nt ∝ R 2 c , and for slightly bigger clouds, L nt may grow up to detectable levels. The penetration of a big clump in the base of the jet of Cen A would lead to a flare with a duration of about one day. 6. Application to FR II galaxies: 3C 273 (off-axis) 3C 273 is a powerfull radio-loud AGN at a distance of d = 6.7 × 10 2 Mpc (Courvoisier 1998) with a SMBH mass M BH ∼ 7 × 10 9 M ⊙ (Paltani & Türler 2005). The angle of the jet with the line of sight is small, ≈ 6 • , which implies the blazar nature of 3C 273 (Jolley et al. 2009). The whole spectrum of this source shows variability (e.g. Pian et al. 1999) from years (radio) to few hours (gamma rays). At high energies, 3C 273 was the first blazar AGN detected in the MeV band by the COS-B satellite and, later on, by Fig. 7. Computed SED for one jet-cloud interaction at z int in 3C 273. The emission in the 0.1-1 GeV range from many clouds inside the jet is also shown, together with the sensitivity level of Fermi and the observed SED above 200 MeV.
EGRET (Hartman et al. 1999). Recently, this source was also detected at GeV energies by Fermi and AGILE, but it has not been detected yet in the TeV range. Given the jet luminosity of 3C 273, L j ≈ 4 × 10 47 erg s −1 (Kataoka et al. 2002), z int results in ≈ 3 × 10 17 cm. The BLR luminosity of this source is ≈ 4×10 45 erg s −1 (Cao & Jiang 1999), and its size 7 × 10 17 cm (Ghissellini et al. 2010), which implies that jet-cloud interactions can take place. The disc luminosity is high, ≈ 2 × 10 46 erg s −1 , with typical photon energies ≈ 54 eV (Grandi & Palumbo 2004). The non-thermal SED of the radiation generated by jet-cloud interactions in 3C 273 is shown in Fig. 7. At z int , the most important radiative processes are synchroton and SSC. The bolometric luminosities by these processes in one interaction at z int are 6 × 10 38 erg s −1 and 2 × 10 39 erg s −1 , respectively. Given the presence of the strong radiation fields from the disc and the BLR, the emission above ∼ 10 GeV is absorbed through photon-photon absorption, and the maximum of the emission is around 0.1-1 GeV. Given the estimated number of clouds in the BLR of 3C 273, ∼ 10 8 (Dietrich et al. 1999), and the size is R blr ≈ 7×10 17 cm, the filling factor results in f ∼ 3×10 −7 .
With this value of f, the number of clouds within the two jets results in ∼ 2 × 10^3 and 5 × 10^5 for the minimum and maximum values, respectively (see Sect. 4). Considering the most optimistic case, the SSC luminosity would reach 2 × 10^44 erg s^-1. This value is well below the luminosity observed by Fermi in the GeV range, ∼ 3 × 10^46 erg s^-1 in the steady state and ∼ 1.7 × 10^47 erg s^-1 in flare (Soldi et al. 2009). The detected emission, however, is very likely of beamed origin and should mask any unbeamed radiation. Powerful non-blazar AGNs (FR II galaxies), in contrast, do not present this beamed component, which makes the detection of GeV emission from jet-cloud interactions possible in these sources. In this case, given that many BLR clouds can interact with the jet simultaneously, the radiation should be steady.
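As a cross-check of the figures quoted above for Cen A and 3C 273, the short script below verifies that the two quoted (L_j, z_int) pairs are mutually consistent with a z_int ∝ L_j^(1/2) scaling, and that a filling factor of ~3 × 10^-7 follows from f ≈ N_c (R_c/R_blr)^3 for an assumed cloud radius of R_c ~ 10^13 cm. The scaling exponent and the cloud radius are not restated in this section (they follow from Sects. 2-4), so both should be read as illustrative assumptions rather than as the paper's actual derivation.

```python
import math

# Quoted values from the text (cgs units)
Lj_cenA, zint_cenA = 1e44, 5e15      # Cen A: jet power [erg/s], penetration height [cm]
Lj_3c273, zint_3c273 = 4e47, 3e17    # 3C 273

# Consistency check: if z_int scales as L_j^(1/2), the Cen A value rescaled
# to the 3C 273 jet power should reproduce the quoted ~3 x 10^17 cm.
zint_scaled = zint_cenA * math.sqrt(Lj_3c273 / Lj_cenA)
print(f"z_int rescaled from Cen A: {zint_scaled:.1e} cm (quoted: {zint_3c273:.1e} cm)")

# BLR filling factor for 3C 273: f ~ N_c * (R_c / R_blr)^3
N_c   = 1e8      # number of BLR clouds (Dietrich et al. 1999)
R_blr = 7e17     # BLR size [cm]
R_c   = 1e13     # assumed cloud radius [cm] -- not quoted here, chosen for illustration
f = N_c * (R_c / R_blr) ** 3
print(f"filling factor f ~ {f:.1e} (quoted: ~3e-7)")
```

Both checks reproduce the quoted numbers to within rounding, which is only meant to show that the values in the text hang together, not to replace the full treatment of the earlier sections.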
Summary and discussion
In this work, the interaction of clouds with the base of jets in AGNs is studied. Considering reasonable cloud and jet parameters, we estimate the relevant dynamical timescales of these interactions, concluding that clouds can enter the jet only above a certain height, ∼ z_int. Below z_int, the jet is too compact and its ram/magnetic pressure will destroy the cloud before it fully penetrates into the jet. Once the cloud significantly interacts with the jet, strong shocks are generated and gamma rays can be produced with an efficiency that depends strongly on the bow-shock magnetic field.
Bow-shock B-values well below equipartition with non-thermal particles allow significant gamma-ray emission. For very high B-values (Poynting-flux dominated jets), the treatment performed here does not apply. In that case, z_int could still be defined adopting the jet magnetic pressure instead of the ram pressure. If a cloud entered such a jet, particle acceleration in the bow shock could still occur due to, for instance, magnetic reconnection. The study of such a case would require a completely different approach than the one presented here. In general, for bow-shock magnetic fields above equipartition with non-thermal particles, the IC channel and gamma-ray production will be suppressed in favor of synchrotron emission, unless magnetic dissipation reduced the magnetic field enough for IC to become dominant. We note that modeling of gamma rays from AGN jets tends to require relatively low magnetic fields (e.g. Ghisellini et al. 2010). Therefore, it could be that, even if the jet magnetic field were high at z_int, it could become small enough farther up due to bulk acceleration (e.g. Komissarov et al. 2007) or some other form of magnetic dissipation.
For very nearby sources, like Cen A, the interaction of big clouds with jets may be detectable as a flaring event, although the number of these big clouds, and thereby the duty cycle of the flares, is difficult to estimate. Given the weak external photon fields in these sources, VHE photons can escape without suffering significant absorption. Therefore, jet-cloud interactions in nearby FR I galaxies may be detectable in both the HE and the VHE range as flares with timescales of about one day. Studying such radiation would provide information on the environmental conditions and the base of the jet in these sources.
In FR II sources, many BLR clouds could interact simultaneously with the jet. The number of clouds depends strongly on the cloud lifetime inside the jet, which could be of the order of several t_cs. Nevertheless, it is worth noting that after cloud fragmentation many bow shocks can still form and efficiently accelerate particles if these fragments are slower than the jet. Since FR II sources are expected to present high accretion rates, radiation above 1 GeV produced at the jet base can be strongly attenuated by the dense disc and BLR photon fields, although gamma rays below 1 GeV should not be significantly affected. Since jet-cloud emission should be rather isotropic, it would be masked by the beamed jet emission in blazar sources; however, since powerful/nearby FR II jets do not present significant beaming, these objects could indeed show gamma rays from jet-cloud interactions. In the context of AGN unification (Urry & Padovani 1995), the number of non-blazar (radio-loud) AGNs should be much larger than that of blazars with the same L_j. As shown in Fig. 5, close and powerful sources could be detectable by deep enough observations with Fermi. After a few years of exposure a significant signal from these objects could arise; their detection would provide strong evidence that jets are already strongly matter dominated at the bow-shock regions, as well as physical information on the BLR and the jet base region.
Root growth, function and rhizosphere microbiome analyses show local rather than systemic effects in apple plant response to replant disease soil
Apple replant disease (ARD) is the phenomenon of soil decline occurring after repeated planting of apple trees at the same site. This study aimed to elucidate whether ARD is systemic, i.e. whether the contact of parts of the root system with ARD soil causes the whole plant to show poor shoot and root growth. A split-root experiment was conducted with seedlings of 'M26', offering the root system of the same plant the choice between the substrates ARD soil (+ARD), γ-sterilized ARD soil (-ARD) or soil from a grass parcel (Control) in the following combinations: +ARD/+ARD; -ARD/-ARD; +ARD/-ARD; +ARD/Control. Root growth was analysed throughout the 34-day growing period. Samples from bulk soil, rhizosphere soil and the rhizoplane were collected separately for each compartment, and analysed by fingerprints of 16S rRNA gene or ITS fragments amplified from total community (TC) DNA. The response of the plant to +ARD was not systemic, as root growth in the -ARD compartment was always superior to root growth in +ARD soil. Crosswise 15N-labelling of the N-fertilizer applied to the split-root compartments showed that nitrate-N uptake efficiency was higher for roots in -ARD soil compared to those in +ARD. Bacterial and fungal community composition in the rhizoplane and rhizosphere of the same plants differed significantly between the compartments containing +ARD/-ARD or +ARD/Control. The strongest differences between the bacterial fingerprints were observed in the rhizoplane and rhizosphere. Bacterial genera with increased abundance in response to ARD were mainly Streptomyces, but also Sphingobium, Novosphingobium, Rhizobium, Lysobacter and Variovorax. The strongest differences between the fungal fingerprints were observed in bulk soil. Our data showed that the response of the apple plant to ARD soil is local and not systemic.
ARD caused by biotic factors
Since the 1980s, several studies have been conducted on the biological components of ARD [11]. Molecular fingerprinting methods revealed large differences in the composition of bacterial, fungal, and nematode communities in ARD soils compared to healthy ones [8,12,13].
Nematodes such as Pratylenchus penetrans were initially suspected to cause ARD [14]. However, in later studies, the influence of Pratylenchus and other nematodes was found to be low or negligible [15,16].
Fungi such as Rhizoctonia solani, Phytophthora spp., Pythium spp., Cylindrocarpon spp., and Fusarium spp. are currently referred to as the most important agents associated with ARD [15,17-21]. In experiments with ARD soils from several European countries, however, no negative link was observed between plant growth and the abundance of Fusarium and Rhizoctonia spp., indicating that Cylindrocarpon and Pythium were the main cause of ARD in Central Europe [11].
However, even though these microorganisms are often mentioned as inducing symptoms of ARD, many uncertainties remain; usually little or nothing is known about their actual link with ARD [1]. In addition, studies on apple trees from different regions of the world revealed that the dominant pathogens strongly depend on the site of the apple plantation [15]. One of the reasons for this might be that the interplay between the soil microbiome and plants largely depends on environmental conditions (soil type, cropping history, weather) and the physiological state of the plants [11,22].
In addition, contrasting results shown by different studies might be in part related to the sampling methodology. If root rhizodeposits like phloridzin are the reason for the shift in the microbial community, this shift most likely occurs next to the root. Therefore, it is essential to investigate the apple rhizosphere and rhizoplane in detail to better understand the etiology of ARD.
Is the response to ARD systemic?
Split-root experiments are well-established tools to distinguish between local effects, such as the increase in root branching as a response to local nitrate or ammonium placement [23], and systemic effects, i.e. the induction of stomatal closure in response to the root system sensing drought [24], or the systemic suppression of root colonization by rhizobium or mycorrhiza due to Nod factors [25]. Presently it is not known whether the apple plant response to ARD soil is local or systemic. In particular, there is a need to investigate whether (i) the symptoms spread throughout the root system, (ii) the potential causal agents spread throughout the root system and the associated rhizoplane, rhizosphere and bulk soil, (iii) the exposure of part of the root system to ARD has a negative impact on shoot growth, or whether (iv) compensation by root growth in the ARD-unaffected substrate is possible. A clarification of these points could help to identify the causal agents and mechanisms behind ARD. At the same time the split-root approach is a tool to overcome uncertainty introduced by plant-to-plant variation in shoot and root growth, as the same individual is exposed to +ARD and Control or -ARD soil, respectively. To our knowledge no previous experiments using the split-root approach to investigate ARD have been conducted. However, the split-root approach has recently been applied successfully to investigate the genetic events taking place during the tripartite interaction of Verticillium dahliae-olive-Pseudomonas fluorescens PICF7 in olive tree roots [26].
Objective and experimental approach
The goal of the present study was primarily to unravel whether the response of apple to ARD soil is systemic. Roots of the same plant were grown partly in +ARD soil and partly in γ-irradiated ARD soil (-ARD) or the same soil with no previous apple cropping (Control). In addition, plants were also grown in split-root boxes with both compartments either filled with +ARD soil or -ARD soil. Along with the question of whether the response of apple to ARD is systemic, the objective was to check whether differences in the microbiome composition of the same plant were dependent on the substrates and the different microhabitats (bulk and rhizosphere soil or rhizoplane). We focused on root growth and activity as symptoms of ARD. Root growth was investigated in situ by scanning the split-root boxes. Root activity was investigated by crosswise labelling of +ARD or -ARD soil by fertilization with K 15 NO 3 . The microbial communities associated with the bulk and rhizosphere soil and the rhizoplane were investigated by denaturing gradient gel electrophoresis (DGGE) fingerprints of 16S rRNA gene and ITS fragments amplified from total community (TC) DNA. Differentiating bands of the bacterial DGGE fingerprints were characterized by sequencing.
Soil from this field site is a loamy sand classified as Endostagnic Luvisol according to WRB [27].
The field-moist soils from ARD plots and grass plots were sieved to 2 mm and packed in 15 L autoclavable bags. Half of the soil from ARD plots was subsequently treated by γ-radiation (>10 kGray, Synergyhealth, Radeberg; compare McNamara et al. [28]), further denoted '-ARD' soil, in contrast to the soil from ARD plots not receiving γ-radiation, denoted '+ARD' soil. Soil from grass plots present at the same site and never planted with apple trees was denoted 'Control'.
All three substrates received a basal fertilizer dressing of 50 mg N, 25 mg P, 140 mg K and 10 mg Mg per kg of soil. Nutrients were applied as KNO3, CaHPO4 and MgSO4. The nitrogen fertilizer was applied either as standard KNO3 with natural 15N abundance or as K15NO3 consisting of 98 at% 15N.
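As a rough orientation, the salt quantities implied by this dressing can be back-calculated from molar masses. The sketch below is illustrative only: the hydration states of the salts actually weighed in (e.g. anhydrous vs. hydrated MgSO4) are assumptions, and the calculation mainly shows that the KNO3 needed to supply 50 mg N also supplies almost exactly the stated 140 mg K.

```python
# Back-of-the-envelope check of the basal fertilizer dressing (per kg soil).
# Molar masses in g/mol; anhydrous salts are assumed for illustration.
M_N, M_K, M_P, M_Mg = 14.007, 39.098, 30.974, 24.305
M_KNO3, M_CaHPO4, M_MgSO4 = 101.10, 136.06, 120.37

n_N = 50 / M_N                      # mmol N per kg soil
kno3_mg = n_N * M_KNO3              # ~361 mg KNO3 per kg
k_from_kno3 = n_N * M_K             # ~140 mg K -- matches the stated K dressing
cahpo4_mg = (25 / M_P) * M_CaHPO4   # ~110 mg CaHPO4 per kg
mgso4_mg = (10 / M_Mg) * M_MgSO4    # ~50 mg anhydrous MgSO4 per kg

print(f"KNO3: {kno3_mg:.0f} mg/kg, supplying {k_from_kno3:.0f} mg K/kg")
print(f"CaHPO4: {cahpo4_mg:.0f} mg/kg; MgSO4: {mgso4_mg:.0f} mg/kg")
```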
Split-root boxes and experimental design
Split-root boxes consisting of two adjacent compartments (32.5 x 10 x 2 cm, h x w x d) and a transparent front plate were used. The back of the boxes was lined with an irrigation mat and a 30 μm nylon mesh to enable water supply to the soil compartments from the back along the height of the boxes, without giving roots direct access to the irrigation mat. Each compartment was filled with 600 g of the respective substrate (bulk density 1.1 g cm-3), and irrigated to 20 vol.%. 'M26' plants (see below) were carefully transplanted to the split-root boxes after removing adhering substrate from the pre-culture. The existing root system was divided between the two compartments. The top was covered with a layer of vermiculite to avoid drying of the hypocotyl, and a layer of coarse gravel to reduce evaporation from the surface.
The experiment consisted of four main treatments, varying in the combination of substrates in the two compartments of the split-root boxes: -ARD in both compartments (-ARD/-ARD); +ARD in both compartments (+ARD/+ARD); -ARD in one and +ARD in the other compartment (-ARD/+ARD); one compartment with Control and one with +ARD (Control/+ARD).
For crosswise labelling of the soil compartments with 15N, two sub-treatments of treatment (-ARD/+ARD) were established, (15N-ARD/14N+ARD) and (14N-ARD/15N+ARD), i.e. the 15N-labelled nitrogen fertilizer was applied once to the -ARD soil and once to the +ARD soil. The second compartment, like all other treatments, received the same amount of unlabelled nitrogen fertilizer (see above).
Plant material and growth conditions
Acclimatized in vitro apple rootstock 'M26' plants, 60 days old, were selected for equal size and distributed evenly among the different treatments, resulting in four biological replicates.
Plants were grown in split-root boxes in a climate chamber for 34 days with a 16 h photoperiod (350 μmol m-2 s-1 PAR). Day and night temperatures were 20 and 18°C, respectively, while relative humidity (70%) was kept constant.
Plants were watered to 20 vol.% every two to three days. Towards the harvest water content was reduced to 18 vol.% to enable a standardized collection of rhizosphere soil.
Scanning root growth in situ
The split-root boxes were placed into the climate chamber at an angle of about 30˚. With this angle most roots were growing towards the transparent front plate where root growth could be observed. Therefore, the surface of the split-root boxes was scanned every two to four days during the 34-day growth period using a photo scanner (EPSON Perfection V700 Photo). The corresponding resolution was 600 dpi, the colour depth was 32 bit. The images were analysed for their root length and root diameter classes (see 2.6).
Sampling of bulk and rhizosphere soil, rhizoplane and roots.
After opening the split-root boxes the stem of the apple plantlet was cut with a sterile scalpel. In each compartment the complete root system was separated from the bulk soil. A toothbrush was used to remove soil adhering to the roots after shaking off loosely attached soil. This soil fraction was termed rhizosphere. To obtain the rhizoplane fraction, the complete root system from each compartment was placed into a Falcon tube with 30 mL distilled water and vigorously shaken for 30 seconds. The obtained suspension was centrifuged (10,000 g for 30 min at 4°C). The pellet was re-suspended, transferred to a 2 mL reaction tube and centrifuged again (14,000 g for 20 min at 4°C). The pellets obtained (rhizoplane fraction) were transferred to a lysis tube. 0.5 g of bulk and rhizosphere soil were placed in lysis tubes provided by MP Biomedicals (Santa Ana, CA, USA). The tubes were stored in a freezer at -20°C until DNA extraction. Aliquots of rhizosphere and bulk soil were oven-dried at 65°C for determination of soil dry weights and for C, N and 15N analyses.
Root length and root diameter classes
Root length and diameter classes were measured with WinRHIZO (2009, Regent Instruments Canada Inc.). A colour analysis was done for the images of the split-root boxes. The basis of this analysis is a pixel classification depending on colour classes. These classes differentiate roots from soil. To obtain comparable results, they were defined once and then used for all images.
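WinRHIZO's colour-class pixel classification is proprietary, but the underlying idea, separating root from soil pixels and converting the skeletonized root mask into a length, can be sketched as follows. This is a simplified stand-in, not the software's actual algorithm; the global threshold and the assumption that roots appear brighter than the soil background are placeholders chosen for illustration.

```python
import numpy as np
from skimage.morphology import skeletonize

def root_length_from_image(gray, threshold=0.6, dpi=600):
    """Estimate root length (cm) from a greyscale scan with values in [0, 1].

    A single global threshold stands in for WinRHIZO's colour classes,
    assuming roots are brighter than the dark soil background.
    """
    root_mask = gray > threshold            # crude root/soil pixel classification
    skeleton = skeletonize(root_mask)       # reduce roots to 1-pixel-wide centrelines
    pixel_size_cm = 2.54 / dpi              # 600 dpi scan -> ~0.0042 cm per pixel
    return skeleton.sum() * pixel_size_cm   # length approximated by skeleton pixel count

# Example with a synthetic image containing a single straight "root"
img = np.zeros((200, 200))
img[100, 20:180] = 1.0                      # 160-pixel horizontal root
print(f"estimated length: {root_length_from_image(img):.2f} cm")  # roughly 0.68 cm
```

Counting skeleton pixels slightly underestimates diagonal segments; a dedicated tool such as WinRHIZO accounts for this and additionally bins the length into diameter classes.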
To get information about the "hidden" part of the root system, root length and diameter classes were also measured destructively after final harvest. For this, half of the washed root system was scanned with 600 dpi and 8 bit and also analysed with WinRHIZO. Between harvest and analysis roots were stored in Rotisol (Roth GmbH, Karlsruhe, Germany). The second half of the roots was dried in an oven for C, N analyses and for calculating the total root dry mass. The latter and the WinRHIZO results were used to calculate total root length of one compartment.
C, N and 15 N analyses of plant and soil samples
After 24 h in an oven at 65°C, soil samples (rhizosphere and bulk soil) and plant samples (stem and leaves, and half of the root system) were ground. C, N and 15N analyses were conducted using a coupled system of an elemental analyser and a quadrupole mass spectrometer (Vario EL cube, Elementar Hanau, Germany; Quadrupole MS ESD 100, ICI Bremen, Germany).
Determination of microbial abundance and diversity
TC-DNA extraction and purification. TC-DNA was extracted, from 0.5 g of rhizosphere and bulk soil or the microbial pellet obtained from the roots (rhizoplane) using FastDNA SPIN Kit for soil after a harsh cell lysis step with the FastPrep instrument (MP Biomedicals, Santa Ana, CA, USA) according to the manufacturer's protocol. The TC-DNA was purified by GENECLEAN SPIN Kit (Qbiogene, Inc., Carlsbad, CA, USA) following the manufacturer's instructions. The DNA yield was checked on an agarose gel and stored at -20˚C.
Amplification of bacterial 16S rRNA gene and ITS fragments from TC-DNA. Copy numbers of 16S rRNA gene and ITS were determined by real time quantitative PCR 5'-nuclease assay (qPCR) in a CFX96 Real-Time System (Biorad, Germany) with primer and TaqMan probe as previously described by Suzuki et al. [29] and Gschwendtner et al. [30], respectively. Amplification conditions, reagents concentrations and standards used in both qPCR were as previously described by Vogel et al. [31]. Primer sets and Taqman probes are provided in S1 Table. All PCRs were performed with purified and 1:10 diluted TC-DNA. To study the bacterial community composition the amplification of the bacterial 16S rRNA gene fragments from TC-DNA was performed with the primers F984GC/R1378 according to Gomes et al. [32], except that 0.2 μM primer concentration and 0.025 U μL -1 Go Taq polymerase (Promega GmbH, Mannheim, Germany) were used for amplification (25 μL final volume). Group-specific PCRs were performed as described by Weinert et al. [33] with primers that allow the amplification of 16S rRNA gene fragments specific for: Alphaproteobacteria, Betaproteobacteria, Actinobacteria, Pseudomonas, Bacillus and Streptomyces, except that for Bacillus the reverse universal primer R1494 was used.
To get information about fungal communities, ITS fragments were amplified in a nested PCR with primers ITS1F/ITS4 and ITS1F-GC/ITS2 according to protocols described by Weinert et al. [33]. Primer sets with respective references are listed in S1 Table.
DGGE analyses. Microbial community composition was studied by Denaturing Gradient Gel Electrophoresis (DGGE) of the amplified 16S rRNA gene or ITS fragments. The analyses were performed in an Ingeny PhorU 2 system (Ingeny, Goes, The Netherlands) according to Weinert et al. [33], and gels were silver-stained as described in Heuer et al. [34].
Cloning of dominant DGGE bands and sequencing. DGGE bands with higher relative abundance in +ARD treatments in comparison to -ARD and Control soils were excised from the DGGE gels. Between 4-8 bands from the replicates of the same treatment and with the same electrophoretic mobility were pooled together, and DNA fragments were extracted as described by Babin et al. [35], except that the gel slices were smashed with the help of a sterile pipette tip. One microlitre of the extracted DNA was re-amplified using primer pair F984GC/R1378 (see above) and a DGGE was performed to confirm the electrophoretic mobility of the amplicons. A-tailing (using GoTaq polymerase), ligation into the pGEM-T vector and transformation into E. coli JM109 (Promega, Madison, WI, USA) was performed as described by Babin et al. [35]. Twenty positive clones per band were selected and screened as explained by Smalla et al. [36]. Plasmids with the correct insert were extracted with the GeneJET Plasmid Miniprep Kit (Thermo Scientific) as recommended by the manufacturer. Plasmid inserts were sequenced in both directions with standard primers SP6 and T7prom (Macrogen, Amsterdam, Netherlands). Closest relative identification was carried out with the BLASTN search tool using the Reference RNA gene sequences database of the National Center for Biotechnology Information (NCBI, USA).
Statistics
Differences between means for either side of the split-root boxes were tested by t-test following tests for normal distribution and equal variance. For comparison between the four treatments (-ARD/-ARD; +ARD/+ARD; -ARD/+ARD; Control/+ARD) a one-factorial ANOVA followed by Tukey's test for pairwise comparison was conducted. 15N isotope abundance was investigated by crosswise labelling the compartments of the treatment -ARD/+ARD with 15N nitrogen fertilizer instead of 14N nitrogen fertilizer. The four replications established for each labelling treatment (15N-ARD/14N+ARD; 14N-ARD/15N+ARD) were analysed separately for 15N abundance (n = 4) and were pooled for all other parameters (n = 8).
For the evaluation of root length determination, derived from scanning split-root boxes versus destructive sampling, Pearson correlation coefficient was calculated across all compartments and treatments. For this part of statistics SigmaPlot 11.0 statistics tool was used.
DGGE fingerprints were analysed with the software GelCompar II 6.6, and DGGE profiles were compared pairwise, for each gel, by Pearson correlation indices. The resulting Pearson similarity coefficients were used for the construction of dendrograms based on the Unweighted Pair Group Method with Arithmetic mean (UPGMA) cluster algorithm. To test for significant differences between the fingerprints of the soil in the compartments (-ARD, +ARD or Control), a permutation test with 10,000 permutations according to Kropf et al. [37] was done with the Pearson similarity coefficients. The test provides dissimilarity values (d-values) that indicate the extent of the differences between the DGGE fingerprints of different soil variants. Differences in the qPCR data were revealed with a one-factorial ANOVA in conjunction with Tukey's HSD. For this the software R 3.23 in combination with the package agricolae was used. Prior to statistical analysis, bacterial 16S rRNA gene and fungal ITS fragment copy numbers were log-transformed.
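The permutation test of Kropf et al. [37] is not spelled out here, so the sketch below uses a common formulation in which the test statistic (d-value) is the difference between the mean within-group and mean between-group Pearson similarity, and the null distribution is obtained by permuting the sample labels; the authors' exact statistic may differ in detail.

```python
import numpy as np

def dgge_permutation_test(sim, labels, n_perm=10000, seed=1):
    """Permutation test on a pairwise Pearson similarity matrix.

    sim    : (n, n) symmetric similarity matrix from DGGE profile comparison
    labels : group label per profile (e.g. '+ARD', '-ARD', 'Control')
    Returns the d-value (mean within-group minus mean between-group
    similarity) and a permutation p-value.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    iu = np.triu_indices(len(labels), k=1)          # unique sample pairs only
    pair_sims = sim[iu]

    def d_value(lab):
        same = lab[iu[0]] == lab[iu[1]]
        return pair_sims[same].mean() - pair_sims[~same].mean()

    observed = d_value(labels)
    null = np.array([d_value(rng.permutation(labels)) for _ in range(n_perm)])
    p = (np.sum(null >= observed) + 1) / (n_perm + 1)
    return observed, p
```

Applied to, for example, a rhizoplane similarity matrix with '+ARD' and 'Control' labels, a large d-value with a small p-value would correspond to the significant fingerprint differences reported in the Results.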
Root growth
The split-root box approach appeared well suited for growth of 'M26' seedlings, enabling observation of roots throughout the 34-day growth period (Fig 1). The majority of roots was visible through the transparent front plate, and root length data derived from scanning the split-root box surface correlated well (r = 0.86) with the root growth measured destructively at harvest (S1 Fig). After a lag phase of about 13 days, root length increased and started to differ between treatments and between compartments within one treatment, if these had been filled with different substrates (Fig 2). The most pronounced differences were observed between the two compartments of treatment Control/+ARD, with much higher growth rates in Control compared to +ARD soil. For the treatment -ARD/+ARD there was a tendency for weaker growth in the +ARD soil compartment, but differences were not significant. The treatment with +ARD soil in both compartments showed significantly reduced root length compared to the treatment with -ARD in both compartments (Figs 2 and 3). Root length in the +ARD compartment of the treatments with -ARD or Control soil in the second compartment of the split-root box was intermediate (Fig 3).
Differences in root diameter classes among treatments and compartments within each treatment were not observed. More than 80% of the root length showed less than 0.5 mm in diameter, half of which was less than 0.25 mm (data not shown).
Shoot growth and N uptake
Shoot growth was poor for the treatment +ARD soil only (+ARD/+ARD) compared to -ARD (-ARD/-ARD) ( Fig 4A). For treatments -ARD/+ARD and Control/+ARD shoot growth was comparable to the treatment with -ARD/-ARD soil.
Similar results were observed for N uptake into the shoot, i.e. the product of biomass and N concentration in the tissue, which showed statistically significant differences (Fig 4B).
15N abundance and root function
Crosswise 15N labelling was conducted for the treatment -ARD/+ARD to derive information on the activity and functionality of roots grown in +ARD soil compared to those growing in -ARD soil. Higher abundance of 15N was observed in the roots, and in particular in the shoots, if 15N was applied to the -ARD soil as compared to +ARD (Fig 5). This was still the case if 15N uptake was normalized to root length in the respective compartment (data not shown). Interestingly, some of the 15N label was detected in the roots of the unlabelled compartment. The 15N data indicated that not only was the uptake activity of roots in -ARD soil higher, but also the redistribution of N within the plant was more efficient (Fig 5). Total N concentrations in shoot (1.91%) and root tissue (1.84%) did not differ for the crosswise labelled treatments (data not shown).
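How fertilizer-derived N is obtained from such measurements is standard isotope-dilution bookkeeping, which the text does not spell out. The sketch below illustrates it with placeholder numbers: the tissue N content, its 15N atom%, and the root length are hypothetical, and only the 98 at% fertilizer enrichment is taken from the Methods.

```python
def fertilizer_derived_n(total_n_mg, at_sample, at_fert=98.0, at_nat=0.3663):
    """N taken up from the 15N-labelled fertilizer (mg), by isotope dilution.

    total_n_mg : total N in the tissue (mg)
    at_sample  : measured 15N abundance of the tissue (atom %)
    at_fert    : 15N abundance of the K15NO3 fertilizer (98 at%)
    at_nat     : natural 15N abundance (atom %)
    """
    return total_n_mg * (at_sample - at_nat) / (at_fert - at_nat)

# Placeholder example: a shoot containing 20 mg N at 1.5 at% 15N,
# normalized to the root length of the labelled compartment.
n_from_label = fertilizer_derived_n(20.0, 1.5)
root_length_cm = 500.0
print(f"{n_from_label:.2f} mg N derived from the labelled fertilizer, "
      f"{n_from_label / root_length_cm * 1000:.2f} ug N per cm root")
```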
Microbiome analysis
Quantification of 16S rRNA gene and ITS fragment copy numbers. Real-time PCR results showed that the 16S rRNA gene copy numbers were higher in the rhizoplane than in rhizosphere and bulk soil for all treatments (Fig 6). In contrast, the ITS fragment copy numbers were higher in the bulk soil than in the rhizosphere and rhizoplane samples. Significant differences between treatments were detected only within the bulk soil samples (Tukey test, n = 4) as significantly lower copy numbers of 16S rRNA gene and ITS fragments were observed in -ARD soil compared to +ARD and Control.
DGGE. The bacterial community 16S rRNA gene fingerprints displayed a complex banding pattern. They showed, for all treatments, enrichments of some populations in the rhizosphere and in the rhizoplane indicating a lower evenness and richness, in particular in the rhizoplane samples (Fig 7). The comparison of the fingerprints for rhizosphere soil and rhizoplane, respectively, showed no significant differences when both compartments were filled with the same substrate (+ARD/+ARD or -ARD/-ARD) ( Table 1). However, significantly different fingerprints were observed between the respective rhizosphere and rhizoplane fingerprints, when the compartments were filled with different substrates (-ARD/+ARD and Control/+ARD). These differences in the microbiome composition were investigated for +ARD/Control (Fig 1) for different taxonomic groups (Fig 7; Table 2). Significant differences were observed between +ARD and Control fingerprints for rhizoplane, rhizosphere and bulk soil, respectively (Fig 7). A marked rhizosphere effect was observed in +ARD with some strong populations and three dominant bands were detected only in the rhizoplane of the +ARD but not in Control samples (see marked bands in Fig 7). In the rhizosphere of roots exposed to +ARD only one band was detected in all bacterial fingerprints that was not detected in Control soil. Interestingly, the differences between bacterial fingerprint of +ARD and Control were lowest in bulk soil, while those differences between the fungal fingerprints were highest in bulk soil (Fig 8; Table 2). In contrast to the bacterial fingerprints no enrichment of particular fungal populations was observed in the rhizoplane or rhizosphere, indicating a less pronounced rhizosphere effect for fungi (Fig 8). Indeed, most of the fungal fingerprints bands were shared between soil microhabitats. In the +ARD samples two distinct bands were detected in all samples which were not detected in the Control (see bands marked in Fig 8). In addition, a higher variability between replicates was observed. UPGMA revealed distinct separate clusters for fungal communities of +ARD and Control soil with rather high similarity of rhizoplane, rhizosphere and bulk soil samples. Fig 9 shows the DGGE fingerprints of rhizoplane samples of -ARD/-ARD, +ARD/+ARD, -ARD (-ARD/+ARD), +ARD (-ARD/+ARD), Control (Control/ +ARD) and +ARD (Control/+ARD). Interestingly, one band was detected in -ARD rhizoplane fingerprints only when the second compartment was +ARD. The fingerprints of the -ARD bulk and rhizosphere samples showed higher variability among replicates (data not shown).
Sequence analysis of differentiating DGGE bands. One of our goals was to identify bacterial populations more enriched in the rhizoplane of plants grown in +ARD soils in comparison to Control and/or -ARD soils (-ARD, -ARD; bands 1-5, Figs 7 and 9, marked by red arrows). Therefore, two additional DGGE fingerprints were carried out to excise these bands (S2 and S3 Figs; identical samples as shown in Figs 7 and 9). For all clones (except 5 and 7 derived from band 2), the closest NCBI hits belonged to the same genus but differed at species level. Sequences of clones 5 and 7 showed, in contrast, high similarity with species belonging to two different genera (Lysobacter and Pseudoxanthomonas) that are phylogenetically closely related. Bands 1 to 4, although having the same electrophoretic mobility, were excised and analyzed separately according to the treatment. A very intense band was detected (Fig 9) in +ARD soils but not in Control and -ARD soils (-ARD, -ARD). Interestingly, the same band was also highly enriched in -ARD soils when the neighboring compartment contained +ARD soil, suggesting a migration of a specific population from the +ARD soil to the -ARD soil. Different clones of this band excised from the different substrates/treatments were affiliated to the genus Streptomyces (97-98% identity). For some treatments even all clones were affiliated to Streptomyces spp.: +ARD (+ARD, -ARD), +ARD (Control, +ARD) (Table 3). However, in +ARD soils (+ARD, +ARD) additional sequences were affiliated to the genera Novosphingobium, Rhizobium and Pseudonocardia. Within -ARD soils (-ARD, +ARD), besides Streptomyces sequences, some clones were affiliated to genera like Sphingobium (3 of the 7 clones analysed) and Lysobacter (2 of the 7 clones analysed). Although the investigated band was mostly assigned to Streptomyces, other populations were assigned to other genera (Sphingobium, Rhizobium and Lysobacter spp.). In conclusion, the sequencing data suggest that different bacterial populations contributed to the very intense bands indicated in Fig 9. Band number 5 in Fig 7, which was present in the rhizoplane of plants grown in +ARD soils but not in the rhizoplane of the Control, was identified, based on the sequence of all three clones obtained, as Variovorax paradoxus (99-100% identity).
Discussion
The split-root approach enabled us to evaluate substrate-specific effects on root growth and shoot growth of the same plant. In the present experiment plants grown in +ARD/-ARD soil showed shoot growth comparable to plants grown in -ARD/-ARD only, while those from +ARD/+ARD rhizoboxes showed poor shoot growth. Obviously, spatial separation of +ARD and -ARD soil allowed reducing negative effects on shoot growth, while experiments using dilution of +ARD soil with sterilized soil (or soil never planted with apple) at similar or even higher rates (20 to 95% healthy soil) were not successful in overcoming ARD [2,15,18,20,39]. Also for root growth, compensation was observed in the -ARD/+ARD or Control/+ARD treatment, i.e. if one of the compartments was filled with -ARD soil or Control soil, root growth in the +ARD compartment was enhanced compared to +ARD/+ARD. From these results it can be concluded that the response of apple to ARD soil is not systemic, as mainly roots in direct contact with +ARD soil were strikingly affected in their growth and morphology as well as in their ability to take up 15N. Based on this observation we propose that only direct exposure to the +ARD microbiome and its metabolites, e.g. secreted molecules and volatiles, together with local plant defence responses, seemed to cause the changes in root morphology. This is in line with field observations as reported by Hoestra [2], showing that ARD mainly affects apple trees in the first years after planting; thereafter the roots grow deeper into soil layers less affected by ARD. It must be emphasized that this non-systemic response was observed despite strong indications that some bacterial populations (Streptomyces, Sphingobium) were detected in the -ARD compartment only when the other compartment contained +ARD soil (Fig 9). This finding might either be explained by a migration of the respective bacterial populations via the plant or by their enrichment caused by root metabolites in the -ARD compartment only when the other part of the apple roots was exposed to +ARD. Exchange between the two compartments of the split-root system was revealed by the increase of 15N abundance in the root tissue grown in the unlabelled compartment of treatment -ARD/+ARD. Whatever caused the enrichment of these populations, they themselves are likely not the causal agent of ARD, as the roots in the -ARD compartment were not affected.
Studies reported that the rhizosphere microbiomes of apple rootstocks or seedlings were significantly different when seedlings were grown in +ARD soil compared to heat-treated [8] or γ-irradiated soil [40]. However, in these studies different plant individuals were assessed. In the split-root experiments reported here we show that one individual plant exposed to different substrates showed drastic local changes of the rhizoplane and rhizosphere microbiome; with the exceptions mentioned above, however, these changes occurred only upon direct exposure.
We conclude that the mobility of the ARD-causing agents in soil is low and that they are unlikely to migrate within the root system. This is in line with the conceptual model developed by Emmett et al. [4], who investigated the relationship between root order, pathogen DNA abundance and phenolic profiles. They hypothesized that roots in primary development and transitioning to secondary development have the highest pathogen abundance, while plant chemical defences constitutively allocated to higher-order roots protect the vascular tissues and hence limit the spread throughout the root system.
In the split-root approach, shoot growth was only reduced if both parts of the root system were growing in +ARD soil. Root growth declined to a much lesser extent if the second half of
the root system was growing in -ARD or Control soil. However, the roots growing in +ARD soil, despite this growth compensation, still showed significantly lower nitrate uptake activity as indicated by the lower 15 N abundance in root and shoot tissue when the 15 N label was applied to the +ARD side of the treatment. This decrease of nitrate uptake activity is not a result of decreased nitrogen demand as shoot size was the same for the crosswise labelled treatments. Whether ARD is specifically inhibiting nitrate uptake or root uptake activity in general cannot be concluded from the present data. 15 N labelled nitrate was chosen in the first place as nitrogen requirement of young seedlings is high, and 15 N isotopes are stable and relatively easy to measure. Nitrate instead of ammonium was used to avoid any possible confounding effect with alterations of nitrification potential between +ARD and -ARD. It has been reported that roots growing in ARD soil show brownish discolouration, necrotic cortex and epidermis and only few root hairs [2,8,40]. Similar discolouration was observed in the present experiment. However, as reported by others [2] some root tips were still white and growing. In general, roots extracted from +ARD soil were more brittle, a phenomenon observed during our sampling procedure.
A more mechanistic explanation of the observed changes in root appearance comes from recent, comparative transcriptome analysis of roots from apple 'M26' grown in +ARD and -ARD soil reported by Weiß et al. [8]. Massive sequencing of cDNA ends (MACE) and RT-qPCR revealed that roots of apple plants exposed to ARD soil showed an up-regulated expression of genes coding for secondary metabolite production as well as plant defence, regulatory and signalling genes. This is in line with Yim et al. [40] proposing that damaged +ARD roots invest more energy in defence reactions. The observations of Weiß et al. [8] were similar to those of Shin et al. [41]. In their transcriptome studies they showed specific molecular response of apple roots to Pythium ultimum and identified genes involved in infection-induced production of pathogenesis-related proteins and several antimicrobial secondary metabolites. Moreover, ethylene, jasmonate and cytokinin signaling was indicated to play a role in the defence response [41]. The dark coloration of ARD roots is easily confounded with the red-brown coloration of apple root segments as they increase in age during ontogeny (Fig 1). High concentrations of phloridzin and quercetin, which may cause the discolouration, are typical of apple roots, and reduced growth rates may indirectly result in increased concentrations. Henfrey et al. [42] showed that phenolic compounds accumulated in plants exposed to +ARD. A potential role as an antioxidant substance was proposed by these authors. In addition, a higher level of the flavonoid phloridzin was found in root exudates of +ARD plants [3].
The changes in the root morphology and exudation patterns that occurred in response to the growth in +ARD soil likely shaped the rhizoplane and rhizosphere microbiome. At the same time a so-called soil memory effect [43] due to previous growth of apple might have shaped the soil microbiome and the abundance of likely resting stages of potential pathogens.
The DGGE fingerprints as well as the qPCR data indicated that sterilization by γ-radiation reduced bacterial and fungal diversity and abundances. The higher variability between replicates of DGGE fingerprints of -ARD is most likely due to the lower abundance of target sequences. The -ARD soil was not sterile at the time of sampling. The colonization of the -ARD/-ARD compartments might be due to plant-, air- or irrigation-derived microorganisms. An additional Control soil was used in this split-root experiment. Although the soil was taken from the same site, its microbial communities were likely influenced by grass growth. The importance of the soil with its associated microbiome for plant growth was most impressively observed when root growth in the compartments with Control soil was compared to the +ARD compartment.
The comparison of the DGGE fingerprints clearly revealed a strong enrichment of a few soil populations in the rhizosphere which was far more pronounced in the rhizoplane of apple plantlets. A reduced diversity in the rhizosphere was described for many plant species previously and it is readily accepted that root exudates shape the rhizosphere microbiome [22,44].
DGGE fingerprints are a straightforward tool to compare the fingerprints of bacteria and fungi present in the different microhabitats (bulk soil, rhizosphere and rhizoplane) as well as in the different substrates (+ARD, -ARD, Control). Sequencing of bands from the bacterial DGGE fingerprints (Fig 7 and Fig 9) indicated that diverse bacterial populations contributed to the differentiating band that was strikingly increased in abundance in the +ARD compartment. In all rhizoboxes with +ARD, Streptomyces (Actinomycetales) were detected as the dominant population contributing to the differentiating band. Actinomycete-like microorganisms were reported previously to be constantly present not only in the root lesions of apple seedlings grown in ARD soils but also to inhabit the margins where the lesion begins [45], being able to invade the cortical layers up to the endodermis, suggesting a pathogenic behaviour [45]. In contrast, the less intense but clearly differentiating band was only represented by Variovorax. Interestingly, bacteria belonging to Variovorax were isolated more frequently from apple grown in ARD soil in comparison with plants grown in control soils [46]. In conclusion, the data suggest that plants show no systemic response to ARD, although transport and/or exchange between the two parts occurred as indicated by the 15N and microbial data. The split-root experiment provided a better understanding of the close links between soil, microbiome and plant.
Table 3. Putative phylogenetic affiliation of 16S rRNA partial gene sequences (V6-V8 region) from the rhizoplane of the bands excised from DGGE (Figs 7 and 9).
Supporting information
S1 Table. Primers used in this study for DGGE and qPCR analyses.
Prospects for the Development of Pink1 and Parkin Activators for the Treatment of Parkinson’s Disease
Impaired mitophagy is one of the hallmarks of the pathogenesis of Parkinson's disease, which highlights the importance of the proper functioning of mitochondria, as well as of the processes of mitochondrial dynamics, for the functioning of dopaminergic neurons. At the same time, the main factors leading to disruption of mitophagy in Parkinson's disease are mutations in the Pink1 and Parkin enzymes. Based on the characterized mutant forms, the known cellular localization, and the level of expression in neurons, these proteins can be considered promising targets for the development of drugs for Parkinson's therapy. This review considers mitophagy activators as a class of drug compounds and their use in the treatment of Parkinson's disease.
Introduction
Parkinson's disease (PD) is a progressive neurodegenerative disease that predominantly affects dopaminergic (dopamine-producing) neurons in the substantia nigra [1]. PD is the second most common neurodegenerative disease after Alzheimer's disease (AD), with an incidence of approximately 0.5-1% among persons aged 65-69 years and up to 1-3% among persons aged 80 years and older [2]. It is assumed that with the increase in the general age of the population, the prevalence of PD will increase by more than 30% by 2030, which will lead to various costs for global medicine and the economy [3].
The pathogenesis of PD is primarily characterized by a loss of nigrostriatal dopaminergic innervation, although the resulting neurodegeneration is not limited to substantia nigra dopaminergic neurons alone, but can also affect other neurons located in different areas of the central nervous system [2]. One of the important problems in the fight against the spread of PD is the difficulty of diagnosing this disease. Currently, there are no effective diagnostic tests. Thus, it is not possible to diagnose the development of PD before the onset of clinical symptoms, which also complicates the treatment of the disease: at the time of diagnosis, 60 to 80% of dopaminergic neurons are already affected [4].
The complexity of PD is additionally due to the multifactorial and not completely clear etiology of this disease, which includes both genetic and environmental factors. It has been reliably noted that the average age of onset of the disease is 60 years, and the prevalence of PD also increases with age [5]. Currently, more than 20 genes have been discovered in which mutations are potentially associated with the development of PD; these are transmitted both by autosomal dominant inheritance (genes such as SNCA and LRRK2) and by autosomal recessive inheritance (genes such as PRKN, PINK1 and DJ-1) [2]. At the same time, there are studies showing the possible role of non-genetic factors in the development of PD. Thus, it has been proven that the protoxin 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP), which is used in the production of herbicides, causes PD in humans and monkeys [6]. Studies [7-9] have shown an inverse relationship between smoking and the risk of developing PD. The exact reason for the relationship between smoking and PD is not completely clear, but one hypothesis is based on the observation that activation of nicotinic acetylcholine receptors on dopaminergic neurons by nicotine or selective agonists has a neuroprotective effect, which has been demonstrated in experimental models of PD. Another study showed an inverse relationship between the level of caffeine consumption and the development of PD [10].
The main drug in the treatment of PD is Levodopa, an isomer of the amino acid from which dopamine is synthesized. It is able to penetrate the blood-brain barrier, thus restoring the lack of dopamine in patients with PD [11]. To increase the effectiveness of Levodopa, it is often taken together with carbidopa, a drug that prevents the peripheral metabolism of Levodopa, that is, protects it from being converted into dopamine outside the brain [11]. Additional drugs used in PD are dopamine agonists, which are not directly converted to dopamine but mimic its effects (pramipexole, ropinirole) [11]. The third group of drugs are inhibitors of the enzymes involved in dopamine degradation (monoamine oxidase B and catechol-O-methyltransferase), which include selegiline, rasagiline, safinamide, entacapone and opicapone [12]. Despite the fact that these drugs help to alleviate the condition of patients with PD, they are not able to stop the progression of the disease. The search for compounds that can do so is a promising direction in the fight against PD. In this review, we consider the potential therapeutic efficacy of Pink1 and Parkin protein activators (two of the key targets in the pathogenesis of PD) in the treatment of PD.
General Mechanism of Mitophagy
Mitophagy is the process of selective destruction of damaged and dysfunctional mitochondria [13]. It is a complex process characterized by successive stages: initiation of mitophagy with labelling of protein targets selected for destruction on the mitochondria, engulfment of the mitochondrion through fusion with the autophagosome, and final sequestration in lysosomes, where hydrolytic degradation of the mitochondrion occurs [13]. On the basis of the proteins involved, mitophagy is subdivided into PINK1/Parkin-dependent mitophagy (which will be discussed below) and receptor-mediated mitophagy, which is activated in response to external stimuli through proteins of the outer mitochondrial membrane that act as mitophagy induction receptors: BNIP3L (BCL2 interacting protein 3 like), FUNDC1 (FUN14 Domain Containing 1) and others. The two types of mitophagy are also initiated in response to different stimuli [14].
PINK1 and Parkin are the best-known proteins that directly regulate the quality of working mitochondria. PINK1 is a serine/threonine kinase, and Parkin is a cytosolic ubiquitin E3 ligase [15]. In healthy mitochondria, which are not subject to destruction, the N-terminus of the PINK1 protein molecule is imported towards the inner mitochondrial membrane while interacting with the TOM/TIM protein complex. The efficiency of transport of the N-terminal part of the PINK1 molecule depends on the mitochondrial membrane potential; at the same time, the C-terminus of the PINK1 molecule, which contains the kinase domain, is directed towards the cytoplasm. On the inner membrane of healthy mitochondria, PINK1 is partially degraded by the mitochondrial processing peptidase (MPP) and the presenilin-associated rhomboid-like protease (PARL). The initially intact PINK1 domain with kinase activity then undergoes proteasomal degradation in the cytosol [16]. Stress factors, including depolarization of the mitochondrial membrane, a dysfunctional state of the ETC (electron transport chain) complexes of the inner mitochondrial membrane, and an increased mutation load of mitochondrial proteins, inhibit the degradation of PINK1 and lead to the accumulation of intact molecules of this protein on the outer mitochondrial membrane, because the transfer of the N-terminal domain of PINK1 towards the inner mitochondrial membrane is disrupted [17]. PINK1 molecules located on the outer mitochondrial membrane undergo homodimerization, which leads to autophosphorylation and, as a result, promotes the activation of kinase activity and improves binding to the key PINK1 substrates: Parkin and ubiquitin [17]. Due to properties such as a rapid increase in the number of molecules on the outer mitochondrial membrane and the ability to be activated in response to mitochondrial stress, PINK1 is an effective sensor of mitochondrial damage and mitochondrial dysfunction.
Parkin, as mentioned above, is a ubiquitin E3 ligase and contains a ubiquitin-like domain and four RING domains which, through intramolecular interactions, block the active site and compete for binding to the E2 ligase [15]. Upon mitochondrial damage or mitochondrial dysfunction, PINK1 activates Parkin in two different but related ways: in the first case, activation occurs through binding to ubiquitin and its subsequent phosphorylation at the Ser65 position, which then interacts with Parkin and activates it; the second variant is based on the direct interaction of these two mitophagy enzymes: PINK1 directly phosphorylates Parkin at position Ser65 in the Parkin ubiquitin-like domain, which leads to conformational changes in the Parkin protein and allows it to interact with the E2 ligase, which in turn triggers the ubiquitination reaction [17]. Parkin thus functions as an amplifier of the mitochondrial damage signal from PINK1, increasing the number of ubiquitin molecules on the mitochondrial membrane, which results in the recruitment of even more Parkin molecules to the mitochondria. Once recruited to mitochondria, Parkin labels with ubiquitin various mitochondrial proteins located in different parts of the mitochondria [18]. The appearance of a large number of ubiquitin chains serves as a signal to attract and bind to the mitochondrial surface autophagy mediators such as OPTN (Optineurin), NDP52 (nuclear dot protein 52 kDa), RABGEF1 (RAB Guanine Nucleotide Exchange Factor 1), RAB7A (Ras-related protein Rab-7a) and RAB5 (Ras-related protein Rab-5A), which are also called adapter proteins. Light chain 3 (LC3) of microtubule-associated protein 1 recognizes and interacts with the adapter proteins indicated above, which subsequently leads to the formation of mitophagosomes, in which dysfunctional mitochondria undergo final degradation after the fusion of the mitophagosomes with lysosomes [19]. There is evidence that a number of naturally occurring activator proteins for PINK1 and Parkin exist in the cell. These proteins, as well as their synthetic derivatives, can be considered as possible drug options for the treatment of Parkinson's disease, which will be discussed in the following sections [20].
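The amplification loop described in this paragraph (PINK1 accumulation on a depolarized mitochondrion, ubiquitin phosphorylation, Parkin recruitment, more ubiquitin chains, more Parkin) can be caricatured as a small positive-feedback model. The sketch below is a toy illustration with arbitrary rate constants and units; it is not a published kinetic model of mitophagy.

```python
import numpy as np

def parkin_recruitment(depolarized, t_end=100.0, dt=0.05):
    """Toy positive-feedback caricature of PINK1/Parkin signalling (arbitrary units).

    PINK1 accumulates on the outer membrane only while the mitochondrion is
    depolarized; phospho-ubiquitin recruits Parkin, and recruited Parkin adds
    further ubiquitin, which recruits more Parkin (the amplification loop).
    """
    pink1, ub, parkin = 0.0, 0.0, 0.0        # parkin = fraction of the pool recruited
    for _ in np.arange(0.0, t_end, dt):
        d_pink1 = (1.0 if depolarized else 0.0) - 0.1 * pink1     # import arrest vs. cleavage/degradation
        d_ub = 0.05 * pink1 * (0.1 + parkin) - 0.1 * ub           # phospho-Ub, boosted by recruited Parkin
        d_parkin = 0.01 * pink1 * ub * (1.0 - parkin) - 0.05 * parkin
        pink1 += d_pink1 * dt
        ub += d_ub * dt
        parkin += d_parkin * dt
    return round(pink1, 2), round(ub, 2), round(parkin, 2)

print("polarized (healthy):  ", parkin_recruitment(False))   # everything stays near zero
print("depolarized:          ", parkin_recruitment(True))    # most of the Parkin pool is recruited
```

The qualitative point of the toy model is simply that, with such a feedback, a small amount of stabilized PINK1 is enough to drive near-complete Parkin recruitment, whereas a polarized mitochondrion recruits essentially none.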
The Role of Mitophagy Disorders in the Development of Parkinson's Disease
The fundamental function of mitochondria is the production of ATP. Additional functions of mitochondria are participation in the metabolism of fatty acids and amino acids, the formation of cofactors and coenzymes, including NADH and FADH2, regulation of Ca2+ homeostasis and initiation of intrinsic apoptosis [21,22]. Mitochondria are not static organelles; they are subject to mitochondrial dynamics, which includes fusion, fission, intracellular transport, and mitophagy. These processes require the coordinated work of a large number of specialized proteins, mutations in which can disrupt mitochondrial dynamics and, as a result, lead to the development of various diseases, including PD [22].
The efficient work of mitochondria is especially important for maintaining the vital activity of neurons. Neurons have a heterogeneous, polarized structure, the main elements of which are the cell body, the axon and the dendrites, through which nerve impulses are received and transmitted. Such a structure creates different demands for ATP in different parts of the neuron, and the greatest energy costs occur at the sites where the nerve impulse is transmitted, the presynaptic and postsynaptic terminals, which requires the localization of a large number of functional mitochondria in these compartments [23]. The energy of ATP is used for important processes occurring in neurons: the mobilization of synaptic vesicles, the formation of an actin cytoskeleton for efficient transport of neurotransmitters and organelles within the neuron, the creation of an electrochemical potential for synaptic transmission of nerve impulses, the uptake and recycling of neurotransmitters, and the regulation of Ca2+ dynamics [23,24].
The need for different concentrations of working mitochondria in different compartments of the neuron, depending on their energy and metabolic needs, requires the correct and coordinated work of the processes of mitochondrial dynamics. Mitochondrial transport is required for the delivery of mitochondria to distant regions of the neuron and back to the soma; it is subdivided into retrograde and anterograde transport. The fusion of mitochondria ensures the creation of new, healthy and larger mitochondria, because when functional and dysfunctional mitochondria are combined, healthy mitochondria are formed [25]; the negative side of fusion is that the total number of mitochondria in the cell decreases. Mitochondrial fission most often precedes mitophagy; provided that damage is unevenly distributed, fission also yields healthy mitochondria in addition to dysfunctional ones [25]. Mitophagy, which occurs predominantly in the body of the neuron, leads to the destruction of "expired" mitochondria, thus increasing the proportion of functional mitochondria in the cell and preventing the risks for the cell associated with an increased concentration of dysfunctional mitochondria, including the initiation of apoptosis and the development of an inflammatory response.
In the pathogenesis of PD, the occurrence of mitochondrial dysfunction has been proven, especially associated with a defect in complex I of the mitochondrial respiratory chain, which produces up to 40% of the proton gradient for ATP synthesis, and is also the main source of ROS (reactive oxygen species), which is formed as a by-product of electron transfer reactions in respiratory chain [26]. The role of mutations in the mitophagy proteins Pink1 and Parkin has also been proven in the development of PD, which leads to disruption of mitophagy and, as a result, to the accumulation of dysfunctional mitochondria in the neuron body [27]. What does this mean for dopaminergic neurons? Mitochondrial dysfunctions, concentrated in one mitochondrion, accumulate in many mitochondria, which leads to the generation of a pathological condition. Firstly, the energy and metabolic balance inside the cell is disrupted due to disruption of the process of oxidative phosphorylation and, as a result, a decrease in ATP production, which over time leads to energy depletion of the neuron. Secondly, mitochondrial dysfunction causes an increase in the production of ROS, which are stress signaling molecules that cause damage to biological macromolecules (which further enhances mitochondrial dysfunction in case of damage), the development of an inflammatory response, which is noted during neurodegeneration, and the initiation of apoptosis, which leads to death of dopaminergic neurons [28,29]. In addition, the direct participation of the Pink1 protein in the inhibition of the progression of PD was noted through interaction with α-synuclein, the accumulation of which in neurons leads to the development of neurotoxicity and the initiation of apoptosis. The interaction between PINK1 and α-synuclein reduced the accumulation of α-synuclein in neurons. At the same time, as a result of the introduction of the G309D mutation into the gene encoding PINK1, this interaction was terminated [30]. A relationship was also shown between the mitophagy regulator Pink1 and the regulator of mitochondrial division DRP1 [31]. Pink1 initiated phosphorylation of DRP1 in S616, which led to the activation of mitochondrial division. In samples of patients with PD with mutations in Pink1, a decrease in DRP1 phosphorylation was noted, which led to an elongation of mitochondria in neurons and a decrease in their number. This, in turn, can reduce the transport capacity of mitochondria to move to the required cellular compartments and lead to a lack of mitochondria, energy deficiency and subsequent initiation of apoptosis. The general scheme of mitophagy involvement in the pathogenesis of PD is shown in Figure 1.
Characterized Mutant Forms of Mitophagy Proteins
The presence of characterized mutations, with an understanding of the protein dysfunctions they cause, is an important criterion for the preliminary evaluation of molecular targets for which drugs will be developed. There are several key mutations in the PINK1 and Parkin proteins that lead to the development of PD. Thus, it was shown that the PINK1-I368N mutant could not bind to the outer mitochondrial membrane because of conformational changes in its enzymatic center, which blocked the very first stage of mitophagy [32]. The homozygous nonsense mutation PINK1 p.Q456X introduces a premature stop codon and decreases the level of PINK1 mRNA, resulting in the formation of completely dysfunctional PINK1 protein [33]. The p.G411S mutant variant forms dimers and can even localize to the outer mitochondrial membrane; however, a partial loss of kinase activity prevents it from performing its function completely [34]. PINK1 phosphorylates Parkin in the region of the ubiquitin-like (UBL) domain, which changes its conformation to an open and active one [35]. Mutations in the UBL domain (G12R, R33Q, and R42P) were found to decrease Parkin phosphorylation, which in turn impaired Parkin activation [36]. Two other substitutions in the UBL domain, G12R and T55I, lead to auto-ubiquitination of Parkin, resulting in its degradation [36]. It should be noted that the dysfunctions of PINK1 and Parkin in PD are related not only to fixed mutations. For example, disruption of mitophagy can result from the accumulation of S-nitrosylated PINK1 (SNO-PINK1), an aberrant post-translational modification that inhibits the kinase activity of this enzyme; increased production of SNO-PINK1 was found to cause neuronal death [37]. Mutant forms and post-translational modifications of Pink1 and Parkin and their pathological mechanisms are summarized in Table 1.
Level of Expression of Mitophagy Proteins in Neurons
Effective targeting of the selected protein requires it to be highly expressed in the pathological cell population, which provides a high concentration of the protein in the cell and a greater likelihood of drug binding. In a mammalian cell culture study, dopaminergic neurons were found to overexpress the PINK1 and Parkin proteins [38]. However, this study was conducted on healthy neurons that did not show signs of degeneration. A number of studies using cell culture models suggest that PINK1 may play a neuroprotective role under some forms of stress, because overexpression of wild-type PINK1, but not mutant PINK1, protects against cell death caused by chemical stressors such as neurotoxins [39][40][41]. Most likely, in response to increased oxidative stress and depolarization of the membranes of dysfunctional mitochondria, the expression of mitophagy proteins is further increased; however, since mutant proteins cannot perform their proper functions, they accumulate as "dead weight" and can serve as good drug targets, with the exception of those mutant forms that are rapidly degraded.
Labeled Cellular Localization of Mitophagy Proteins
Knowledge of the predominant intracellular localization of therapeutic targets is an important step in drug development. The localization of PINK1 and Parkin differs: PINK1 is a mitochondria-targeted protein, while Parkin resides predominantly in the nucleus, can also be located in the cytoplasm, and is recruited to mitochondria immediately after the initiation of mitophagy by the PINK1 signal [42][43][44]. Therefore, therapeutic compounds directed at PINK1 and Parkin should have the same preferential localization as their targets. However, the submitochondrial localization of PINK1 should also be taken into account, as it depends on the level of mitophagy in the cell [45]. Thus, in normally functioning dopaminergic neurons, where high energy demands require rapid renewal of mitochondria, PINK1 is localized at high concentration on the outer mitochondrial membrane, because its import into mitochondria is blocked, and there it serves as a signal for the activation of mitophagy reactions. In pathological dopaminergic neurons, PINK1 can accumulate in the mitochondrial matrix when mutations affect its N-terminus; however, if mutations do not impair the ability of PINK1 to bind to the outer mitochondrial membrane, it remains predominantly localized there, especially given the high demand for mitophagy that goes unmet. For two Parkin mutants associated with the development of PD, Parkin R42P and Parkin G430D, import into the nucleus was shown to be inhibited, indicating a predominantly cytoplasmic localization of these mutant forms; this does not, however, rule out nuclear localization of other Parkin mutants [46]. Additionally, given the increased need for mitophagy in pathological neurons in PD, Parkin may concentrate on the outer mitochondrial membrane provided that PINK1 does not carry a dysfunctional mutation.
Current State of Development of Pink1 and Parkin Activators
Since deficient Pink1 and Parkin function is one of the factors in the development of PD, the development of pharmaceutical compounds that can restore the normal level of mitophagy, and thereby potentially help halt neurodegenerative processes, is a promising direction in the fight against PD. Despite the well-established role of Pink1 and Parkin dysfunction in the pathogenesis of PD, there are currently only a few preclinical studies of compounds that activate these proteins, so it will not be possible to state what effect this therapy has in patients with PD for at least another 10-15 years. An important issue to resolve before such drugs enter clinical trials is the assessment of the toxicity of the selected compounds and confirmation that mitophagy is not excessively enhanced, which could also impair cell viability.
Mitophagy activators can be conditionally divided into three groups: Pink1 activators, Parkin activators, and inhibitors of ubiquitin-specific protease 30 (USP30), whose function is the inverse of Parkin's [47]. One promising Pink1 activator is kinetin, a precursor of the energy substrate kinetin triphosphate (KTP), an analog of ATP. In a study in a human neuron model, Pink1 was shown to use KTP with higher catalytic efficiency than ATP. Administration of kinetin increased the activity of both wild-type PINK1 and the mutant PINK1 G309D and also inhibited neuronal apoptosis [48]. Kinetin had previously been shown to be well tolerated in humans and was found to freely cross the blood-brain barrier in a mouse model [49]. However, in a later study in rodent models, kinetin did not protect against α-synuclein-induced neurodegeneration, suggesting the need for more comprehensive preclinical studies [50]. There are also reports on the effectiveness of natural compounds isolated from plant tissues. One such compound is celastrol, which reduced MPP+-induced death of dopaminergic neurons, reduced mitochondrial membrane depolarization, and increased ATP production in a cellular model of PD. In a mouse model, celastrol administration had a restorative effect on the motor symptoms of PD, slowed neurodegeneration in the substantia nigra, and enhanced mitophagy. Celastrol was shown to enhance the expression of PINK1 and several other proteins that are suppressed in PD [51]. Similar effects were also shown by another natural compound, salidroside [52]. No in vivo studies of the effectiveness of Parkin activators have yet been carried out; however, patented compounds have demonstrated Parkin activation in vitro, providing a basis for further development of this line of research [53]. Mediated activation of mitophagy through inhibition of USP30 may also be a promising option in the treatment of PD. USP30 is a convenient target located predominantly on the outer mitochondrial membrane; it removes ubiquitin residues from labeled mitochondria, thus preventing mitophagy [47,54]. To date, several highly selective USP30 inhibitors have been identified that have shown effectiveness in cell cultures by increasing the levels of ubiquitination and mitophagy [47]. However, the question of the effectiveness of these developments in in vivo studies remains open. Therapeutic strategies to restore mitophagy in PD are presented in Table 2.
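For reference, the "catalytic efficiency" compared in the kinetin study is the standard enzymological ratio of the turnover number to the Michaelis constant; the reported preference of PINK1 for KTP over ATP can be summarized by the inequality below (the underlying kinetic constants are not reproduced here).

\[
\text{catalytic efficiency} = \frac{k_{\mathrm{cat}}}{K_{\mathrm{M}}}, \qquad
\left(\frac{k_{\mathrm{cat}}}{K_{\mathrm{M}}}\right)_{\mathrm{PINK1+KTP}} > \left(\frac{k_{\mathrm{cat}}}{K_{\mathrm{M}}}\right)_{\mathrm{PINK1+ATP}}
\]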
Evaluation of the Prospects for the Development of Pink1 and Parkin Activators for the Treatment of Parkinson's Disease
The development of drugs based on Pink1 and Parkin activators is a new direction that is currently still in its infancy, but it has the potential for further development. This is especially true for Pink1 activators, whose effectiveness has already been tested in animal models. However, it should be taken into account that before the developed drugs are used, patients with PD should undergo genetic screening for Pink1 and Parkin mutations, since a therapeutic effect in the treatment of PD is potentially possible only for patients carrying these mutations. A more detailed analysis of the advantages and disadvantages of developing Pink1 and Parkin activators for the treatment of PD is presented in Table 3.
Table 3. Main advantages and disadvantages of the development of Pink1 and Parkin activators for PD therapy.
Factor: Investment attractiveness
Advantages: —
Disadvantages: Approximately 5-10% of PD patients have monogenic forms of the disease, and mutations in the genes encoding Pink1 and/or Parkin account for 1-9% of all genetic PD. Considering the low percentage of subjects bearing these mutations, investing money and time in the development of novel mitophagy activators does not appear very attractive (a rough worked estimate of the combined share follows the table).

Factor: State of development
Advantages: Studies have shown the ability of potential drugs to reduce neuronal degeneration, which is a prerequisite for efficacy in the treatment of PD.
Disadvantages: (1) For Parkin activators and USP30 inhibitors, only results of single in vitro studies are available; for Pink1 activators, results in animal models are available. (2) Due to the lack of ongoing clinical trials, the potential entry of drugs onto the market will not occur earlier than in 10-15 years.

Factor: Choice of active compound
Advantages: (1) Small molecules can be identified relatively quickly by in silico methods. (2) Small molecules are more likely to reach the target location of the mutant protein. (3) There is a high probability of finding an active substance among natural compounds, which would simplify the production process.
Disadvantages: (1) High toxicity of synthetic compounds for humans is possible. (2) An accurate determination of the effective dose of the active substance is required in order to avoid excessive activation of mitophagy.
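As a rough worked illustration of the "investment attractiveness" figures quoted in Table 3: if the two quoted ranges are simply combined multiplicatively (an assumption made only for this back-of-the-envelope estimate, since the table does not state how the percentages relate), the implied share of all PD patients carrying Pink1/Parkin mutations is small.

```python
# Back-of-the-envelope estimate (illustrative only) of the share of all PD
# patients who might carry Pink1/Parkin mutations, combining the two ranges
# quoted in Table 3. Treating "1-9% of all genetic PD" as a fraction of the
# monogenic 5-10% is an assumption made purely for illustration.
monogenic_share = (0.05, 0.10)      # fraction of PD patients with monogenic forms
pink1_parkin_share = (0.01, 0.09)   # fraction of genetic PD due to Pink1/Parkin

low = monogenic_share[0] * pink1_parkin_share[0]
high = monogenic_share[1] * pink1_parkin_share[1]

print(f"Estimated share of all PD patients: {low:.2%} to {high:.2%}")
# -> roughly 0.05% to 0.90% of all PD patients under these assumptions
```

Even under this crude estimate, the target population is on the order of a fraction of a percent of all PD patients, which is the basis of the investment-attractiveness concern noted in Table 3.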
Discussion
Developing small-molecule Pink1 and Parkin activators is the most practical option for bringing such drugs to patients relatively quickly. Some studies have focused on a different approach: increasing the expression of Pink1 and Parkin by introducing viral vectors containing the wild-type genes of these proteins [55]. In animal models, this gene augmentation reduced neurodegeneration, improved locomotor activity, and increased survival, but the approach seems far more distant and less realistic for clinical application in the treatment of PD.

A potentially effective route to finding Pink1 and Parkin activators is in silico modeling, facilitated by the availability of detailed, atomic-resolution structures of the open and closed conformations of the Parkin molecule [56]. In addition to drug screening, in silico methods can help identify new upstream regulators of mitochondrial dysfunction in PD, as was shown in the identification of ATF4, a regulator of the transcriptional changes observed in Pink1 and Parkin mutants [57].

An issue requiring attention is the selection of a suitable system of markers for assessing mitophagy in vivo, directly in neurons. Methods for studying mitophagy are applied effectively in cell culture, but studies in animal models often give unsatisfactory results, primarily because in vivo detection requires greater sensitivity than in vitro detection. One study, for example, compared two markers for their effectiveness in monitoring mitophagy in vivo [58].

In addition to Pink1 and Parkin, other proteins whose mutations lead to the development of PD are also considered as therapeutic targets: alpha-synuclein, DJ-1, VPS35, LRRK2, and others [26]. Many of these proteins are also directly or indirectly involved in mitophagy. The development of combination drugs aimed at normalizing the function of several key mutant proteins could help patients who carry more than one mutation associated with the pathogenesis of PD. It is also necessary to study the effect of Pink1 and Parkin activators on other features of neuronal pathology in PD beyond protection against neurodegeneration, namely the reduction of neuroinflammation and the normalization of energy status with restoration of ATP production. In contrast to similar reviews, for example [59], we have carried out a more detailed analysis that includes not only a description of Pink1 and Parkin and the currently available therapeutic developments, but also an assessment of Pink1 and Parkin as therapeutic targets based on their properties, as well as an evaluation of the prospects for developing mitophagy protein activators for the treatment of PD.
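To make the in silico screening route mentioned above more concrete, the following sketch shows what a minimal descriptor-based prefilter of candidate small molecules might look like before structure-based docking against the resolved Parkin conformations. It is illustrative only: the use of RDKit, the Lipinski-style thresholds, and the placeholder molecules are our assumptions and are not taken from the cited studies.

```python
# Illustrative sketch only: a descriptor-based prefilter that might precede
# docking of candidate Parkin/Pink1 activators against a resolved structure.
# RDKit usage and the chosen thresholds are assumptions for this example.
from rdkit import Chem
from rdkit.Chem import Descriptors

# Placeholder SMILES; real screening would use a large compound library.
candidates = {
    "caffeine (placeholder)": "CN1C=NC2=C1C(=O)N(C)C(=O)N2C",
    "aspirin (placeholder)": "CC(=O)OC1=CC=CC=C1C(=O)O",
}

def passes_prefilter(mol):
    """Simple Lipinski-style filter for oral small-molecule candidates."""
    return (
        Descriptors.MolWt(mol) <= 500
        and Descriptors.MolLogP(mol) <= 5
        and Descriptors.NumHDonors(mol) <= 5
        and Descriptors.NumHAcceptors(mol) <= 10
    )

for name, smiles in candidates.items():
    mol = Chem.MolFromSmiles(smiles)
    if mol is not None and passes_prefilter(mol):
        print(f"{name}: passes prefilter, forward to docking")
    else:
        print(f"{name}: filtered out")
```

A prefilter of this kind only narrows the candidate pool; whether any surviving molecule actually activates Pink1 or Parkin would still have to be established by docking, biochemical assays, and the cell and animal studies discussed above.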
Conclusions
Effectively combating the progression of Parkinson's disease requires the development of new drugs that can greatly slow down or stop neurodegeneration. Among the promising targets for new therapeutic agents are the main mitophagy enzymes, Pink1 and Parkin. Mutant forms of Pink1 and Parkin, as well as the associated dysfunctional states of these proteins, are well characterized. To date, there are several promising compounds that act as mitophagy activators and have been shown to protect dopaminergic neurons from degeneration in cell culture and animal models. However, clinical use of these drugs cannot be expected for another 10-15 years, even if the studied compounds pass all stages of clinical trials. Mitophagy activators could then be used to treat Parkinson's disease in patients carrying mutations in the Pink1 and Parkin proteins.
Conflicts of Interest:
The authors declare no conflict of interest.